Hacker News: Fr0styMatt88's comments

What I’m curious about is what led to larks ‘winning’, in the sense that there’s this massive prejudice against night owls.

Though I have heard that there are natural biological functions tied to the sun, such that night owls who sleep in their natural pattern are STILL more predisposed towards certain physical/mental conditions. But who knows?


I _wish_ I could nap after hearing all the benefits, but for me it’s either doze off for five minutes and wake up feeling blah, or lie there for 10 minutes, MAYBE go to sleep for a bit and wake up feeling horrible.

If I have light to anchor my circadian rhythm, I’m happiest waking up around 5:30-6:00 and going straight through, starting to wind down at 8:30.

If I sleep later, I’ll shift towards naturally waking up around 10:30 and going to bed at 11:30 PM, generally feeling not terrible but not great, and slightly tired the entire day.

Luckily the light that wakes me up can be artificial; I use smart bulbs as an alarm.


If it doesn't come naturally don't worry about it. Not everyone is the same. It sounds like the most beneficial thing for you is probably what you're already doing.

For myself, my attentiveness and energy tend to slump later in the day if I don't nap, so obviously I'm better off the other way around from you.


> I _wish_ I could nap after hearing all the benefits, but for me it’s either doze off for five minutes and wake up feeling blah, or lie there for 10 minutes, MAYBE go to sleep for a bit and wake up feeling horrible.

You just need to get used to it, then you will feel horrible if you miss the nap. :)


Have you tried the screen zoom in accessibility settings? The responsiveness is great with the trackpad gestures.

I haven't... I always hated Windows' full-screen zoom, but I regularly use the triple-tap zoom on Android. I'll look into it next time.

From a quick Google that kinda makes sense: it's the strong, _sustained_ power draw that gives them issues. So I'd say it's both fundamental AND inverter design: imagine pushing 2kW continuously through an inverter.

It’s funny, power use can be really unintuitive. Try convincing someone that using the big air conditioner for heating is more efficient than using that little plug-in bar heater. Or that a power board with 8 low-wattage wall-warts isn’t using a lot of power.

I could probably run my big fridge overnight off my portable battery generator, but it won’t run my small electric kettle without being put into a special mode, and nowhere near as long.
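To put rough numbers on the heater comparison above: a reverse-cycle air conditioner is a heat pump, so it moves heat rather than generating it, delivering several units of heat per unit of electricity. A minimal sketch, where the COP of 3 and the heat demand are illustrative assumptions, not measured figures:

```python
# Rough comparison: resistive bar heater vs heat pump (reverse-cycle AC).
# A COP (coefficient of performance) of ~3 is an illustrative assumption;
# real values vary with outdoor temperature and the specific unit.

def electricity_for_heat(heat_kwh, cop):
    """Electrical energy needed to deliver heat_kwh of heat."""
    return heat_kwh / cop

heat_needed = 10.0  # kWh of heat for an evening (assumed)

bar_heater = electricity_for_heat(heat_needed, cop=1.0)  # resistive: COP = 1
heat_pump = electricity_for_heat(heat_needed, cop=3.0)   # assumed heat-pump COP

print(f"bar heater: {bar_heater:.1f} kWh, heat pump: {heat_pump:.1f} kWh")
# The "little" bar heater uses ~3x the electricity for the same warmth.
```

The counterintuitive part is that the physically larger appliance draws less energy for the same heating job, because it isn't converting electricity to heat directly.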


That doesn't make sense to me. On a cheap RV inverter maybe, but on solar for a house? The inverters should be rated to continuously output whatever the panel is generating. It shouldn't care whether the 2kW is going back on the grid or into your water kettle, it should be doing that all day every day.

Typical hybrid inverters have an output rating around half the theoretical max input of the panels. This is because the theoretical max panel output is rare or even impossible in normal earth conditions, because an attached battery can soak up part of the input, and because of the general cost-benefit trade-off of solar equipment (more throughput means more heat, which means bigger heatsinks, which means heavier and more expensive).

You can definitely get equipment that does symmetrical input/output, but if you actually model out the supply and demand curves on the system, it's usually not worth the extra up-front expense, since peak input is a small portion of the day and that extra hardware would mostly sit idle.

For that matter people often design systems where peak input can't even be accepted by the inverter and the extra power is just wasted, because it's more valuable to have a steady input over a long period than to maximize the daily peak.


Yes, my grid-tied system is like this. The panels are ~410W and each one has a microinverter with ~390W maximum or something. The more expensive inverters were not worth capturing the peak. You’re better off putting that money into more panels.

In the US, most home solar installations do not have an in-home battery. It is not uncommon for rooftop solar to produce >90% of nominal max for hours at a time.

I know multiple people with solar and have discussed their specs with them extensively. Zero of them have inverters or microinverters sized below the theoretical max of their array.

Are you thinking of a purely off-grid setup without actually saying so?


Nope, but I'm in a different market, so that makes sense; those are probably pure grid-tie inverters, which I don't have a lot of experience with because they're not commonly used here. I do see the EG4 hybrid has a similar ratio (we have the same tech here under the Luxpowertek brand).

Even without a battery people usually choose hybrid, which can function on and off grid.

Also, to be honest, I'm mostly looking at larger inverters, so maybe that colors it. Not many users here need 24,000 watts continuous outside a commercial context, for instance, so an inverter with that as an input but 12,000 watts continuous AC output doesn't seem weird, since part of the 24,000 watts DC can be sent to the battery.


Ok, yeah that makes sense. Over here people usually get direct grid tie inverters, and if there's no battery, there's no reason for a hybrid inverter. The cheapest way to do it is panels -> inverter -> grid. No cutoff switch, so the inverters stop functioning if the power goes out.

Then it's just a race to pay back the panels, which are most of the cost, so undersizing the inverter is wasting energy and leaving money on the table.


In my case I have 4500Wp of panels. The inverter is sized at 4200W. The next step up (4800W or 5200W) was twice as expensive, adding about €600. Not sure I would ever have made that back; I hit the maximum only a few weeks in spring.

Let's say your panels could produce 95% of nominal, 3 hours a day, 3 months of the year when the sun is in the right spot. That's 4275W, or 75 over.

0.075 * 3 * 90 is 20kWh you're leaving on the table per year. So yeah the payback time for the more expensive one would be never.

I'm seeing price differences from 4200W to 5000W inverters being more like €10-70 though:

https://www.alma-solarshop.com/10-solar-inverters?q=Inverter...
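Spelling out the arithmetic from the 4200W example above (the 95% peak, 3 hours/day, and 90 days/year are the commenter's assumed figures):

```python
panels_wp = 4500       # installed panel capacity, W
inverter_w = 4200      # inverter rating, W
peak_fraction = 0.95   # assumed real-world peak vs nominal
hours_per_day = 3      # assumed hours near peak
days_per_year = 90     # assumed days when this happens

peak_w = panels_wp * peak_fraction         # 4275 W
clipped_kw = (peak_w - inverter_w) / 1000  # 0.075 kW lost to clipping

lost_kwh = clipped_kw * hours_per_day * days_per_year
print(f"energy clipped per year: {lost_kwh:.2f} kWh")  # ~20.25 kWh/year
```

At roughly 20 kWh/year of clipped energy, a €600 step up in inverter size would indeed never pay for itself at typical retail electricity prices.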


This was in 2019 though, prices were further apart back then. But maybe I just looked at the wrong shop as well.

Also, just as a follow-on: my assumption is that it's much easier and cheaper to scale the DC side, since it's often in the 400-500V range (for example, 10 panels in series with an open-circuit voltage of 49V and an operating voltage around 43V), versus the AC side at 230V, since the resulting amperage is about half. So that may account for the ratio.
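A quick sanity check of that current claim, using the example panel voltages from the comment (the 4200W throughput figure is borrowed from the earlier example for illustration):

```python
power_w = 4200                # example inverter throughput, W (assumed)
dc_voltage = 10 * 43          # 10 panels in series at ~43 V operating
ac_voltage = 230              # single-phase grid voltage

dc_amps = power_w / dc_voltage  # current on the DC string side
ac_amps = power_w / ac_voltage  # current on the AC output side

print(f"DC: {dc_amps:.1f} A, AC: {ac_amps:.1f} A "
      f"(ratio {dc_amps / ac_amps:.2f})")
# Roughly half the current on the DC side: thinner conductors and
# smaller switching components, so cheaper to scale.
```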

Yeah it’s when you go off the happy path that it gets difficult. Like there’s a weird behaviour in your vibe-coded app that you don’t quite know how to describe succinctly and you end up in some back-and-forth.

But man AI is phenomenal for getting stuff out of your head and working quick.


I feel the exact same way about tutorials in games that try and be comprehensive and show you everything.

Incremental games do an amazing job at this (things like Universal Paperclips, A Dark Room, etc); parts of the game are revealed to you as you need them and it's often a fun surprise. I don't think the same thing is directly applicable to productivity apps, but I wonder if something could be taken from the pattern.

This is timely -- I'm coding an app at the moment and had the fleeting thought that "hey, I should do a new-user onboarding tour thingy", then remembered that I generally skip them, so I haven't made one :)


> I feel the exact same way about tutorials in games that try and be comprehensive and show you everything.

For those, an in-game encyclopedia and/or external wiki is a much better solution.


Thank you, I was starting to wonder.

I guess because I’m in game dev maybe, but in all my jobs knowing about the underlying stack has either been necessary knowledge or highly regarded.

I can’t think of any time in my career where knowing about the internals of the stack was ever frowned upon or where it’s been anything other than an advantage (especially when hunting bugs). I must have been lucky.


How did it get in? Isn’t Linus known for being rightfully fussy about what makes it into the kernel?

Would be an interesting story.


Linus has been fussy about maybe 5% of things, because even then he couldn't keep up with the sheer volume. Nowadays it's more like 1‰.


I feel like it’s something more fundamental and broad than that. We slowly remove excuses to talk to other people.

The thought crossed my mind the other day — if I’m asking the AI a question, that’s replacing a human interaction I would have had with a coworker.

It’s not just in coding, it’s everything. With ChatGPT always available in your pocket, what social interactions is it replacing?

The thing that gets me is, we are meant to fundamentally be social creatures, yet we have come to streamline away socialisation any chance we get.

I’m guilty of this too — I much prefer Doordash to having to call up the restaurant like in the old days, for example.


We see this in our open-source community. We've had a community channel for over two decades, where community members help newcomers and each other solve problems and answer questions.

Increasingly we have people join who tell us they've been struggling with a problem "for days". Per routine, we ask for their configuration, and it turns out they've been asking ChatGPT, Claude or some other LLM for assistance and their configuration is a total mess.

Something about this feels really broken, when a channel full of domain experts is willing to lend a hand (within reason) for free. But instead, people increasingly turn to the machines, which are well known to hallucinate. They just don't think it will hallucinate for them.

In fact I see this pattern a lot. People use LLMs for stuff within their domain of expertise, or just ask them questions about washing cars, and they laugh at how incompetent and illogical they are. Then, hours later, they will happily query ChatGPT for mortgage advice, or whatever. If they don't have the knowledge to verify it themselves then they seem more willing to believe it is accurate, where in fact they should be even more careful.


> In fact I see this pattern a lot. People use LLMs for stuff within their domain of expertise, or just ask them questions about washing cars, and they laugh at how incompetent and illogical they are. Then, hours later, they will happily query ChatGPT for mortgage advice, or whatever. If they don't have the knowledge to verify it themselves then they seem more willing to believe it is accurate, where in fact they should be even more careful.

The AI companies have taken all the wrong lessons from social media and learned how to make their products addictive and sticky.

I’m a certified hater, but even I’ve fallen into the exact trap you’re describing. Late last year I was in the process of buying a house that had a few known issues with a 30 day close. I had a couple sleepless nights because I had asked ChatGPT or Claude about some peculiar situation and the bots would tell me that I was completely screwed and give me advice to get out of the contract or draft a letter to the seller begging for some concession or more time. Then the next day I’d get a call from the mortgage guy or the attorney or the insurance broker and turns out, the people who actually knew what they were doing fixed my problem in 5 minutes.


So have you stopped using ChatGPT and Claude?


This _is_ all true, but what's also true is that there's a historical pattern (in many communities) of "n00bs" not being or at least not _feeling_ welcome. So I can't blame people for spinning in circles with LLMs instead of starting with forums or mailing lists, where they may be shamed or have their questions closed immediately as "duplicate" or "off-topic" (e.g. SO).

I think if we want newcomers to lead with human interactions, the onus is on us community leaders/elders/whatever to be a little warmer, more understanding and forgiving. (Of course, some communities and venues are already very good about all of this, and I'm generalizing to make the larger point.)


Personally, this type of behavior played a large part in why I left 2 OSS communities.

A lot of the passersby nowadays feel like trolls. They come in copy-pasting ChatGPT responses and spamming that they need help, instead of chit-chatting and asking questions. We fix their problems; they don't trust us or understand at all. Or worse, we tell them their situation is unreasonably bad and they should start over, and they scream at us about how some unimaginably bad code passes tests and compiles just fine, and how we are dumb.

They tell us we don't need to exist anymore, in one way or another. They show off terrible code, we try to offer real suggestions to improve it, and they don't care. Then they leave the community once their vibe/agentic coding moves past that part of their code base. Complete waste of time: they learned nothing, contributed nothing, no fun was had, no ah-has, just grimy interactions.


I’m subscribed to a couple of mailing lists and follow the archives of a few others. I wonder if the friction associated with the medium is why I haven’t seen those shenanigans?


I should look into mailing lists. That would be a great filter for the "I need it now at any cost" interactions. Thank you for the indirect advice.


I think we are going to see a large movement toward deliberately designed friction in the next decade.


I switched to OpenWRT during the LLM era. I wanted to set up some special network configs, and ChatGPT happily spat out the necessary configs.

From what little I understood of OpenWRT, everything looked fine, but nothing worked. I still have no idea to this day what I (or ChatGPT) did wrong.

I just reset the router, actually took the time to do everything by the docs, and then it worked.

Debugging someone's broken code that never worked is a nightmare I wouldn't wish on anyone.


People are losing their ability to reason without prompting an LLM first.

It's affecting their ability to collaborate. They retain the confidence of years of experience, but their brain isn't going through the appropriate process anymore to check their assumptions.

I've seen a similar thing happen to engineers who move into management, but this is now happening at such a large scale.


> if I’m asking the AI a question, that’s replacing a human interaction I would have had with a coworker.

Importantly, you're removing a signal: if I'm not asked things anymore, I don't know which aspects of our domain cause the most confusion/misunderstandings and would therefore benefit most from simplified boundaries.


There is a lot of wisdom in this.

At the end of the day chatgpt won't be there to hold our hands in the hospital, have a laugh over failing to pick up a date, get invited to a bbq, groan over the state of the code in utils.c, or recommend us for our next job/promotion. They say software is social for a different reason than most of these examples.

It's good to be efficient, whatever that means, but there are no metrics on the gains that get made by talking to people. In a lot of ways those gains are what life is about.


> At the end of the day chatgpt won't be there

Are you sure it won't?


Yes. 100%. ChatGPT can't get drunk with you, share personal experiences, grill food for you, or network with humans for you. At some point people have to choose to live a life; otherwise, why have one anyway?


I think you are right, but it also makes sense. Human communication is inherently inefficient. Points of view, miscommunication, interpretation... It's the obvious point to automate. Not defending it, just my thoughts


I have a couple of colleagues that run all communication through an LLM. It really helps their writing, but it does nothing to help their understanding.

It also makes me hate communicating with them because they'll (somewhat obviously) prompt the LLM to make the conclusion they want. For example, "respond to this jira with why this isn't an issue"


Yes, fully agree. Automated communication should always be optional, in the sense that you should offer it to someone but never force it.

Sometimes I don't feel like having to make a phone call, but sometimes I'd much rather talk to a human.


You could have done this with Google search or Wikipedia or reading through books, though.


I am rereading the Asimov robot novels. A decrease in human to human interaction is a major side effect that he has foreseen. Decreasing interaction and collaboration are some of the core themes.


Apps like Doordash have introduced me to many good restaurants which I've then visited in person.


i see what you did there :)


hahaha took me a bit to get what you meant.... Yep I've been reading LLM output a lot lately lol


It’s really really inconsistent. Sometimes select all is available, sometimes not. Sometimes the handles don’t work. Selecting text in a scrollable region is fiddly, etc.

I’ve seen an insane drop in the quality of swipe typing recently as well. To the point where I’ll often go back to regular typing. I’ve made maybe six or more corrections just to this paragraph alone.


I think swipe typing proposes words that are possible matches for the swiped letter sequence but inconsistent with any higher-level language model, even simple word tuples.

and it drives me crazy too.

I've just had good luck it seems with text select.

Have you found any way to do a Find within a span of text on iOS? That would be very useful, but I haven't seen it.

