> This is the kind of dated argument that really makes me dismiss most of the critics. I've been running Xubuntu as my main desktop since at least 2010, switched to Debian + Nix + XFCE in 2022, and switched to full-on NixOS in 2024. I never had audio issues back then, and I had to go out of my way to "break" audio on NixOS when I wanted to try PipeWire instead of PulseAudio.
Did you ever do any DAW work? Did you have to use jackd?
Stuff like streaming games from my desktop at a non-native resolution is a no-go with Wayland. I can't do HDMI 4K/120 with HDR/VRR like I can on Windows (I know it's HDMI's fault, but that doesn't change the fact that it doesn't work).
Oh, and I gave up on using Linux for productivity a year ago - one can take only so many full browser crashes from simple stuff like desktop sharing, or the camera/mic stopping mid-call.
I'm running Linux on my desktop with about as vanilla hardware as you can imagine - the number of compromises/things that just don't work is quite annoying.
It's just nowhere near macOS's level of reliability - that's why I use my Air for productivity and SSH into the workstation to do actual work in VMs (with all the recent supply-chain compromises, there's no way in hell I'm ever doing dev work outside a sandboxed environment).
I've never used a device that claims first-party Linux support, so maybe it's better there.
But honestly I'm not a fan of the Linux desktop in general - Flatpak is nice in theory but comes with so many "gotchas", and installing stuff otherwise is just "here, you have all the privileges of my user". macOS sandboxing/security scoping feels way better for desktop use.
Dealing with real-time on Linux is an issue, yes. It has gotten better with PipeWire, but it's still far from macOS.
Everything else I run it on pretty much "just works". I am not a big gamer, but Steam works. Bluetooth works. Wi-Fi works. It detects my printer and scanner better than my wife's Windows laptop does. No browser crashes.
NixOS is well supported on the Framework and on my workstation. The worst kind of inconvenience I hit nowadays is something like what happened a few weeks ago: Zoom wouldn't find an already-running process and would get stuck in a loop; I solved it by running "nixos-rebuild switch --upgrade".
Compare that with the complaints you hear from Apple users and the constant reporting on declining quality in macOS and iOS, and you see why I take issue with statements like "most linux users want the MacOS experience, except with more customization".
This is another underrated benefit of working with LLMs. When I work I don't take detailed notes about my thinking, decisions, context, etc. I just focus on code. If I get interrupted it takes me a while to get back into the flow.
With LLMs I just read back a few turns and I'm back in the loop.
I feel like this take misses what LLMs actually bring to the table for a senior developer.
Sure, writing code hasn't been the majority of my workday since I moved up the responsibility chain - but that's because I don't get enough uninterrupted time for actual coding between all the meetings, syncs, planning, production investigations, mentoring, and team activities.
Now that I can delegate code writing to an LLM instead of mid/junior devs, the dynamics of those tasks change dramatically. The overhead of managing more junior devs is completely gone, and there's no need for soft skills with an LLM. And communication/iteration speed with an LLM is on a different level.
Not to mention I don't need so much time to get back "into the zone" - the LLM can keep working through my meeting, so when I'm back it's already made progress and I can quickly get back into the gist of it - much faster than before, when I had to take a break and had lost all context of what I was doing an hour later. The LLM has all that context, plus more progress, right there.
After so many years of dev I have a pretty good idea of what I want the code to look like - no need to debate the overzealous mid-level dev about his over-abstracted system that's going to haunt me in a production outage next month when we've both forgotten what he put in; I simply get to say "this is shit, rewrite it how I requested". No need to talk about the latest lib that's all the rage on YouTube, blogs, etc.
Sure, sometimes I realize that doing stuff without the LLM would have taken less time - but when I consider how many interruptions I get between coding sessions, LLMs are empowering precisely because I get to dedicate so little time to my code.
Also, fewer teammates means less communication overhead.
Yep, and I loved it when C# introduced it. I worked on a system in C# that predated async/await and had to use callbacks to make asynchronous code work. It was a mess of over-nested code and poor exception handling, since once the code did asynchronous work the call stack became disconnected from the try-catches that were supposed to handle the exceptions. async/await let me easily make the code read and function like the equivalent synchronous code.
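Roughly, the difference looks like this - a minimal TypeScript sketch of the same pattern (hypothetical code, not the original C# system):

```ts
type User = { name: string };

// Stand-in async data source (hypothetical).
const fetchUser = (id: string): Promise<User> =>
  id ? Promise.resolve({ name: "Ada" }) : Promise.reject(new Error("bad id"));

// Callback style: the try/catch here never sees async failures, because
// by the time the callback runs the original call stack is gone.
function loadUserCb(id: string, done: (err: Error | null, name?: string) => void): void {
  try {
    fetchUser(id).then(u => done(null, u.name), err => done(err)); // errors threaded by hand
  } catch (e) {
    done(e as Error); // only synchronous throws ever land here
  }
}

// async/await: reads like the synchronous version, and try/catch works again.
async function loadUser(id: string): Promise<string> {
  try {
    const u = await fetchUser(id); // a rejection surfaces right here
    return u.name;
  } catch (e) {
    throw new Error(`failed to load user ${id}: ${e}`);
  }
}
```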
> async/await came out of C# (well at least the JS version of it).
Not sure if it was inspired by it, but async/await is just like Haskell's do-notation, except specialized to one type: Promise/Future. A bit of a shame - do-notation works for so many more types:
- for lists, it behaves like list-comprehensions.
- for Maybes it behaves like optional chaining.
- and much more...
All other languages pile on extra syntax sugar for that. It's really beautiful that such seemingly unrelated concepts have a common core.
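That common core is just monadic bind. TypeScript has no do-notation, but a rough sketch of the same shape (with a hypothetical `Maybe` and `bind` of my own) shows the correspondence:

```ts
// Lists: flatMap is the list monad's bind, so nested flatMaps are
// exactly a list comprehension.
const pairs = [1, 2, 3].flatMap(x => [10, 20].map(y => [x, y] as const));
// -> [[1,10],[1,20],[2,10],[2,20],[3,10],[3,20]]

// Maybe: the same bind shape short-circuits on "nothing", which is
// precisely what optional chaining does.
type Maybe<T> = T | undefined;
const bind = <A, B>(m: Maybe<A>, f: (a: A) => Maybe<B>): Maybe<B> =>
  m === undefined ? undefined : f(m);

const half = (n: number): Maybe<number> => (n % 2 === 0 ? n / 2 : undefined);
const quarter = (n: number): Maybe<number> => bind(half(n), half);
// quarter(12) -> 3; quarter(6) -> undefined
```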
Similarly, F#'s computation expressions predate C#'s syntax, and there is some evidence that the C# language designers were looking at F#'s computation expressions. Since the Linq work, C# has been very aware of monads, and very slow and methodical about how it approaches them. Linq syntax is a subtly compromised computation expression, and async/await is a similar compromise.
It's interesting to wonder about the C# world where those things were more unified.
It's also interesting to explore in C# all the existing ways that Linq syntax can be used to work with arbitrary monads and also Task<T> can be abused to use async/await syntax for arbitrary monads. (In JS, it is even easier to bend async/await to arbitrary monads given the rules of a "thenable" are real simple.)
> use async/await syntax for arbitrary monads. (In JS, it is even easier to bend async/await to arbitrary monads given the rules of a "thenable" are real simple.)
I once tried to hack list comprehensions into JS by abusing async/await. You can monkey-patch `then` onto Array and define it as flatMap, and IIRC you can indeed await arrays that way, but the outer async function always returns a regular Promise. You can't force it to return an instance of the patched Array type.
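The single-value version of the trick definitely works, since `await` will unwrap anything with a `then(resolve, reject)` method. A minimal sketch with a hypothetical Maybe type (my own, not from the original experiment):

```ts
// Any object with a then(resolve, reject) method is a "thenable";
// `await` calls it (once) to unwrap the value.
class Maybe<T> {
  constructor(private readonly value: T | undefined) {}
  then(resolve: (v: T) => void, reject: (e: unknown) => void): void {
    this.value === undefined ? reject(new Error("Nothing")) : resolve(this.value);
  }
}

async function sum(mx: Maybe<number>, my: Maybe<number>): Promise<number> {
  const x = await mx; // unwraps, or rejects the whole function on Nothing
  const y = await my;
  return x + y;       // but the result is still a plain Promise<number>,
                      // never a Maybe - the same limitation as with Array
}

sum(new Maybe(2), new Maybe(3)).then(console.log); // 5
```

Since `await` only ever consumes a single resolution, single-shot monads like Maybe fit this hack much better than the list monad does.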
> async/await came out of C# (well at least the JS version of it).
Having been instrumental in accelerating async/await's arrival in JS: it definitely came from C#. We were eagerly awaiting it in JS, and we worked with Igalia to focus on it and make it happen across browsers more quickly so people could actually depend on it.
> it’s now common for a handful of requests to incur costs that exceed the plan price
Pricing per turn/request was/is an idiotic model, and I'm glad they are paying for it. It just forces you into a workflow built around the business model. Heck, the best laugh would be to create a plan outside VS Code with interactive CC/Codex, then copy-paste it into GH Copilot to do a single-session burn of a few million tokens.
So far they haven't changed it, and none of this applies to business and enterprise accounts. My guess is that it can still be viable, as most businesses will have plenty of minimally used licenses with just a few power users abusing the request model.
Even by pessimistic progress projections, AI will be better than most people at coding before this becomes a long-term issue. And given the output multiplier I'm seeing, I suspect the number of SWEs needed to achieve the same task is going to start shrinking fast.
I don't think SWE is a promising career to get started in today.
But pro-AI posts never seem to pin themselves down on whether checked-in code will be read and understood by a human. Perhaps a lot of engineers work in "vibe-codeable" domains, but a huge number of domains deal with money, health, financial reporting, etc. Then there are the domains those domains use as infrastructure (OS, cloud, databases, networking, etc.).
Even where it is non-critical, such as a social media site, whether that site runs and serves ads (and bills for them correctly) is critical for that company.
You don't notice it when you're only looking at your own harness results, but the LLM bakes so very much of your own skills and opinions into what it does.
If you're enrolling in uni today, you're looking at 6-10 years until your career is in a good place. I'm willing to bet there will be 1/10 the junior positions available in 5 years.
And insufficient talent because of retirement only becomes an issue in something like 30 years even at current developer demand - and I expect that demand to go down significantly over time, even with the current level of capabilities.
But you have to be good at SWE to be good at security engineering and sysadmin, and the demand there is skyrocketing.
We have a completely broken internet, with almost nothing using memory encryption, deterministic builds, full-source bootstrapping, secure enclaves, end-to-end encryption, remote attestation, hardware security auth, or proper code review.
There are decades of human cognitive work to be done here even with LLM help, because the LLMs were trained to keep doing things the old way unless we direct them otherwise - from our own base of experience with cutting-edge security research that no model is sufficiently trained on.
Something I don't see "SWE-is-doomed" comments address is this: SWE, I think we can all agree, is one of the more complex white-collar jobs. If it gets automated, then most other white-collar jobs have likely been automated too. At that point, don't we have bigger problems?
Honestly, PEDs are stigmatized and under-researched on the performance-enhancing side. They have undeniable side effects - but how severe, and why, is kind of a shrug from what I saw when I looked into this; bro science is the best you can get. A few studies here and there giving people modest testosterone boosts and measuring athletic performance.
Not saying we should be promoting them, but if we can eventually get to a point where we eliminate the really bad side effects and keep most of the benefits, it's going to be a great thing for everyone - the next big thing after GLP-1.
I do not have the background to make medical decisions based on reading published medical articles, so I have to trust my doctor's advice and seek second opinions if I'm not convinced.
My issue was the disingenuous use of "5-year post-compete" monitoring as justification for the Enhanced Games.
GPT 5.4 xhigh thinking was really good at teasing out problems in the multi-step flows of a process I was refactoring; it caught higher-level/deeper problems than Opus 4.6. But getting it to write the code is just not a good experience for me: it changes the style / doesn't follow the surrounding code, codes in a sloppy way, and creates subtle bugs of a kind I don't see from Opus. So I use Codex for review and Opus to write code. I'm still testing the new Opus 4.7 to see if its review/reasoning catches more/better stuff. I frequently fire off all three (Gemini 3.1 Pro, Opus, Codex xhigh) on the same code, then have them cross-reference each other, and so on. Gemini is so bad it's not even funny; not sure why I keep it running.
I mean, look at the result where he asked for a unicycle - the model couldn't even keep the spokes inside the wheel. That would be rudimentary if it had "learned" what it means to draw a bicycle wheel and could transfer that to a unicycle.
It's the frame that's surprisingly - and consistently - wrong. You'd think two triangles would be pretty easy to reproduce; once you get that, the rest is easy. It's not like he's asking it to "draw a pelican on a four-bar-linkage suspension mountain bike..."
Wouldn't this be more about being capable of mentally remembering how a bicycle looks versus how it works?
This reminds me of Pictionary. [0] Some people are good and some are really bad.
I am really bad at remembering how items look in my head, and I fail at drawing in Pictionary. My drawing skills are tied to being able to copy what I see.
I think it's difficult to draw a bike precisely because you remember how it works rather than how it looks, so you worry about placing all the functional parts and get the overall composition wrong. It's similar to drawing faces: without training, people will consistently dedicate too much area to the lower part of the face and draw some kind of Neanderthal with no forehead.
Is it possible to have greater success with more specificity? I don't think I ever drew a bike frame properly as a kid, despite riding them and understanding the concept of spokes and wheels...
But have you thought of the children? God forbid they give money to illegitimate scammers when legitimate ones aren't getting enough cash from legal gambling like loot boxes.