I'm not fond of functional languages. It isn't that I don't get why they are a really good idea (they are). I just can't stand the syntax, the lack of proper standard libraries, and the lack of mainstream-big-ecosystem'ness. Mainstream languages are nice because there is lots of code, documentation, discussion and I can find people I can talk to. And you can write real code in situations where you may not be the one to maintain it 10 years from now. Or 5. Or 2.
Agentic coding changed that. A bit.
I still dislike most functional languages because my brain doesn't work with their syntax, but these languages are REALLY good targets for agentic coding.
I'm a backend developer who occasionally needs a frontend slapped onto something. So I have been through all the usual suspects: Angular, React, Vue. All terrible reminders of why I try to stay away from the frontend. Touch it and you roll around in tons of dysfunctional tooling, weird complexity and gimmicky mechanisms that are ridiculously fragile. It isn't just that a bunch of cats wrote the code; they are feral cats. And if you point out just how messy things are, they just hiss at you and piss on your shoes.
And then I discovered Elm. Not only does it not crap all over my git repository, LLMs love Elm. Yes, it poops out a JS blob. But I don't have to look at it. I can just pick it up with my long tongs and drop it into my server using embed.FS in Go.
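For the curious, the serving side really is just a few lines. A minimal sketch, assuming elm make dropped its output under web/ (the paths and port here are illustrative, not prescriptive):

```go
// A sketch of the pickup-with-long-tongs approach: compile Elm to
// web/elm.js, embed it into the Go binary at build time, and serve it
// without ever opening the file yourself.
package main

import (
	"embed"
	"log"
	"net/http"
)

//go:embed web
var assets embed.FS

func main() {
	// Requests to /web/elm.js are answered straight from the binary.
	http.Handle("/web/", http.FileServer(http.FS(assets)))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```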
Perhaps I should overcome my peculiarities and love Elm too.
Anyway.
Anything that can make Python go away, I'm for. Python is not for writing programs that will ever leave your workstation and be inflicted on others.
Multiple times per week I have the same conversation. It goes something like this:
- AI will make developers irrelevant
- Why?
- Because LLMs can write code
- Do you know what I do for a living?
- Yes, write code?
- Yes, about 2-5% of the time. Less now.
- But you said you are a developer?
- I did
- So what do you do 95-98% of the time?
- I understand things and then apply my ability to formulate solutions
- But I can do that!
- So why aren't you?
The developers who still think their job is about writing code will perhaps not have a job in the future. Brutal as it may sound: I'm fine with that. I'm getting old and I value my remaining time on the planet.
Business owners who think they can do without developers because they think LLMs replace developers are fine by me too. Natural selection will take care of them in due course.
On one of my very first jobs in around 2000 I got paired with a much more experienced software engineer. He’d been a pro since the early 70s. I was stoked to learn from him.
On like my fourth day he said “now I’m going to teach you the thing that helped me the most in my career…” I waited, ready for the received wisdom. And he said “always number your punch cards so if you drop them they will be easy to put back into order”. I was upset. We were long past the point where punch cards were in use. And then he said “I said what would help _me_ the most, not what would help _you_. Software is always changing”.
Or perhaps it's a peek into how fast software engineering is changing: what works for you now may be irrelevant in the future, so be prepared to adapt!
This is a bit of a glib answer. Most of the time is spent coding, which encompasses typing, retyping, and retyping again. It also includes banging your head against the wall while trying to get one of your rewrites to work against an under-documented API.
OP's formulation makes SWE sound like a purely noble enterprise like mathematics. It's more like an oil rig worker banging on pieces of metal with large hammers to get the drill string put together. They went in with a plan, but the reality didn't agree and they are on a tight schedule.
Most of the time is spent figuring what the right thing to do is, not writing the implementation. Sometimes the process of writing the implementation surfaces new considerations about what the right thing is, but still, producing text to feed to a compiler is not the bulk of the work of a software engineer. It is to unearth requirements and turn them into repeatable software.
If you're spending time thinking and not experimenting, then it's because experimentation is expensive. With an LLM you don't have to try to predict a complex system in advance; experiments are so cheap you can just converge to a solution directly. None of this pontificating; it's really not that useful anymore.
> With an LLM you don't have to try to predict a complex system in advance; experiments are so cheap you can just converge to a solution directly.
We saw a similar philosophy in TDD advocacy many years ago. Search for something like "Sudoku Jeffries" to see how that went. Then search for "Sudoku Norvig" to see what it looks like when you actually understand the problem.
The idea that you can somehow iterate your way to a solution when you have no idea where you're trying to go or even which direction your next step should be in has always seemed absurd to some of us but in the era of LLMs there's no longer any doubt. In the agentic era (can we call a few months an "era"?) I estimate that 90% or more of the writing I've read about how to use agents most effectively came down to making sure there is a clear specification for what they need to implement first and then imposing extensive guard rails to make sure their output does in fact follow that specification. It's all about doing enough design work up front to remove any ambiguity before coding the next part of the implementation and almost everyone claiming any sort of real world success with coding agents seems to have reached a similar conclusion.
This is very naive and reductive thinking. Experiments have a cost, you really have to think carefully about what you are trying to learn. Even when code is cheap, traffic and time are still huge constraints, and you better make sure your hypothesis actually makes sense for your goals, because AI is more than happy to fill in the blanks with a plausible but completely wrong proposal.
More broadly, it's well understood that experiments are not a replacement for design and UX. Google is famously great at the former and terrible at the latter. Sure the AI maxxers will say the machines are coming for all creative endeavours as well, but I'm going to need more evidence. So far, everything good I've seen come from AI still had a human at the wheel, and I don't see that changing any time soon.
Even writing code the good old way, of course we experiment. I remember the old rule "Plan to throw away the first one. You will anyway." But then there's the "second system effect" where the second system is supposedly always overengineered and trying to take every possibility into account.
And then there's the times when the quick sloppy poc you planned to throw away gets forced into production and is still impossible to change ten years down the road.
AI makes all these problems so much less painful.
I worked at a company which had a huge monolithic ERP system (their product, to be clear) with no good separation between the GUI layer and the business logic. The GUI was also dependent on an ancient version of the Borland C++ compiler. They put in a humongous effort to move to a slightly more modern UI library and a client-server architecture.
However, someone had decided that messages in XML or JSON were too inefficient; they already had performance issues. So they went with a binary message protocol of their own design, with no provision for protocol updates. Everything communicating with the server had to be on exactly the same version, or it would throw an error. So of course they very, very rarely updated the protocol.
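To make "no provision for protocol updates" concrete: even a tiny version field in the frame header lets peers detect a mismatch or skip messages they don't understand, instead of corrupting state. A hedged sketch in Go; the layout, magic value, and field sizes are my invention, not their actual protocol:

```go
// The minimal provision that protocol lacked: a magic number, a version,
// and a length prefix, so a peer can reject (or negotiate around) a
// mismatch and skip unknown frames rather than throwing on everything.
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

type frameHeader struct {
	Magic   uint32 // sanity check against framing bugs
	Version uint16 // bump on incompatible changes
	Length  uint32 // body size, so unknown message types can be skipped
}

func encodeFrame(version uint16, body []byte) ([]byte, error) {
	var buf bytes.Buffer
	hdr := frameHeader{Magic: 0xC0DEC0DE, Version: version, Length: uint32(len(body))}
	if err := binary.Write(&buf, binary.BigEndian, hdr); err != nil {
		return nil, err
	}
	buf.Write(body)
	return buf.Bytes(), nil
}

func main() {
	frame, _ := encodeFrame(3, []byte("hello"))
	fmt.Printf("% x\n", frame)
}
```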
I think the best use of AI will be to clean up such real-life messes of soul-crushing architectural regrets. Will it do it perfectly? Certainly not, but I wouldn't do it perfectly myself either if I were forced to do it, and I'd take a hell of a lot more time.
I think you and 7e are both right. Being able to iterate some N orders of magnitude quicker is a big deal. This doesn’t eliminate design and UX. Rather, it merges it with high iteration speed to produce a form of “play”.
“Play” is what produced at least two (likely more) generations of attentive (and therefore competent) programmers. The hype around LLMs is painful, yes, but attentive human minds will ultimately bust through it.
And before long you have a solution that is made up of a thousand pieces of spaghetti that neither you nor anyone else understands. And when your solution becomes too brittle to use, cannot be maintained, or fails catastrophically, then what? Just hope that's someone else's problem?
Refactoring is cheap too, but you have to read your code and know when to stop and ask the agent to refactor, rewrite, adopt or change libs, fix issues presented by linters and code quality scanners, change abstractions and rethink the architecture.
It's never been easier to replace chunks of code with sane software patterns, but you have to have a feel for those patterns. And also understand what's under the hood.
You folks speak like the only function of the agent is to spit code and features. Get a grip and treat your deliverables with care, otherwise you only have yourself to blame, not the AI.
You actually get what you ask for. And you can ask for anything, vaguely or not.
You'll end up with spaghetti if you play a bad manager and only ever allocate time for new features and never for cleanups.
You can go through code, add REFACTOR comments based on your tastes and thoughts, then get your result and iterate to your heart's content. You just don't need to do the direct code typing.
Well - you converge to a system, but do that by pruning what you don't want.
If you care about maintainability and quality (and I include maintaining using LLM based tools) then you need to understand what it does (in doing so you will find lots of things for it to fix - you'll probably find that the architecture it's chosen is not right for what you want too).
> If you’re spending time thinking and not experimenting, then it’s because experimentation is expensive.
No, because no amount of experimentation can solve many of the problems that have been solved by thinking. Even your claim about "experiments are cheap" requires thinking to decide what experiments to do. No one is generating all possible solutions that fit in X megabytes; you have to think to constrain the solution space.
I find that it often pulls a solution that is good enough for this problem today. Sometimes that is great, and other times it's just creating a pile of shit.
Too many people believe that AI is going to come up with elegant solutions to problems that no one has ever solved before. Maybe someday, but for now it seems to be good at finding a solution that may be hidden away somewhere in stack overflow. If it just isn't there, then you are out of luck.
There is almost nothing new in computer programming. 99.999% of any code most of us on this forum write will be repeating patterns that have been written thousands of times before.
Tell a coding agent what your new thing needs to do, give it the absolute constraints, max response times, max failover times, and so on, tell it which technologies it has access to or could use, and then tell it to spend a lot of time going over and over the design, coming up with an initial X number of designs (I use 5), and then it must self criticise each one of them and weigh them up, narrow down to three, before finally presenting those three options to the user.
Now you read the options, understand them, realise that the AI has either converged on something very sensible, or it has missed something, so you tell it what it missed and iterate. Or it nailed something good, you pick the option you prefer, and tell it to come up with a more fleshed out high level design, describing the flow and behaviour deeply (NO CODE REFERENCES!). Then once you're happy, tell it to use that and write a comprehensive coding plan. Tell it specifically what coding patterns you prefer (you should have these in your AGENTS.md file already), what patterns to avoid (single threaded? multi-threaded? Avoid gc? How you typically deal with error conditions, etc etc).
Then have it start iteratively working on the coding plan, and it *MUST* have a strong feedback loop. If there is no feedback loop initially, I tell it to build one. It must be able to write very fluent integration tests (not just unit tests). It must be able to run the app and read the logs.
Do all this and I bet you get a better result than 80% of developers out there. Coding agents are extremely good when used well.
Glib is called for. The amount of information asymmetry that's still on the table as vibe coders and vibe engineers and vibe doctors emerge is staggering. Professional experience is still incredibly valuable. Most software developers might spend more than 6% of their time coding but no senior developers are banging their heads for hours over typos.
LLMs evaporated 90% of the "moments of despair" when you have an error and googling it isn't helping, or googling it made you realize you have to read 30min of documentation.
Coding is a joy now. LLMs shaved off all the rough edges.
A year ago I would've told my boss “can't be done” about my work today. I'd tell him to get me the right person to talk to (our partner, not an alien) who could give me some insight into what the hell I'm supposed to be doing to consume their API. Or to at least explain why it is that this can't be done.
Nowadays? I spent a couple of weeks reverse engineering their terrible ideas. Yeah, it worked. But it was a complete waste of my time, and of tokens, energy, chips and RAM. And worst of all, it will lead to a terrible design.
That will work, but will eventually collapse under its own weight, as we use our increased power to increase our sloppiness and take it a little further. Because we can manage it. For now.
LLMs moved the moments of despair to PR reviews for me. It used to be that you could check on a junior dev occasionally throughout the day to make sure they're on the right track. Now you step away for 2 hours and they're raising a PR of bad-code-smell spaghetti and moving on to repeat their AI slopfest on the next task.
It's getting hard to keep up with trying to teach new devs what bad code looks like. And I swear sometimes they just copy my PR comments into their AI tool to fix the mistakes without any of the learning.
At some point there needs to be an uncomfortable conversation about how if all they’re doing is copy pasting everything they get from you into ChatGPT, you can do it yourself for much much cheaper.
How? Management in most tech companies is incentivizing them to do just that, so if you bring it up, they'll happily trot over to your manager to complain, and then the uncomfortable conversation is you with management about why you're getting in the way of AI uptake by the team.
Don't allow juniors to use AI. It's like university exams: no programmable calculators allowed. Review assistants or seniors who know what's going on should, though; it does help when used correctly.
I've tried this without much luck. In my experience they get too bogged down on surface things and don't have the necessary business requirements/context to understand and find actual bugs.
How have you set yours up that works well for you?
So create a context document that explains the business context, and add that to the agent.
Take the bad result that you're getting, and pretend it's coming from an enthusiastic junior. What would you tell them to make them do this task better? Add that explanation to the agent (or explain that to the LLM and get it to add that to the agent, I have found this to work as well).
When you create a task for the LLM, get it to create a requirements document that lists all the requirements. Feed that into the review agent so it understands what the code agent was trying to do.
The LLM will do what you tell it to do. It doesn't magically understand what you want it to do. You have to tell it what to do.
You can't possibly believe this, or you and me (and many others) are doing something different. LLMs have created an entire new - huge - set of bang-your-head moments, as they go off half-cocked in a million simultaneous directions, chasing their tail, or just making shit up. And since the vast majority of work is on existing - often ancient - codebases, let's find out if you feel the same way in 18 months.
1. Copy-pasting code into the web chat UI and asking for something (bugfix, add a feature, refactor, explain, review it etc), including entire source code files. A $20/mo Gemini subscription goes a long way (never been rate-limited). I only use the highest model. I often just copy-paste the entire source file between 3 backticks.
2. Cursor Tab. I do have hotkeys to enable and disable it; it's disabled most of the time otherwise it gets annoying.
3. Single-file changes directly from Cursor's AI sidebar. I only do this for simple, predictable stuff because even their auto-routing "Premium" setting is not as good as pasting stuff into Gemini 3.1 Pro.
That means I have only two $20/mo subscriptions: Gemini and Cursor.
I don't use Claude Code, it's really for people who don't know how to code. I don't use Plan Mode; I make and track the plan myself (if at all). I only tell the LLM granular tasks to execute. I don't use `claude.md` or `agents.md` or anything like that. If I don't like a particular output, I reset everything, modify my prompt and try again.
I believe this is the only way to fully leverage LLMs without losing any product quality. If you're trading off quality for "speed" (in quotes because over the long term, a low quality codebase is a massive drag on productivity) then there's no point.
I _think_ what you've said is "go shallow, not deep". That is, don't let the walk you take through the latent space be a long one. Twenty-five short and peppered steps, from de novo, beat one long, protracted stew.
Well, if it works on step one, then why not step two? Where would different folks draw the line? My grandparents might continue on a while, whereas I would not. But if it also “works” on step two for me, should I take a third?
What counts as “works” is the important bit, I think.
Yes, if you're using them to write large chunks of code or entire features. If you just use them to clear up some trivial problem in an unfamiliar technology that you used to spend 30 minutes googling with 50 tabs open, or stuff like write a method to filter, map and reduce an array based on specific criteria, they're a godsend.
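For a sense of the kind of grunt work meant here, this is roughly the filter/map/reduce boilerplate an LLM spits out in seconds. A sketch only; the helper names (mapTo, reduce) are arbitrary, not from any standard library:

```go
// Generic filter/map/reduce helpers over a slice: the sort of
// mechanical code that is tedious to type but trivial to specify.
package main

import (
	"fmt"
	"strings"
)

func filter[T any](xs []T, keep func(T) bool) []T {
	var out []T
	for _, x := range xs {
		if keep(x) {
			out = append(out, x)
		}
	}
	return out
}

func mapTo[T, U any](xs []T, f func(T) U) []U {
	out := make([]U, 0, len(xs))
	for _, x := range xs {
		out = append(out, f(x))
	}
	return out
}

func reduce[T, U any](xs []T, acc U, f func(U, T) U) U {
	for _, x := range xs {
		acc = f(acc, x)
	}
	return acc
}

func main() {
	words := []string{"ai", "slop", "godsend", "tabs"}
	long := filter(words, func(s string) bool { return len(s) > 3 })
	upper := mapTo(long, strings.ToUpper)
	joined := reduce(upper, "", func(a, s string) string { return a + s + " " })
	fmt.Println(joined) // "SLOP GODSEND TABS "
}
```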
Maybe I'm weird, but my usage has been very conservative. As in, I treat the LLM like a junior dev that I have to micromanage and handhold.
I am terrified of allowing these things to complete tasks end-to-end with nothing intervening. Maybe that's why I don't run into many of these issues. I mostly delegate grunt work and manual tedium, not reasoning or design choices to the LLM. I may consult the LLM and ask for criticism, but there is no way I'm going to allow it to quietly make design decisions that I don't know about.
You are in charge of what the LLM does. If it's running off half-cocked in a million simultaneous directions, that's on you. Write better skills. Tell it not to do that. Break into its loop and ask it wtf it thinks it's doing. If it's making shit up, force it to test more.
The LLM will do what you tell it to do. Manage it.
Languages have been reporting compile and runtime errors for decades. Additionally, most senior developers already have their minds wired to spot typos the way copy editors spot bad punctuation. Typos were only really a problem for students.
Equal? No, no no no. Upper management is making PoCs that promise to solve longstanding, multi-year learnings of tradeoffs and solution balancing, and setting goals based on that. We are heading toward a cliff, and everyone is going to learn what happens when you replace already vulnerable foundation pillars with pig iron.
100%. Googling when you don't even know enough to ask the right questions, with 50 tabs open and trying to read down to the 3rd or 4th Stack Overflow answer (which is usually the best for some inexplicable reason), was my least favorite part of development.
I don't miss wasting an hour on a problem in a technology I'm not familiar with, where it's not like a big conceptual thing but something I could clear up in 5 seconds if I just had an expert in the room.
Maybe you aren't familiar with how AI works. It writes the code for you. Nobody is "letting it mis-spell things". You run the code it wrote, it fails. You look through the code the AI wrote and find the typo it put in there, or give the AI the error for it to fix - but it still created the typo, and that is the main point here. AI often ignores the rest of the document and does what-ever-the-fuck it wants to make you stop prompting it, without any real concern for correctness.
This is temporary. What is the SKILL.md equivalent going to be in five years? In ten? You don't already see a pattern emerging around solutions to encode that "professional experience" into the tools themselves?
These LLMs can already incorporate our entire cultural corpus yet your "professional experience" is the threshold they won't cross?
The word “incorporate” is doing some very heavy lifting in your assertion. These LLMs already have access to the whole corpus of architectural knowledge and software best practices, and yet they’re unable to reliably implement those best practices. Why not? Why do they often make completely unintuitive decisions, even when repeatedly prompted to ask clarifying questions?
To be clear by that and "cultural corpus" I meant their skill with natural languages. It is well known for instance that early LLMs were curiously better at composing sentences in English than doing basic math.
Regarding such formal reasoning we have already seen marked improvement in the last year or two alone. The question is how this weighs on your prediction re their capabilities in the next two, five, ten, etc years.
What are the properties of LLMs that have convinced you that there remains emergent complexity (e.g. the “ability” to formally reason) that we have not yet seen?
There may be gains to be had in such emergence but that is not where I see the gains in the next five years. Those gains will be made by connecting LLMs more robustly with formal reasoning, which computers are already very good at. Continued iteration on connecting these right/left brain faculties could then lead to further emergence down the line.
The present notions of harnesses, structured output or looping in the LLM to some external state or sandbox be it debugger output or embedding into a runtime already show early promising results along these lines. I see no reason to believe these gains will not continue over the next five years.
If you have some theories in the converse in that regard I am all ears.
Extraordinary claims require extraordinary evidence, not the opposite. There’s no current evidence to suggest limitless progress, or even superlinear progress with regards to compute and energy. My guess would be sub linear or even logarithmic progress vs. linear growth in compute and energy, as that’s how most physical systems behave.
No one said unlimited progress. Let's not revert to straw man claims.
If you think the potential of LLMs is overblown feel free to short the market. I don't pretend to know the future. But if I may, I don't think you are framing the debate in the correct terms. Evidence is an important facet of human affairs. So is risk. Best of luck with your predictions.
Markets can remain irrational longer than anyone can stay solvent (especially when wealth is as concentrated as it is currently: one doofus can keep an entire industry afloat).
“Unlimited progress” is not a statement on the rate of progress, it’s a statement on the limits of progress. It’s a much weaker claim than you’re framing it as. Your claim very much is that we have not yet reached the limits of LLMs potential. My claim, conversely, is that we’re already reaching diminishing returns, which are being masked by a massive influx of compute and energy. My short: LLMs are not the path to AGI.
I really don't like this framing - it's hard to short a market at the best of times, let alone when governments have a vested interest in tech being too big to fail to compete in the global economic arms race - see Intel's stock in the past few months.
I agree with you both: undoubtedly there are still massive gains to be made with the frontier models we have today through tooling and iteration, yet I do not believe there's sufficient evidence to claim we are rolling towards AGI/ASI on an exponential curve without some additional breakthroughs, given the jagged edges and the data used to train models being fundamentally linear.
Just remember you don't need AGI to see massive societal change. Certainly not mass layoffs. AGI is not the bar. By the time we all agree AGI has come the world will have already changed.
You just need AI to be just good enough to win the tradeoff over a human employee. Just take your average office. Then ask yourself if the bar is really that high. AGI strikes me as an extremely nebulous concept. Better to just list everyone at your office and bucket them with a guess of how soon you think AI will replace them. Or weaken their market power. This is what every corporate boss in America is already doing. I'm merely suggesting rather than hope a graph curves in our individual favor we try to act more collectively as a species. Of course, I don't hold my breath.
I also don't find myself compelled by the notion that the danger to humanity is "AGI". The true danger is as it always has been - each other.
> Just take your average office. Then ask yourself if the bar is really that high.
How many years away do you think we are from a “concierge” AI that can do the menial tasks handled by most personal assistants / program managers? Booking flights and hotels and coordinating employee availability?
> Why do they often make completely unintuitive decisions
Most likely because you haven't constrained their behavior in your prompt. You're making the assumption that they "understand" that using best practices is what you want. You have to tell them that, and tell them which practices they should use.
They already consistently fail to follow very simple and concrete instructions like "Please do not ever mock this object, always properly construct it in your tests", so I'm not sure how they're going to adhere to more vague and conceptual architectural paradigms. This is a problem with generative AI in general; image generation has similar limitations.
The capacity of the person prompting it to understand is the threshold they won't cross. They can squeeze the gap as much as possible by dumbing down answers or slowly ramping up information complexity but there is a limit to comprehension.
This is an interesting answer for questions about human agency and accountability/personhood, but I don't see how it leads to increased confidence in the role of the human as SWE.
If LLMs get good enough, one might be tempted to ask so what if most humans can't understand the output? Human civilization has by and large been a constant exercise in us collectively accomplishing more and more while individually comprehending less and less.
Our ancestors likely understood more about hunting live game or murdering each other than we do. Most of us do not consider that a great loss. Most of us living in the modern world depend on things we don't fully comprehend. I'm just not sure how this would lead to being reassured re the human as SWE.
We don't need as many hunters because we've domesticated sources of meat. We still need ranchers, butchers... an entire supply chain to get meat to consumers. We didn't remove humans from the loop, we just created specializations.
Software specialization might look very different in 10 years but I doubt that technically specialized humans will be completely removed from their professions. We might not be carrying bows and arrows anymore but we will be carrying the equivalent of a rope and a Stetson.
Ranchers, butchers... and factory farms. Most of the meat Americans consume has had very little interaction with a person until it is being devoured on the plate.
I appreciate your points. I agree with you that not all "technically specialized humans will be completely removed" but let's not pretend the comparison is going from a caveman with a spear to a cowboy with a lasso. If you concede it is likely to be very different at some point calling it SWE is no longer useful.
I think SWEs would be better off realizing they have enjoyed a relatively extreme level of privilege, and rather than trying to hold onto it, use what time they still have to advocate for a more egalitarian society, even if that means giving up some of their gains. Otherwise speaking of farming, the mass layoffs to come when software has been disrupting blue collar jobs for decades will really be a chickens coming home to roost moment.
Now you're arguing against your own analogy? Hunter was a ubiquitous position in human society prior to the domestication of animals: 50% of the workforce in hunter-gatherer societies. Today, 12 millennia after the domestication of wildlife, that number is down to 9-14% of the global workforce dedicated to the production, distribution, processing, and sales of meat (not including cooked food), according to Opus.
Considering that only 1% of the US workforce was a software engineer I expect similar workforce optimization to occur in software engineering specializations over the next 12,000 years. /s But seriously, it's never going to zero.
No one said it's going to zero. It doesn't have to go to zero for lives to change. Would you rather be a cowboy or a factory farmer? The latter are some of the least desirable jobs in the entire world. The fact that millions of people still do them isn't the point in your column you think it is.
Do you really want to live in a world where nobody understands the software that manages a nuclear power plant? Or medical devices? Or financial software? Or radio transceiver firmware? Even something as boring as a database, if not understood, could lead to disastrous effects if it's the government database managing people's IDs. And even if it all worked fine for years, what would happen if a bad actor influenced the models to generate code with security issues? If nobody can comprehend the output, how would anybody be able to reason about the danger? It's even more grim than this:
https://www.citriniresearch.com/p/2028gic
We live in a world with nuclear weapons. Somehow we all cope and get up every morning. I think you are missing the point - the world is already grim. It always has been. What about human affairs say in the last century alone makes you think human oversight is some panacea? The impetus for civilization was not some innate desire for financial systems or medicine. It was not having other humans murder you. The Leviathan is already here.
The article you shared has little to do with this. Questions of how to divide up the gains technology creates are separate from questions of the technology itself. Tbh I found what you shared so boring I could barely finish it. I already made an exhortation in this thread to support politicians who commit to erasing inequality. The idea that LLMs can only exist with inequality is nonsensical. The only thing grim about what you shared is the lack of political imagination. It's boring.
Your answer reminds me that my biggest gripe with this site and programmer forums in general is the lack of awareness of the breadth and scale of software development. I'm curious what you work on, because it doesn't sound anything like what I work on.
> Most of the time is spent coding, which encompasses typing, retyping, and retyping again. It also includes banging your head against the wall while trying to get one of your rewrites to work against an under-documented API.
I don't think I've experienced this to a large degree. Maybe early in my career. Most of my time now is spent formulating a solution, and time spent coding is mostly spent trying to compose my changes with the existing code in a way that is performant, reliable and meets the specifications.
This is far more true for junior and perhaps mid-career engineers, unless you're working in an extremely well-defined problem space (* see below).
When working as a SWE, the longer I did it (~30 years), the more of my time was spent understanding the problem, the edge cases, how to handle the edge cases, and how to do all of it affordably, on time, and within budget.
That's engineering.
What you're describing is "writing code". That's lower value than "solving the problem".
I imagine a response, "But agile development, etc."
Yep. Part of solving the problem often involves creating prototypes to determine the essential viability of the solution. But that's only part of it. Which prototypes do you write? How much time do you allocate to them before accepting it's a dead end (at least for now) and punting on it?
That's engineering.
Me probably coming across as a dick today? Well, I was diagnosed autistic a year ago, and I'm on extended sabbatical/unemployment (3 years now) due to autistic burnout. And masking is part of how I got the burnout.**
* Why would someone be paying for that when there is likely someone else already doing it? (Unless you're the rare person who hopes to "disrupt" the competition.)
** has me begging the question of why I write here at all. SMH. Why do I do what I do? No idea sometimes.
There's the saying "Any idiot can build a bridge; it takes an engineer to build a bridge that barely stands."
To put this another way, any idiotic LLM can write code. It takes a person with domain experience to understand what code to write, rewrite, or not write.
I've seen lots of organizations hollow out their internal competence in favor of outsourcing the skills. LLMs are the ultimate expression of that. There are people who say "you need to have people in your organization who understand how things work because they're the ones who solve problems!" and there are other people who say "focus on your core competencies! These problems you're worried about aren't your core competencies, so get rid of those experts, they're expensive and annoying; we can just sign a contract with an organization that'll know things for us."
At some point we all will identify exactly how much "seed corn" you need for the next season. We'll figure that out because we're starving, but at least we'll all know.
you've definitely been doing this longer than i have, but our outlook and recent experiences sound very similar. also been diagnosed recently, also on similar extended sabbatical/unemployment, also come across as a dick, also trying to mask less because burn out.
got an email address in my profile if you'd be interested in talking at some point about something, or even talking about nothing in particular. (i don't normally do this sort of HN networking stuff, i find it super cringe. but there we go).
Let's also not forget that a lot of the market edge of SWEs comes from knowing how to navigate these parts. The fact that you needed to be reasonably fluent in a language was already a barrier to entry, which meant that in better times new grads could earn six figures at their first job just for putting in that effort.
Maybe you will still be needed. That is one question. How well you will be paid and treated when the barrier to entry is now "I can think" is another. As the parent indicates, most people doing software are not doing things akin to pure math. I don't think most SWEs want that lifestyle anyway.
It's ok. You shouldn't fight the coming change. Instead use the time we still have to fight for more equal outcomes (vote for politicians that support UBI, Medicare for all). The longer you delude yourself that you are uniquely needed in an increasingly mechanized world the worse all our outcomes will be.
The barrier to entry to generating code may be "I can think", but the barrier to entry for solving hard, distributed/multi-faceted engineering problems still remains quite high - agents can't really do this still to a decent level of efficacy reliably.
The progress models have made in the last 5 years aren't convincing me they'll bridge that gap too soon, although I can see how some people are convinced by how decent agentic harnesses make things. I know it's really easy to get very hyped with the current state of the technology, but try to have a bit of skepticism.
Try to write a design doc before you implement something (which people find they need to do for LLMs to work at all anyway). You’ll find that you spend much less time actually writing code.
Write proper API documentation laying out the assumptions and intent, generate some good API docs, write a design and architecture document (which people find they need for LLMs to work at all anyway). You’ll find that you spend a lot less time reading code.
> which people find they need to do for LLMs to work at all anyway
Everything we have to do for AI to function well, would help humans to function better too.
If you take the things for AI, but do them for humans instead, that human will easily 2x or more, and someone will actually understand the code that gets written.
> If you take the things for AI, but do them for humans instead, that human will easily 2x or more, and someone will actually understand the code that gets written
This only works on high-trust teams and organizations. A lot of AI productivity gains are from SWEs putting in the extra effort because the results will be attributed to them. Being a force-multiplier for others isn't always recognized; instead, your performance will likely be judged solely on the metrics directly attributed to you. I learned this lesson the hard way by being idealistic and overestimating the level of trust that had been built after joining a new team. Companies pay lip service to software quality; no one gives a shit if your code has the lowest SEV rates.
Ah… that’s a reasonable point. Yes, the difference between a high-trust team and what you described is night and day. I suppose for those situations there’s a much bigger incentive to just throw AI at it, which explains why the big corporates love AI.
Getting the code into a state where it actually does what you want takes time - but a lot of that is research, testing, experimentation, documentation, etc. Those can be faster with AI assistance but you still need to bang on it enough to make sure it works right.
I am not, yet actual coding is a minuscule part of my workflow. The rest is roughly un-automatable by any LLM: politics, meetings, discussions, brainstorming, organizing testing teams, stakeholders and so on.
This is what big corporations look like, not some SV startups.
I agree. And that stuff is soul destroying. I have done it, and right now I work in a place a little smaller, but we get so much done without all the cruft. And we get it done better. I spend much more time writing code now (*) than at the big corps, and we do a much better job because we can iterate.
(*) Well, now Claude spends a lot of time writing code; I spend a lot of time designing and steering it. Claude can write remarkably sophisticated code with the correct steering.
>OP's formulation makes SWE sound like a purely noble enterprise like mathematics. It's more like an oil rig worker banging on pieces of metal with large hammers to get the drill string put together.
Those two formulations represent different developers' approaches to the same task. The former being developers who are much better at planning than the latter.
There are also those for whom that percentage is higher, let’s say 6-50%.
> I understand things and then apply my ability to formulate solutions
The AI is coming for that too.
You might just be lucky to be in circumstances that value your contributions or an industry or domain that isn’t well represented in the training data, or problem spaces too complex for AI. Not everyone is, not even the majority of devs.
People knocking out Jira tickets and writing CRUD webapps will end up with their livelihood often taken away. Or bosses will just expect more output for same/less pay, with them having to use AI to keep up.
Agree. It is just like 2 totally separate groups are arguing.
One is a very tiny slice of specialty/rare industries where code is critical but overall a small part of project costs. I can see that if code/software is 5% of the overall cost, even heavy use of AI for the code part is not moving the needle. So people in this group can feel confident in their indispensability.
The second group is much larger, peddling CRUD/JS frontends and other copy/paste junk. But as per industry classification they are just part of the same Coder/Developer/IT Engineer group. And their bleak prospects are not some future scenario; it is playing out right now, with tons of them getting laid off, and a whole lot of people with IT degrees and certifications not finding any jobs in this field.
We've had reasonable effectiveness for CRUD. It's mainly the UI toolkits we use, but the plumbing it can do quite well. It's not 100% vibecoding but certainly a significant accelerator for parts of the job.
I agree with the 2 separate groups theory, but I don't buy that the group that produces "copy/paste junk" is the much larger group. I think in most mega-corps, there is a huge existing code base, there are huge organizational challenges, and there is huge hierarchy with most people not being the junior juniors. 90+% of the work is "not coding." Probably way, way more if we include the middle managers. At startups, there is a lot of "copy/paste junk" but also often a decent amount of push the boundary new stuff. I don't know. I've been in the industry for 8 years now and it's been really rare to see the actual coding being the bottleneck or even the thing that takes the majority of the time.
I don't mean this as a snarky jab. It's coming for anything software. I've used AI to accomplish front end development and reverse engineer proprietary USB hardware dongles in C, then rewriting the C into Rust to get easy desktop GUIs around it. Backend APIs, systems programming, embedded programming, they all seem equally threatened it's just a matter of time. Front end is easy to see in the AI web front ends but everything else is still easy pickings.
You are describing the toy projects that had us all amazed end of last year. Large, maintainable software that can serve paying customers is in a completely different galaxy.
There's rather a big difference between reverse engineering already working code and forward(?) engineering working code from nothing so that confidence seems misplaced.
As a manager of a full stack team, we've found AI falls short a lot more on front end. It has its weak points on both front and back, but the problems with backend are quite easy to feed back into it -- needs more performance, needs to pass this security audit, needs to deal with xyz system. The problems with frontend are more like this is ugly, it's clunky to use, people don't like it. People without years of frontend experience tend to lack the vocabulary required to get AI to fix it, period, and it ends up going around in loops.
> I've used AI to accomplish front end development and reverse engineer proprietary USB hardware dongles in C, then rewriting the C into Rust to get easy desktop GUIs around it. Backend
That is not hard. It's just tedious and very slow to do manually. The hard part would be designing a USB dongle and ensuring that the associated software has good UX. The reason you don't see kernel devs REing devices is not because it's impossible or because it requires expert knowledge. It's because it's like counting grains of sand on the beach.
Whether something is tedious depends on the person and situation. If you're already an expert, you may find a lot of work that goes into your 4th USB device (especially if it's based on yet another chip and bespoke SDK) quite tedious, since lot of it is based on standard requirements/designs that you can't change.
You may also find re-ing stuff not tedious, due to what may be motivating you.
In any case, any work will have some things you just know how to do, or know what to do, but previously (before LLM agents) there was no easy way to plow through them without pressing a lot of keyboard keys over a long period of time.
It is irrelevant whether complex frontends would be easy for AI or not. To me the questions are: 1) how many unique, complex frontends are needed out of the total frontends that millions of sites out there need? 2) Will there be an increase in the need for such frontend engineers so that other displaced folks can land a job there?
I think it will be far too few to have any positive impact on IT engineers' overall job prospects.
But that's equally true for any type of system. Frontend isn't inherently easier than other systems, so I was just wondering why you singled it out. To me AI just seems better at backends and database design.
OK, my examples seemed biased against frontend, which was not the intention.
The thrust was overall job prospects for people in the software field. It is not that frontend is easy, but it is definitely easy to get into. Considering there are far more frontend developers than, say, C++ systems engineers or database designers, in sheer numbers they will be affected more.
Ah okay, that's fair. In my country boot camps aren't a thing, so frontend devs are rare and good frontend devs even more so; I think it depends on where in the world you are. We got an abundance of Java devs here that I fear more for.
There are periods of time where I might spend 80% of my time "coding", meaning I have minimal meetings and other responsibilities.
However, even out of that 80% of my time, what fraction is actually spent "writing code"?
AI can be an enormous accelerator for the time I'd normally spend writing lines of code by hand, but it doesn't really help with the rest of the work:
- Understanding the problem
- Waiting for the build system and tests to run
- Manually testing the app to make sure it behaves as I'd like
- Reviewing the diff to make sure it's clear
- Uploading the PR and writing a description
- Responding to reviewer feedback
There are times when AI can do the "write the code" portion 10x faster than I could, but if it's production code that actually matters, by the time I actually review the code, I doubt it's more than 2x.
>AI can be an enormous accelerator for the time I'd normally spend writing lines of code by hand, but it doesn't really help with the rest of the work:
> - Understanding the problem
> - Waiting for the build system and tests to run
> - Manually testing the app to make sure it behaves as I'd like
> - Reviewing the diff to make sure it's clear
> - Uploading the PR and writing a description
> - Responding to reviewer feedback
What part of those you think it doesn't help with?
They can make it unnecessary for you to understand.
Consider hash tables. Nobody implements a hash table by hand any more.
I've written some, but not in this century.
Optimal hash table design is a specialist subject. Do you know about Robin Hood hashing? Changing the random number generator's seed to discourage collision attacks?
A basic hash table starts to slow down around 70% full. Modern hash tables can get above 90% full before they have to expand.
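For anyone who hasn't seen it, here's roughly what the Robin Hood trick looks like. A toy sketch in Go that skips resizing, deletion, and seeded hashing, so don't mistake it for a production table:

```go
// Toy Robin Hood open-addressing table: on collision, the entry that is
// further from its home bucket wins the slot. This evens out probe
// lengths, which is what lets good tables run at 90%+ load factors.
package main

import "fmt"

type slot struct {
	key  string
	val  int
	dist int // distance from the key's home bucket
	used bool
}

type table struct {
	slots []slot
	mask  int // len(slots)-1; capacity must be a power of two
}

func newTable(capacity int) *table {
	return &table{slots: make([]slot, capacity), mask: capacity - 1}
}

// FNV-1a; a real table would mix in a per-process random seed here
// to make collision attacks harder.
func hash(s string) uint32 {
	h := uint32(2166136261)
	for i := 0; i < len(s); i++ {
		h = (h ^ uint32(s[i])) * 16777619
	}
	return h
}

func (t *table) put(key string, val int) {
	cur := slot{key: key, val: val, used: true}
	i := int(hash(key)) & t.mask
	for {
		s := &t.slots[i]
		if !s.used {
			*s = cur
			return
		}
		if s.key == cur.key {
			s.val = cur.val
			return
		}
		// The Robin Hood step: if the resident is "richer" (closer to
		// home) than the newcomer, evict it and keep probing with it.
		if s.dist < cur.dist {
			*s, cur = cur, *s
		}
		cur.dist++
		i = (i + 1) & t.mask
	}
}

func main() {
	t := newTable(8)
	for i, k := range []string{"alpha", "beta", "gamma", "delta"} {
		t.put(k, i)
	}
	fmt.Println(t.slots)
}
```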
Who keeps Knuth's Fundamental Algorithms handy any more? I own both the original edition and the revised edition. They're boxed up in the garage. I once read that book cover to cover. That was a long time ago.
That's not AI. That's solving the problem and putting it in a black box. That's how technology progresses.
That's obviously not what I'm talking about. If you're asking an AI to write an optimal hash table algorithm, something is clearly wrong. I'm talking specifically about understanding the business domain and problem you are trying to solve.
> That's not AI. That's solving the problem and putting it in a black box. That's how technology progresses.
The key word is solving. Meaning someone, after coming up with the solution, has taken the time to prove that it works well in all usual and most extreme cases. With their reputation on the line.
That's why you trust curl, ffmpeg, Knuth's books... but you don't trust a random cat on the internet. We don't trust AI, and the cost to review its output is not a great tradeoff compared to just thinking and solving the problem ourselves.
That may be true, I'm not gonna say one way or the other, but if AI comes for that then almost all knowledge work is effectively dead, so all that's left would be sales or physical labor.
I wonder though, can AI make the next JS framework? I mean that in sincerity: there was the leap from jQuery to React, for example. If an AI only knows jQuery and no one makes React, will React come out of the AI?
I think it won't be like assembly, because raw code takes more information to express than building blocks that carry denser information in them, kind of like how we use libraries and frameworks.
Yeah, that's my thing for my hardware projects: I'm not going to reach for an LLM, I want to write the code myself and be present. For something new I would consider using an LLM to generate something, like a computer vision implementation or something I don't already know. The end result: I would know how it works, just for a POC.
It can't. Framework hierarchy is largely based on social structure rather than pure technical merit. Otherwise React would've been displaced a long time ago.
People didn't leap from jQuery to React. It's a lot easier to imagine an AI looking at jQuery and [insert any server side MVC framework] and inventing Backbone.
The history of the last 250 years was moving from agriculture to industrial work to service work. Now the last frontier is starting to be overtaken by automation too.
(And in all of those transitions, millions were left behind without work or with far worse prospects. The people who took the new jobs were often a different group, not people who knew the old jobs and were already in their 30s and 40s.)
And what would be the new professions that uniquely require humans, when even thinking and creative jobs are eaten by AI? Would there be a boom of demand for dancers and chefs, especially as millions lose their service jobs?
> The history of the last 250 years is inventing new professions as old ones are automated away.
Even if this still holds true ("past performance is no guarantee of future results") the part about it that people handwave away without thinking about or addressing is how awful the transitional period can be.
The industrial revolution worked out well for the human labor force in the long term, but there were multiple generations of people who suffered through a horrendous transition (one that was only alleviated by the rise of a strong labor movement that may not be replicable in the age of AI, given how it is likely to shift the leverage of labor vs. capital).
If you want to lean on history as an indication that massive sudden productivity changes will make things better for humanity in the long run, then fine, but then you have to acknowledge that (based on that same history) the transition could still be absolutely chaotic and awful for the lifespan of anyone who is currently alive.
This is the kind of sleep walking that’s about to walk humanity into the next dark ages.
My parents say a lot of stuff like this. They tend to gloss over the untold suffering, great depressions and world wars that took us to get here.
The planet's resources were also not at risk of running out. As the world is min-maxed by billionaires and the lower classes are drained of all capital, they will soon move to fighting each other for resources. The future is looking pretty grim in even the most optimistic of scenarios.
Doordash and similar are experimenting with autonomous/remotely operated vehicles, and porn will get decimated once good enough uncensored video gen AI becomes available. Those don't sound like viable career choices either.
It's happening, but there's no law of the universe that says it has to be 1:1. Why are you so confident in this regard? 250 years is a very small slice of human history and could easily be the outlier.
Yes, but if/when that happens, it won't just affect software engineers. An AI that can do that can replace any white collar worker.
> People knocking out Jira tickets and writing CRUD webapps will end up with their livelihood often taken away.
I'm not sure anyone is actually working on those. People talk about spending all day writing CRUD apps here, but if you suggest there are already low code tools to build those, they will promptly tell you it's too complex for that to work.
>Yes, but if/when that happens, it won't just affect software engineers. An AI that can do that can replace any white collar worker.
Yes. Yes, that's exactly what we're going to see, and more swiftly than people are generally comfortable with. What are we going to do with all those cubicle dwellers?
a new paradigm of 3 day work weeks. share the salary of the days off with those less automatable, and work to automate everyone.
I wish some sort of discussion like this could happen where the workers of the world get to see some of the gains of a new technology more immediately.
If "the state" wants to maintain legitimacy and protect its citizenry (one of the primary promises of a state) to avoid a period of social unrest, the likes of which has been unheard of for several generations, I think something like this should at least be a part of the discussion.
I don't think it will happen. For one thing, it hasn't in the US since the introduction of the forty hour work week in 1940.
But beyond that, I don't think most people want a three day work week. They would rather work five days and get the extra money. I worked at a company that did government contracting. We had a couple quarters without much in the way of orders, so instead of laying people off like you'd normally see in that situation, the company decided to go to a four day week, with a commensurate cut in pay.
I was thrilled, as a young single guy, to get Fridays off. I rented a room in someone's house and hit my monthly nut in about two weeks. But most of the people I worked with hated it. Some of them quit. A lot of them both needed the money and also had no idea what to do with themselves on that extra day.
It is the lament of every generation of humans to think that they are the pinnacle of everything that has come before. We are just at the start of the so-called AI era; many very smart people coming up still haven't really gotten their hands on all of the material available from a hardware and software standpoint. We are still at the early stages.
I am very optimistic. I just wish I was younger, junior high or high school age, to take advantage with my current resources. Damn... The oldest lament in the books.
There are people and companies out there releasing entire vibe coded projects and for some upwards of 80% of the code they develop is AI-assisted/generated. Since around the end of 2025 and models like Opus 4.6, the SOTA has gotten good enough to work agentically on all sorts of dev tasks with pretty good degrees of success (harnesses and how you use them still matters, ofc).
> There are people and companies out there releasing entire vibe coded projects and for some upwards of 80% of the code they develop is AI-assisted/generated.
I mean this is just fingers-in-your-ears "LA LA LA I CAN'T HEAR YOU!!" stuff.
I still have a job so AI hasn't taken that yet. But the suggestion it's "no closer" is ridiculous. At least in my life/career/office this last 12 months seems to have been a real inflection point in how AI is being used for software development.
It feels like it's just around the corner. But when you turn the 20th corner and it's still behind the next one, maybe things are a bit different from what they seem and from what our clueless emotions make us believe.
Long term it's bleak, but short/medium term not so much: if I get fired it won't be an LLM replacing me but rather company politics, budget changes, etc., which has been the only real (very real) risk for the past 15 years too, consistently. But it helps to not work for a US company.
5+ years in the software world is like 30 years in others... So, given the lacking use-cases and the humongous amounts of capital already wasted on chatbots, it's more like "we" are closer to closing curtains than to "just started"...
Discovery of the best solution in a problem space is not generative but only verificative. Meaning: the LLM can see if a solution is better than another, but it can't generate the best one from the start. If you trust it, you'll get sub-par solutions.
This is definitely an agent problem instead of an LLM problem. Anybody got something explorative like this working?
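For what it's worth, the usual shape of an attempt at this is best-of-N sampling with an external verifier: generation is cheap and fuzzy, and verification is what picks the winner. A minimal sketch, with everything hypothetical for illustration (a real harness would call an LLM in generate_candidate and run tests or benchmarks in score):

    import random

    def generate_candidate(rng: random.Random) -> list[int]:
        # Stand-in for an LLM proposing a solution; here, a random ordering.
        candidate = list(range(10))
        rng.shuffle(candidate)
        return candidate

    def score(candidate: list[int]) -> int:
        # Stand-in verifier: cheap to run, and it only judges quality
        # after the fact. Here: count adjacent pairs already in order.
        return sum(a < b for a, b in zip(candidate, candidate[1:]))

    def best_of_n(n: int = 64, seed: int = 0) -> list[int]:
        # Generate many candidates, keep whichever the verifier rates
        # highest. This leans on exactly the asymmetry above: verifying
        # a solution is easy, generating the best one outright is not.
        rng = random.Random(seed)
        return max((generate_candidate(rng) for _ in range(n)), key=score)

    print(best_of_n())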
>> I understand things and then apply my ability to formulate solutions
> The AI is coming for that too.
If this is true, then you'd have to conclude that AI is coming for everything. I'm still not convinced by that. But I am convinced that the part of software development that involves typing code manually into an IDE all day is likely gone forever.
It really doesn't have to come for everything to feel like it's taking everything. If it eliminates 10% of white collar jobs over the next decade, the impact will be felt everywhere.
I struggle to understand the logic (in general, the way people are talking); normally efficiencies come with increases in production, scale, and use cases.
So if 10% of lawyers get AI'd away, let's say, the remaining 90% are 1.1x+ as efficient and also up against other lawyers enjoying the same… work might go up. And on the customer side there is sooooo much BS with lawyers, but if both lawyer and customer can communicate faster or better with the LLMs, we should see more cases with better dialog and case handling. Again, the total amount of lawyering could go up a lot. And then we have the cases that were prohibitive without the LLMs, now possible for big money. Better, LLM-empowered lawyers should be able to create new and more lawyer work.
As it stands I see people selling services that are subsidized by VC, template jobs we’d be doing faster with copy paste but it’s not copyright infringement when OpenAI does it, and a rush for valuations to soak up VC because the business model isn’t there. I’m seeing a huge uptick in visual bugs on large commercial platforms and customer facing apps, and don’t feel OpenAI is gonna kill Office anytime soon… or Chromium… or Steam… or emacs…
Call me an optimist, but I think those LLM pump-and-dumpers are creating a wave of fear that would look quite different if they weren't lying and trying to boost an IPO. GPT-2 was too dangerous to release, lul, and the class action suits are just getting started.
An actual lawyer-replacing tech company should sell lawyering for infini-money, not pens that'll totally 10x your lawyering (bro).
And what do those 10% of lawyers do? Every other industry also got reduced by 10+%; it's not like they have a job elsewhere.
So.... they just starve in the streets?
Even if some other, arguably better job comes along, would they retrain for it? (You can say yes, but take a look at the long history of people choosing to join a cult and vote for an orange moron instead of learning a new skill).
Either you're convinced you won't be too badly affected and will gladly watch huge swaths of people suffer, or you're deluded enough to think that it will really, truly be different this time. In any case, I hope you get the worst results of what you preach.
Even if AI advances continue, for quite a while there's likely still going to be the 'Steve Jobs' role. That is, even if AI coding agents can, in the future, replace entire teams of SWEs, competently making all implementation decisions with no guidance from a tech-savvy human, the best software will likely still involve a human deciding what should be built and being very picky about how, exactly, it should externally behave.
I don't know if it makes sense to call that person an SWE, and some people currently employed as SWEs either won't be good at this or aren't interested in doing it. But the existing pool of SWEs is probably the largest concentration of people who'll end up doing this job, because it's the largest concentration of people who've thought a lot about, and developed taste with respect to, how software should work.
This matches what I'm seeing. I've been building software for a long time, but building more now with AI than I ever could with a traditional team. But the throughput that's helpful is from knowing what to build and what tradeoffs matter. The AI doesn't have that. It's a force multiplier on experience, not a replacement for it.
Yes, AI is coming for solution formulation, absolutely, but not all of it, because it is actually a statistical machine with a context limit.
Until the day LLMs are no longer statistical machines with a context limit, this will hold. Someone needs to make something that has intent and purpose, and evidently that won't happen by adding another 10T parameters to the LLM.
> because it is actually a statistical machine with a context limit.
So are humans.
Machines have surpassed humans by magnitudes in many capabilities already (how many billion multiplications can you do per second?)
And I argue that current LLMs have surpassed many of my capabilities already.
For example GPT/Opus can understand and document some ancient legacy project I never saw before in minutes. I would take a week+ to do the same and my report would probably have more mistakes and oversights than the one generated by the LLM.
AI advocates are _way_ too confident about the nature of human cognition. Questions that have been debated by philosophers and cognitive scientists for decades are now "obvious" according to you people, though you never provide any argument to support your statements.
We are not pre-trained using the summary of all human knowledge over all of history. Yet we make certain decisions with much more ease.
We are much more limited, but we fundamentally work differently. Hence adding more parameters, like certain companies are doing, isn't necessarily going to help. We need to rethink how LLMs work, or how they work in tandem with something completely different.
I think it's doable; I just don't believe it's LLMs, and I don't think anyone now knows what it is.
That is not what the education system does. That's an obvious distortion of reality. LLMs are trained over billions of documents to statistically predict the next word and thereby gain an understanding of language; this statistical processing is meant to mimic humans' natural language learning ability. And there has been continued evidence of the limitations of this approach for accurately mimicking the totality of human cognition.
>Machines have surpassed humans by magnitudes in many capabilities already (how many billion multiplications can you do per second?)
Do you have any idea how many calculations it takes for a human to put a ball through a hoop while running across a court?
It could be millions or billions in a second. Manifesting consciousness, coordinating body movements, and everything else all at the same time takes calculations.
You may not be aware that your brain is doing multiplication, or any other kinds of math, constantly, but it is.
I agree we do some marvelous things in sports, but if we extrapolate from this table tennis robot, it's clear machines can/will do just as well there too.
That table tennis robot is not conscious. That table tennis robot does one thing well. A human is capable of far more. There is far more going on for a human playing table tennis than for a robot. It doesn't matter if the table tennis robot plays table tennis better; it can't also play hockey, soccer, football, basketball, chess, polo, baseball, or many other things one human can do.
The human condition is nothing but a massive amount of calculations under the hood. You don't feel it, or understand it, but it's there. Everything in nature is math, every physical phenomena has a cause and effect rooted in mathematics, and it's no surprise that humans are great at subconsciously calculating myriad things on-the-fly, as life is happening around us.
Yours is a “God of the gaps” argument. You will remain technically correct (the best kind of correct!) long after the statistical machine has subsumed your practical argument, context limit and all.
I fall into the "pessimistic heavy user" camp. I burn thousands of dollars' worth of SOTA tokens monthly, but that just makes me more acutely aware of the limitations, the amount of work I need to do to work around them, and which decisions I should reserve for myself instead of trusting the LLMs.
I can give you the exact mathematical formula used to statistically optimize the output of a neural network from input examples. Can you do the same for the brain?
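To be fair to that point, there is a standard answer for the network half. Assuming the usual next-token maximum-likelihood setup (a generic sketch, not any particular lab's recipe), the loss and the gradient-descent update, with learning rate \eta, are:

    \mathcal{L}(\theta) = -\sum_t \log p_\theta(x_t \mid x_{<t})
    \theta \leftarrow \theta - \eta \, \nabla_\theta \mathcal{L}(\theta)

No comparably explicit formula exists for what the brain does, which is the point.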
To some degree yes, in practice, not so much. In practice you have to be in the world, talk to people, know how to talk to people, know how to listen, and be able to understand the difference between what people say and what they actually need. Not want. Need.
This was something I learnt in my very first job in the 1980s. I worked for someone who did industrial automation beyond PLCs and suchlike. He spent 6 months working in the company. On the factory floor, in the logistics department, in procurement, in accounting and even shadowing the board. Then he delivered a proposal for how to restructure the parts of the company, change manufacturing processes, and show how logistics and procurement could be optimized if you saw them as two parts in a bigger dance.
He redesigned the company so that it could a) be automated, and b) leverage automation to increase the efficiency of several parts of the business. THEN, he started planning how to write the software (this was the 80s after all), and then we started implementing it.
Now think about what went into this. For instance we changed a lot of what happened on the factory floor. Because my boss had actually worked it. So he knew what pain points existed. Pain points even the factory workers didn't know how to address because they didn't know that they could be addressed.
I was naive. I thought this was how everyone approached "software projects". People generally don't. But it did teach me that the job isn't writing code. It is reasoning about complex systems that often are not even known to those who are part of them.
And this is for _boring_ software that requires very little creativity and mostly zero novelty. Now imagine how you do novel things.
> People knocking out Jira tickets and writing CRUD webapps will end up with their livelihood often taken away. Or bosses will just expect more output for same/less pay, with them having to use AI to keep up.
You make it sound like it is a bad thing that certain tasks become easier.
I spent a lot of time writing CRUD stuff, because the things I really want to work on depend on it. I don't enjoy what is essentially boilerplate. Who does? If you can do the same job in 1/20 the time, then how is this a bad thing?
It is only a bad thing if writing CRUD webapps is the limit of your ability. We don't argue for banning excavators because they put people with shovels out of work. We find more meaningful things for them to do and become more productive. New classes of work become low-skilled jobs.
If you have been doing software for a while, you are probably doing some subset of this. But these things are hard to articulate. It is hard to articulate because it is not something we think about. Like walking: easy for us to do, hard to program a robot to do it.
>To some degree yes, in practice, not so much. In practice you have to be in the world, talk to people, know how to talk to people, know how to listen, and be able to understand the difference between what people say and what they actually need. Not want. Need.
One person needs to do that. What about the other 100, who aren't doing that currently to begin with, but are doing the AI-automatable work?
We used to say that (not long ago, even) about the code-writing part. Why do we believe that LLMs are going to stop there? Why do we think they won't soon be able to talk to people, listen, and determine what they need? I think it's mostly a cope.
If they can do those things they can effectively replace any white collar job. That’s about 45% of the workforce. Societies tend to collapse around 25-30% unemployment.
Imagine 45% of higher than average paying jobs gone.
If that happens we’ll either figure out a new economic system, or society will collapse.
Also, saying robots are walking just fine is misleading for any definition of "just fine" that is anywhere near as good as a human.
Look at how the billionaires are talking about AI: Their clear, unambiguous goal is basically to replace all white collar "knowledge" jobs. And there's currently nothing regulatory that's stopping them--they just need to wait for the state of the art to improve. Once AI is "good enough" if it ever is, they won't even think twice about 45% unemployment. What are we unemployed workers going to do about it? There's no effective labor organization left. Workers have basically no political power or seat at the table. We're not going to get violent--the police/military are already owned by the billionaire class. We're just going to eventually become economically irrelevant and die off.
> We're just going to eventually become economically irrelevant and die off.
As harsh as it may sound, it seems rather likely to me. It is not like s/w engineers have helped struggling workers in other sectors, beyond sanctimonious "learn to code" advice. So software folks can't expect any solidarity or help from others.
The fundamental issue isn't unemployment due to automation, but the fact that society cannot benefit from unemployment.
It should be something for us to celebrate, because it means greater freedom for humans to pursue something else rather than spending time doing drudgery.
Put it another way, the issue is that resources are not shared more equitably. This is especially egregious considering that LLMs are trained on all human knowledge. We've all been contributing to this enterprise, and what we may end up getting in return is unemployment.
45% of folks sitting on their hands are going to have the free time to talk, and this group of people are skilled at organization. Are you planning on throwing your hands up and passively accepting whatever comes your way?
And at least in the US they have >45% of all the small arms weaponry. There is no bunker strong enough nor private army big enough if 100M people come for you.
They're probably betting that the technology they will need to defend their bunkers, think autonomous kill-bots or whatever, will emerge before people start to riot.
Or they're planning to build an Elysium-like colony in the ocean or space, to keep the billionaire class far from danger.
I get that it is popular to hate billionaires these days, but realistically, they did not get to be billionaires by being stupid. It runs directly counter to their own interests to induce anything like 45% unemployment. They will get poorer, the world they live in right along with the rest of us will get noticeably shittier, etc.
More likely they figure out what to do with a bunch of idle talent. Or the coming generation of trillionaires will.
It's important (and calming) to understand that since the Industrial Revolution started ~250 years ago, we've automated away most jobs several times over, while employment levels have stayed pretty constant.
"Automating half the jobs" is the same as "double productivity per worker".
When the doubling happens in 5 years rather than 50, it might be more disruptive, but I'm convinced we're on the verge of huge improvements in human standard of living!
What in the current state of world affairs outside of IT do you think is indicative of that potential for huge improvements in human standard of living?
If we double productivity per worker, we have twice as much wealth on average.
I know there are angry people convinced that this will all be consumed by billionaires and jews, but historically that is not at all the track record of the last 250 years, and I expect that to continue.
If you are going to bring up history you should really look into what it took to redistribute wealth from oligarchs in the past.
The fact that oligarchy now has more resources than ever in the history of humankind, a means to mass surveillance and generating mass propaganda, those wealth redistributions are looking much MUCH harder to accomplish.
Yea, historically it will inevitably happen. Realistically it will be after the new version of feudalism and the dark ages. So strap in; the next 400 years aren't looking too good.
>If we double productivity per worker, we have twice as much wealth on average.
That's not true. There are other factors at play such as demand.
If we make the average IT worker twice as productive, that doesn't mean now every IT worker is being paid twice as much, because most users aren't going to care if there are twice as many options on the app store, or twice as many bug fixes per release.
Consumption in a society will always be roughly equal to production.
There are differences due to import/export balance, investments, government borrowing etc, but as a first approximation, if GDP increases by 10%, consumption will rise by a similar amount.
About your IT worker example:
Let's say s/he produces $150k/year in value and is paid $140k. If AI makes them produce $300k of value, they may not automatically get a raise. But it becomes very attractive for another employer to hire them for $200k or $250k, or even $280k.
In the medium/long term, I don't see why wages wouldn't stay roughly proportional to produced value.
We never noticed how easy the code writing part had already become because it happened slowly. Through mechanical means, through the ability to re-use code, and through code generation.
Heck, even long before LLMs about 10% to 30% of my code was already automatically generated. By tooling, by IDLs and by my editor just being able to infer what my most likely input would be.
> We have robots walking just fine now, by the way.
I don't think you got the point I was trying to make.
True, but I guess I see a distinction between scaffolded/templated boilerplate or autocomplete and actual application logic. People have generated boilerplate from templates for ages, as you say. RoR is maybe a pretty good example, but there wasn't even early-days AI involved in doing that.
>> We used to say that (not long ago, even) about the code-writing part. Why do we believe that LLMs are going to stop there? Why do we think they won't soon be able to talk to people, listen, and determine what they need?
Because they are currently "generative AI" meaning... autocomplete. They generate stuff but fall down at thinking and problem solving. There is talk of "reasoning models" but I think that's just clever meta-programming with LLMs. I can't say AI won't take that next step, but I think it will take another breakthrough on the order of transformers or attention. Companies are currently too busy exploiting the local maxima of LLMs.
Walking was given as an example of "hard to program a robot to do it" by GP. Well, now we have robots that can walk.
What evidence is there that LLMs have hit a ceiling at being able to do things like talk to users or stakeholders to elicit requirements? Using LLMs to help with design and architecture decisions is already a pretty common example that people give.
Something like five to ten years ago, when AI hype was starting to hit media, one of the claims was that AI would come for middle-management first. Since middle-management can generally be described as collecting information from underlings and reporting information to upper management, their work was supposed to be easy to automate with AI. As far as I can tell, this hasn't proven to be true at all, and we software engineers proudly wrote ourselves out of work by constantly publishing our source code and discussing it openly.
I agree with the statement and think a lot of people miss this, but I also wonder how many people don't care for "good"; they only care for "good enough".
No, I never believed in fully automated taxis from Tesla, but as the LLMs improve, my personal estimate for the date of human-level AGI is rapidly moving toward "present". Before GPT-2 I had it somewhere around 2100; at GPT-2 I thought maybe by 2060 if we were lucky. Now I think it is 2035 or maybe even sooner.
I like to see the optimism, even if I don't share it. I think it's incredible hubris that humans think we are about to reinvent our own level of intelligence, just because we made a machine that talks pretty.
Your own comment in my timeline is 7 years out of date. GPT-2 talked pretty, that was its whole thing. If you are trying to claim there's no difference between 5.5 and 2 you are delusional (hallucinating?).
I think I was fairly clear, I said that I think it is hubris to think what we have created is anything even slightly like human intelligence. It talks very pretty (a lot of work has gone into this aspect in particular), and it does demonstrate the extent to which, as individuals, most of us do not have especially unique thoughts nor problems to solve. It exposes how quickly humans jump to anthropomorphizing pretty much anything.
Is it a handy tool? Yep! I use it every day. But it is laughable to think this is the path to AGI. The most common counterargument on HN is some variation of "but you can't prove that this isn't just like how a human thinks". A conspiracy theory at best, just reinforcing the fact that we know very little about how even simple non-human brains function.
You do you. I stick to the simplest reasonable definitions. From my perspective we are already in AGI, just the intelligence isn't quite on human level yet across the board.
I have yet to see anyone saying it's just like a human, so it looks like you are mostly hallucinating that too.
You didn't address my point on GPT-2 vs 5.5. Your only relevant claim, I assume, is that 5.5 talks very pretty versus 2 talking just pretty. Well, you have to be blind to claim that's the main difference.
>> Or bosses will just expect more output for same/less pay, with them having to use AI to keep up.
Anecdotal evidence to support this.
I work with both dev and design teams. Upper management has already gone through several layoffs and offshoring of the two dev teams I work with. The devs they did keep were exactly what you said: the capable ones who reliably closed their Jira tickets. Never missed a deadline for building their features or components. And now? Their work has tripled, and the only help they get from management? "Start to figure out how to leverage AI; we're going to be in a hiring freeze for the next 10 months."
The double whammy of losing onshore team members and not getting any help from management to fix the problem they just created and essentially just telling them to figure out how to use AI to keep up is pretty staggering.
I would echo what one of the devs told me: "If this is the new 'AI era' then you can count me right the fuck out of it."
I agree in principle, but I think the 2-5% estimate is extremely low. I could be sold on most developers spending ~25%, up to 40% of their time on code. But very few people are spending 2% of their time on it. Unless you're some sort of super senior staff / advisor to the CTO at a gigantic company, which has already placed you on rare terrain.
Most people overestimate how much time they spend "writing code".
I have interviewed a ton of people in my career, and when I ask "how much time did you spend writing code at your last job?", the more junior the person, the more they overestimate the time spent writing code (some would say 90%!). Once they joined, I was able to see how much time they really spent writing code, and it is almost never more than 30%.
Mostly because the code is only the final output. You spend most of your time doing research, talking to people. Working on Quarterly OKRs, going to meetings etc.
If you just write code, you are either an extremely junior person who works on things trivial enough not to require research, or you are deluded and don't realize you spend most of your time doing other things.
If you're reading this and that matches your experience as an IC SWE whose job is ostensibly developing software... you're either trapped in a very atypical org, or you're heading for a PIP.
I know an accomplished CS professor, ACM fellow, cited in Knuth's TAOCP (as well as being an easter egg!), who still hunt-and-pecks. In fact, hunt-and-pecks incredibly slowly.
While this is a witty reply, most people are working on corporate CRUD apps. For us, I still follow Jeff Atwood's advice from a 2008 blog post: "We Are Typists First, Programmers Second" Ref: https://blog.codinghorror.com/we-are-typists-first-programme...
I've always told my Jr Engineers to "think twice, code once".
If I gave them a task and they immediately started typing it out, I would tell them to stop typing and ask them to explain to me what they were doing; they'd often just spit out what they thought the code should do, and I'd often point out edge cases they missed and would have missed had they just spit out code and a PR, wasting everyone's time. I would also insulate them from upper management to give them time to actually think (e.g. I wouldn't be coding so they could think then code).
To your point and to the GP's point, and one point I keep raising with LLMs: "typing is not where my time sinks are".
That's very true, which is why I find it insulting that so many AI proponents use the word "typing" to refer to writing code. It carries an implication that if you enjoy writing code by hand, you enjoy a mindless activity.
A former colleague of mine used to work for a boss who would periodically stick their head into the office where the programmers were and yell "I can't hear typing! Why are you not working!?".
The reason I just remembered that is that the other day they proudly announced that everyone in their company would now be vibe-coding exclusively.
I agree in some ways, but I think this also overlooks that while your job might be like that, many decidedly "developer" jobs are not like what you describe, which is more engineering. Many people are able to have a career making basic HTML website changes. Are they not developers? Will their job not potentially be replaced by an AI that can make that change in seconds?
It’s weird that people always seem to argue the extremes when reality is jumbled mess in the middle. Will developers lose jobs to AI? Without question. Will many “developer” jobs be eliminated because of that? Without question. Is it probably a really bad time to think you can go from your retail job to fixing people’s website as a lifetime career move? Yeah, probably not the best idea. Would it be smarter to focus on becoming a “Software Engineer” instead of a “Developer”? Yes usually. Does that mean it is a bad idea for EVERYONE to choose to become of developer? No, and that would be a dumb thing to argue.
We're still going to need developers and definitely engineers, we are just going to need fewer of them in their current form, just like we needed fewer saddle makers, farriers and blacksmiths. We didn't stop needing Horse Mechanics, we just needed fewer of them because we needed Car Mechanics. Some of those skills transferred, some didn't.
I remember being that kid in high school who ran hard at math and logic problems, which contributed to me being very technical and to learning to push through painful mental challenges on the regular. Not many of my graduating class went on to become engineers, for good reason: it isn't easy work by any means, and I'm guessing it's quite draining for people who don't use their brains like we do.
So while AI will change the industry I don't see any reputable company firing the smartest ones in the room for junior level intelligence.
Even with it advancing someone has to be responsible for when it screws up which we know it will.
This answer makes two big assumptions that haven't been proven out yet.
- Understanding code without writing it is as viable as understanding code that you've worked with directly or indirectly
- Businesses care that you understand code
I really doubt the first one. Traditionally, understanding a code base in large part came from working with it intimately and building that muscle memory. The idea that understanding code by reading it is as good as understanding it from writing it, in my opinion, is not realistic.
Whether businesses care that their engineers (whom they increasingly view as monkeys at LLM typewriters) understand the code remains to be seen. I don't think they particularly care whether their code runs slow and is buggy so long as it works just enough to churn out features and continue to pull in income.
> The idea that understanding code by reading it is as good as understanding it from writing it, in my opinion, is not realistic.
As one of those developers who has written almost no significant code by hand since November 2025, but has produced a great deal of working software, I still understand the majority of the code I've produced just as well as if I'd typed it myself.
I may not be typing it myself, but I'm manipulating it constantly. It's not as simple as "reading" it - I'm reading it, executing it, figuring out refactorings for it, having tests built for it, having documentation built for it, sometimes writing that documentation myself, spinning up example scripts that use it, then building new code that depends on that previous code.
It's that act of exercising the code that gives me confidence that I understand it.
On the surface it sounds weird - why would this be?
Possibly because building a system is not a one-shot step, but a process of many iterations, each of which involves experiments in production, and gaining more learnings. So at the end of the process, you don't just have N lines of working code, but also N lessons learned along the way. So presumably with the AI process we miss out on half the value.
Now the going thesis is that this extra value is unnecessary if we take the plunge and don't look back. My gut says the answer is somewhere halfway, I guess we'll see.
Isn't the long term trend just that we don't need as many engineers, not that there will no more software engineers?
There's another, different loop I keep seeing, which is:
- Company A lays off engineers citing AI efficiencies
- People say it's because of overhiring during 2020
- Company B lays off engineers citing AI efficiencies
- People say it's because it was never a good business
- Company C lays off engineers citing AI efficiencies
- People say it's because there's a recession
I guess to cite a counterexample, unemployment is still super low and software jobs are still holding up. But the bear case is that eventually 5% of people will be able to do what people do today, and the demand for software won't grow at the same pace.
If company A is Amazon, company B is Ubisoft, and company C is Oracle, then I think it's very likely there isn't any pattern or "loop" here and it's legitimately just 3 different companies in 3 different situations doing layoffs for 3 different reasons but all 3 reaching for the same PR playbook. "We're leveraging AI to increase productivity" is the new "we're streamlining our business and focusing on our core products".
I laughed when I read this, but there is something to it. I like to say "intellectual relaxation" or take a break. Sometimes getting up from your desk to do some mindless admin task like photocopy a document for HR can free up your mind. If we were line workers at a factory, this would be mandated breaks. Business/Financial newspapers and factory executives love the old quote: "With robots, they never need a break, never need holiday, and can work 24x7." With the advent of agentic LLMs, a tiny fraction of that reality is leaking into the white collar world.
It's definitely theoretically possible, but not there yet. I use Cursor, Claude (Opus 4.7), and several proprietary LLMs/LLM frameworks at my job. The institutional knowledge I have wouldn't fit in the context window, and AIs lack my mental index/intuition of where to look for answers. When my AI makes a PR, I generally have to make some important changes, without which its solution would be fundamentally broken. AI also cannot be trusted to make the right business tradeoff decisions.
Many things at my software engineering job are like this, which require constantly changing human institutional knowledge that is almost always undocumented, or changing so quickly that it isn't relevant anymore. By the time you decide to automate it, the process changes. Tribal knowledge used to be something I hated seeing senior engineers keeping to themselves, but now it seems like an asset.
It sounds plausible to me since this is pretty on par with most other engineering disciplines. I'm a civil engineer. My responsibility is ultimately mostly to produce a constructable plan set. I spend far less than 5% of my time drafting or modeling.
For those who claim to be developers who code no more than 5% of their time and resort to arguments like "we're already not writing machine code by hand for 50 years, how is AI different from a higher level language?", it's not commenting, it's shilling for the AI corpocracy on HN.
>> "we're already not writing machine code by hand for 50 years, how is AI different from a higher level language?"
I never got that argument. Compilers are deterministic, formally specified algorithms. If you understand what a compiler does, you can have a pretty good idea of what it will produce. If it doesn't do that, it's a bug. The definition of correctness is well defined by semantic equivalence.
LLMs are none of that. An LLM is a fuzzy system that approximates your intent and does its best. I can make my intent more and more specific to get closer to what I want, but given that it's all just regular spoken language, it's still open to interpretation. All of that is still quite useful, but I don't get the assembly language comparison here.
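A toy way to make the determinism point concrete (purely illustrative; both functions are hypothetical stand-ins, not real tools):

    import random

    def compiled(source: str) -> int:
        # A compiler is a function: the same input always yields output
        # semantically equivalent to the source. No sampling involved.
        return eval(source)  # stand-in for actual code generation

    def llm_generated(source: str) -> int:
        # An LLM samples from a distribution over plausible outputs: the
        # same input can yield different results on different runs.
        return eval(source) + random.choice([0, 0, 0, 1])  # sometimes off

    print(all(compiled("2+2") == 4 for _ in range(100)))       # always True
    print(all(llm_generated("2+2") == 4 for _ in range(100)))  # usually False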
Because compilers are only deterministic when using ahead of time compilation, without profiling data, and always the same set of compiler flags.
Introduce dynamic compilation, profiling data, optimization passes, multiple implementations, ML driven heuristics, and getting deterministic Assembly output from a compiler starts to get harder to achieve.
You are right about that, but that's talking about what you generate, not what the output does. My point is that compilers are still designed to preserve semantic equivalence. Semantic equivalence makes sense here because there are well-defined semantics for both input and output. That bit is supposed to be deterministic. If something breaks that, it is a bug.
I just don't think comparing with compilers is a good argument.
If you spend 95% of your time on that stuff, you better be working on like critical infrastructure where nothing can go wrong, otherwise you are in an incredibly dysfunctional company.
I agree it would be absurd for it to take 95% of your time.
I have, however, seen that it takes a lot more time than one would think.
I did some contracting work for a severely dysfunctional meeting heavy organization and it was about 2 hours of meetings for every hour of real technical work!
Ah yes, agreed; if it's more than 90%, it just signals to me that a developer's skills are probably being wasted too much on business/coordination stuff.
But I guess if we mean actual time tapping your keyboard making code, then it's true some days for senior+ devs, but definitely not for technical work overall.
Even when it’s not dysfunctional, you spend a lot of time on communication and reading stuff other people wrote (including code). It’s very rare to work in isolation.
I guess it depends on what you feel coding is. To me it's the architecture planning and reading other people's code, not just writing code. If we say it's just typing, then 95% is not absurd, no.
> it depends on what you feel coding is. To me it's the architecture planning and reading other people's code, not just writing code
And that would be where we disagree. I don’t read code to look at code. When I’m reading code, I’m looking for the contracts to follow when interacting with a system. It would be nice if it were documented, but more often than not you have to rely on code.
It’s very rare that I plan with a technical mindset. Yes I use the jargon, but it’s all about the business needs. Which again create contracts.
Same with writing code. Code is like English for me. If I don’t have a clear idea on what to write, I stop and do research (or ask someone). But when I do, it’s as straightforward as writing a sentence.
Huh? So you don't research whether something is technically feasible before you promise your stakeholders a delivery time/price estimate?
We all do the same stuff; the disagreement would just be what you feel coding is, and whether you think technical work is the same thing or a superset. If you as a software dev aren't hands-on with planning or technical work for more than 5% of your time, you are basically a PO with a programming hobby.
> So you you don't research if something is technically feasible before you promise your stakeholders a delivery time/ price estimate
I believe 99% of requests are not about what’s technically feasible. And the rare time I encountered one of those, my answer has mostly been “you don’t have enough resources to try solving that problem”.
If you know your fundamentals well, very often you will find the same common blocks everywhere. People much smarter than me have solved a lot of fundamental issues, and it's rare that I see a business request that doesn't reuse the same familiar stuff.
That's why coding is mostly boring. You follow the same pattern again and again. But what dictates the flows are the business parameters. And that's why most seniors spend so much time gathering good requirements. Because the code is straightforward after that.
The least experienced developers write the most code. Juniors will spend the whole day in the IDE: typing, testing, typing, etc.
Senior developers will go to a park for a few hours, think, then come back and spend an hour or less typing code that just works, or write nothing at all, maybe even delete code.
Instead they might update documents, or ask for clarification about edge cases they found or errors in the planning that were not considered.
Since software is in every industry of man, I think you'll need to mention which industry this perspective is coming from. This is definitely NOT the case in certain industries.
The perspective here is "lifetime career", so you need to project out 30 years here, for a meaningful argument.
I think, much sooner than that, you'll have AI pumping out practically complete implementations that meet the requirements of function, set by the people who desire that function. THOSE people will be the developers, and will be more akin to technical "creatives", more on the product side, than the developer side.
Someday people are going to get tired of "programming in English" with prompts, getting inaccurate output, etc and someone is going to invent a higher level kind of CODE that allows the user to directly specify the actions the computer should take to solve the problem. Later someone will invent a kind of tooling that COMPILES these CODES into a runnable thing skipping the prompt part all together. It might be called something like Unified Prompt Language.
Programming is moving to programming by stated and understood intent, rather than syntax. Maybe contracts/legalese, but definitely not compilable code. Sure, some people will compile code, and more than some will be reading generated code, but that will be increasingly exceptional.
In 2000 I learned about this old technology called "neural networks".
AI really depends on long winters and rare breakthroughs. Deep neural networks were the most recent breakthrough.
The iterations you currently see are just adding more storage; the fundamental neural network structure doesn't change.
I'm confident AGI will not be achieved by the LLM architecture, and when the next AI breakthrough will come is anyone's guess. But if you take history into account, it will take a while.
Yes, same. In the late 90s through early aughts I was taught over and over and over again that neural networks were a dead-end concept and would never amount to anything.
Just like all the preceding AI booms, this one will hit its maximal point, the hype train will fizzle, the best parts will just become "normal", and then a couple of decades later something new will come to push the boundary again.
We switched to 'software engineer' to encapsulate that, I think. You can receive requirements and churn out code or you can go up a level and think about the solution. Go another level up and think about the problem. Another level and it's the context of the problem. Further than that and it's the priority of it. And even higher up is how it fits in the product roadmap and the architectural decisions.
At some point you stop developing and start weighing up the requirement against your understanding of the system and the environment it works in.
There's an old Chemistry joke, that I've reapplied to Software Engineering, and it goes something like:
A New Engineer (NE) shows up on their first day on the job, notebook in hand ready to learn. They get assigned to shadow an Experienced Engineer (EE) for their first day.
EE: Now, the thing is, for any project on our team, you only need to change about 3 lines of code.
NE, preparing to write down notes: Which 3?
EE: Well, it depends.
(Originally about Material Safety Data Sheets, and there only being 3 relevant lines on them).
I think this is what people miss about Software Development.
LLMs also can “understand things and apply their ability to formulate solutions”. There is nothing that will inherently limit AI from doing all knowledge work (and all physical work once robotics is good enough).
Of course developers could just move up to the "next level of abstraction" and become managers of agents who write the code, but eventually AI becomes a better manager of agents than even the best humans, at which point there is no contribution a human can make that an AI model or system of models couldn't do better.
> There is nothing that will inherently limit AI from doing all knowledge work
Resources, for one: energy, water, cost. There seem to be diminishing returns in intelligence at the moment, while power and memory usage continue to go up.
> Natural selection will take care of them in due course.
While you are seemingly not at the moment, some day you might be on the receiving end of that "natural selection" in ways that seriously impact your remaining time on the planet.
In that case you might reconsider your stance, and especially question how natural a selection it is when a few powerful rich people deprive others of their way to earn a living and their way to draw meaning from their lives.
The AI revolution keeps getting compared to the industrial revolution, but people keep forgetting the consequences of that one.
I'm not terribly worried. The reason I am not worried is that software isn't my only marketable skillset. That is deliberate. Even though I see myself as primarily a software engineer, in the past decade I've worked in areas that tend to be viewed as wildly different strata and domains.
And if the apocalypse comes, I'm actually not that bad at a handful of skilled blue collar jobs.
The people who should be worried are the ones with narrow skill-sets and no capacity for dealing with rapid change. Especially if those skills are shallow too.
But I wasn't talking about people. I was talking about companies. And the reason I'm not worried about companies going under is that they have gone all the time since the start of the industrial revolution. Yes, it happens faster and more violently today than before but neither the churn nor the reasons are all that new. They just need to be understood so you can deal with change rationally and without panicking.
It is a good idea to read up on historic innovation/disruption cycles and realize that they are nothing new. The only reason people think this is a new problem is that 50-100 years ago they used to take about as much time as your productive career. So people wouldn't need to understand how to deal with it. And every generation would be convinced that this is some unexpected and unique upheaval that only their generation has to deal with.
My stance is the only one that works well during disruption: you make sure you have more legs to stand on and you don't waste time fretting over things you can't change. If you find yourself out of options, you can only blame yourself.
Usually that means you're already a senior developer, understanding things and formulating solutions is part of work delegation.
Now those juniors whose job is to implement those solutions, they will have a hard time.
In my 50s, I also don't write as much code as I used to, even less nowadays with serverless, managed services, low-code/no-code tools, and agent orchestration workflows, and with it I keep seeing development teams getting smaller.
Well said; the only flaw is the unfortunate realization that "I understand things and then apply my ability to formulate solutions" is rarely required. How many zombie corps are still roaming these days?
Judging by how many day to day tech products in my life are buggy, slow or user-hostile there can't be more than 50-100 tech companies actually innovating, right?
Weird. I call myself a developer because I don't have an engineering degree from an ABET-accredited engineering program.
I recognize, in some capacity, that this isn't the norm and in the US "professional engineer" is protected and not simply "engineer", but it feels akin to stolen valor to me.
If there were a license in the US for it, I’d agree with you. But as is, if you are “doing” engineering, you’re an engineer.
If you are a licensed engineer of some kind, you’d state that outright.
The equivalent of stolen valor would be claiming to be a licensed software engineer; except there is no such license so it would also be fraud, misrepresentation, etc.
> If there were a license in the US for it, I’d agree with you.
Yeah, that is basically the thing in my country. You can't call yourself an engineer without passing a test, but I can't take it because there isn't one for software engineering.
Same thing for freelancing. Freelance jobs are defined in a list, and other jobs cannot benefit from the simplified tax rules that freelancers enjoy, but that list was written before software development was a thing.
I'm a software dev in the US and I never call myself "engineer" in that capacity. Always "programmer" or "developer".
I agree. Engineers have to clear a much higher bar. Even though my career was spent in medical diagnostic software where we had to get 510k clearance, I was still keenly aware that this was a fundamentally different activity from actual engineering.
I'm an electrical engineer who moved to software engineering, and there are a lot of commonalities between what I do now and what I did previously as an electrical engineer. The bar might seem high, but that's the only way I know how to work, honestly.
On the other hand, with the modern division of labour in a lot of companies and with the rhetoric I see here in HN and in other places: a lot of developers are indeed not even close to being engineers.
> Natural selection will take care of them in due course
Wonderful articulation. There's a plethora of prognostication about how AI will change everything in software and beyond and the thing I keep thinking is, well, when will the talk stop and the demonstration of results commence. It doesn't seem to have as yet.
If it works, it'll work. The methods will spread and quickly be accessible to everyone, and progress will go on. That's great.
If it doesn't work, we'll also see that in the absence of real results. And simply stating you are seeing it doesn't qualify. It must be something we can all see and use that is unavoidably, undeniably real.
> I'm getting old and I value my remaining time on the planet.
It's an interesting sentiment. I, too, am getting old and value my remaining time on the planet, and so I code by hand every chance I get. :) Luckily I'm in a position to be able to do that.
You're a "developer", I guess, but not a coder (anymore), which is what your interlocutors are probably asking about. You've migrated to a middle-manager job, not something they could just start doing competently. Essentially you're agreeing with their initial sentiment: that coders will be made irrelevant.
And most of the time the statistical aspect of LLMs result in a less creative solution that is more expensive to run and harder to maintain. LLMs at this stage are good at scaffolding, generating the boilerplate you do not want to write and glue things together quickly. It just makes engineers faster.
It is indeed exciting (for you at least). The problem for most people is not that AI is spewing out code and reading documentation while developers do more interesting things. It is that companies are handing over the jobs of those developers to AI itself.
So those ex-developers are free to do the most interesting things in the world, minus the nice, steady paycheck every month.
I dunno, man. I've been doing this for 20+ years and I think we're at a really important fork in the road where there are two possibilities.
The first is that AI is achieving human-level expertise and capability, but since they're now being increasingly trained on their own output they are fighting an uphill battle against model collapse. In that case, perhaps AI is going to just sort of max out at "knowing everything" and maybe agentic coding is just another massive paradigm shift in a long line of technological paradigm shifts and the tooling has changed but total job market collapse is unlikely.
The other possibility is that we're going to continue to see escalating AI capability with regard to context, information retrieval, and most importantly "cognition" (whatever that means). Maybe we overcome the challenges of model collapse. Maybe we figure out better methodologies for training that don't end up just producing a chatbot version of Stack Overflow + Wikipedia + Reddit. Maybe we actually start seeing AI create and not just recreate.
If it's the latter, then I think engineers who think they are going to stay ahead of AI sound an awful lot like saddle makers who said "pffft, these new cars can only go 5 miles per hour."
I'll also add another factor: it's become increasingly clear at our company that AI-enabled humans are getting to the bottom of the backlog of feature ideas much quicker. This makes the 'good ideas' part of the business the rate limiting step. And those are definitely not increasing with AI, beyond that generated by the AI churn itself ("let's bolt on a chat experience or an MCP!")
So maybe the coding assistants don't get a 10x improvement any time soon, but we see engineering job market contraction because there aren't really enough good ideas to turn into code.
Yes, but as the price of getting work done goes down, a lot of companies that were priced out of custom software before now can hire devs, as the value hiring a few can provide just goes up. Fewer people per product, absolutely. No more teams of 10 or 20 working on the same thing. But there's so much out there that doesn't get done at all because you'd never be able to afford it.
Simple marginal thinking: When you lower the price of something, it gets more use cases. A rich person might not take even more flights because they are cheaper, but more people will consider flying when they wouldn't have at old prices
You are supposing that AI achieving human-level expertise and capability is a given. I am not so sure. Right now that's much further from the truth than one might think at first glance.
LLMs know nothing but are great at giving the illusion that they know stuff. (It's "mansplaining as a service"; it is easier to give confident answers every time, even if they are wrong, than to program actual knowledge.) Even your first case seems wildly optimistic. The second case is a lot of "maybes" and "we don't know how but we might figure it out" that seems like a lot to bet an entire farm on, much less an entire industry of farms.
We sure are looking at a shift in the job market, but I don't think it is a fork in the road so much as a Slow/Yield sign. Companies are signalling they are willing to take promises/hope as a reason to cut labor costs whether or not the results are real. I don't think anything about current AI can kill the software development industry, but I sure do think it can do a lot to make it more miserable, lower wages, and artificially reduce job demand. I don't think this has anything to do with the real capabilities of today's AI and everything to do with the perception being enough of an excuse, and companies were always looking for that excuse. (Just as ageism has always existed, AI is also just a fresh excuse for companies to carry on aging experience out of their staff, especially people with memories long enough, and well-schooled enough, to remember previous AI booms and busts.)
But also, yeah if some magic breakthrough makes this a real "buggy whip manufacturer moment" and not just an illusion of one, I don't mind being the engineer on that side of it. There's nothing wrong about lamenting the coming death of an industry that employs a lot of good people and tries to make good products. This is HN, you celebrate the failures, learn from them, and then you pivot or you try something new. If evidence tells me to pivot then I will pivot, I'm already debating trying something entirely new, but learning from the failures can also mean respecting "what went right?" and acknowledging how many people did a lot of good, hard work despite the outcome.
Embeddings are still mostly just vectors into n-dimensional K-means clusters. It isn't "knowing" two things are related and here's the evidence, it is guessing two things are statistically likely to be related, based on trained patterns, and running with it without evidence.
It has no "semantic understanding" as we would define it. It's just increasingly good at winning cluster lotteries because we've increased the amount of training data to incredible heights.
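Concretely, the "relatedness" in question is little more than vector geometry. A minimal sketch with made-up 3-dimensional vectors (real embeddings have hundreds or thousands of dimensions, and their values come from training, not hand-tuning):

    import math

    def cosine_similarity(a: list[float], b: list[float]) -> float:
        # "Related" just means the vectors point in similar directions:
        # near 1.0 is similar, near 0.0 is unrelated.
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(y * y for y in b))
        return dot / (norm_a * norm_b)

    # Toy embeddings, hand-picked purely for illustration.
    cat = [0.9, 0.8, 0.1]
    dog = [0.8, 0.9, 0.2]
    car = [0.1, 0.2, 0.9]

    print(cosine_similarity(cat, dog))  # high: statistically "related"
    print(cosine_similarity(cat, car))  # low: statistically "unrelated"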
Can you explain how you "know" two things are related? If I ask you the similarities between a cat and a dog, is your answer based solely on an understanding of their genetic phylogeny and how those genes express traits?
Grouping vectors in concept space is exactly how you create semantic understanding. The proof is in how good they are at creating semantically valid text. The fact that it took massive amounts of data is irrelevant. That just shows how much knowledge is encoded in all our language. It takes humans a ton of training to know things too.
We don't know that. It seems like great hubris to declare we know how the human brain works. You are asking me to explain how we know things and then telling me we've already figured it out in the same breath, and that's hilarious.
It doesn't take massive amounts of language data to train a baby human. It is almost entirely just: "Look. Here's a cat. Can you say cat? Cats go meow." "Over here, your aunt has a dog. Dogs go woof."
There's generally a flood of non-lingual contextual data in such moments such as sights, smells, sounds, movements, touch but that also only further underscores how different LLM training is from anything we'd consider human learning. Our memories aren't just "conceptual spaces of linguistic topics", they are complex sensory maps where a smell can remind you of the first dog you ever met. There is so much of our human knowledge that is not and never been encoded in most of our languages.
The fact that LLMs take massive amounts of linguistic data is relevant, because it shows how far we still have to go in barely scratching the surface of how the human brain seems to work. (Which again, we know only the barest details. Anyone who tells you they know 100% of how the human brain operates so far tends to be a snake oil salesman.)
We do mostly know how the brain works at this level of detail, and it is akin to Principal Component Analysis. There are only so many ways it could work, unless you believe in dualism. My question was rhetorical. All you've described with the other stuff is a "multi-modal" model (and ignoring all of the "biological pre-training" that took place through millennia of evolution). The interesting (and perhaps surprising to some people) thing is how well pure text training can compensate for the lack of other senses.
Do you think the latter can be achieved with the LLM neural network architecture? I highly doubt it. Neural networks are very old tech, and it took us this long to get here.
I'm sure we'll reach AGI at some point, but looking at AI history, I don't see that coming any time soon.
I had a CS professor at university (one of the greatest I had) who used to say, back in 2008: "a developer should write no more than 5 lines of code a day"
The problem is people think AI can replace the 95-98% that isn't code too. That's where we end up with massive unusable codebases that no one understands.
> Business owners who think they can do without developers because they think LLMs replace developers are fine by me too. Natural selection will take care of them in due course.
Thing is, natural selection will take care of you at the same time, because you'll also come to rely on the products they make or the services they offer, directly or indirectly. So eventually you, too, will suffer the consequences of the enshloppification.
That doesn't hold, because the goal for executives is to increase revenue, and the main sales pitch of Anthropic et al. is to pay for agents instead of paying for engineers. That means 80% of the workforce is out no matter what. Whether one belongs to the remaining 20% is a different story, but obviously not all of us will be there.
> I understand things and then apply my ability to formulate solutions
Does anyone else read posts like this and picture someone who doesn't actually do anything all day besides posture in meetings? Probably with a super inflated title and salary.
I doubt this is what the OP does, but there are tons of developers like the one I described, and they seem actively proud of not building anything and playing politics all day.
You miss the major factor in your compensation: pricing pressure due to supply/demand.
By removing all the junior engineers, you fundamentally change the market forces in the longer term, and most people expect that to hurt you on the supply/demand curve regardless of whether the statements you've made above are true, which they most likely are for senior engineers.
In removing junior developers, leaving only senior developers, wouldn't that reduce supply, making the price go up, not down? It's been a while since Econ 101 for me though.
This is exactly it. The speed of light has not changed: we're limited by our ability to understand the system, and make decisions about what to do next. AI will speed that up, but the core work is the understanding and decision-making.
Saying otherwise is sort of like reducing the task of writing a novel to typing.
Something missed in this: computer science was a highly theory-driven discipline where people were taught how to think critically about solving complex problems. Industry complained that schools weren't teaching enough programming skills, so they dumbed down the thinking part and emphasized the vocational part. Now the vocational part is virtually useless, and the grounding of theory applied to complex problems is suddenly really relevant again. Schools will take time to retool their programs and teaching staff, and two, if not three, generations of graduates will have entered a work environment that doesn't need what they learned.
As someone 35 years into my career, I agree this is the most exciting part of it. I love programming and I do it all the time, but I do it by reading code, course-correcting, explaining how to think about the problems, and herding cats, just like working with a team of 100 engineers. But the engineers I'm working with now by and large listen, don't snipe me on perf reviews, and aren't hallucinating intent based on hallway conversations with someone else. This team of AI engineers can explain their work, mistakes, and drift without ego, and if it's not always 100% correct, it's at least not maliciously wrong. It understands me no matter how complex the domain I reach into; in fact, it understands the domain better than I do. So instead of spending a few months convincing people with little knowledge or experience that X is a good idea, I can actually discuss X, explore whether it's a good idea or not, and make a better-informed decision. I've learned more in these discussions than in decades of convincing overly egoistic juniors and managers to listen to me about something I'm an industry authority on.
However, I see very clearly that we will need very few of the team of 100 human engineers I can leave behind in my work. Some of us will still be there in a decade, but maybe fewer than 1 in 10. This is going to be a more brutal time than the dotcom bust for CS grads, and I don't think it will ever improve, mostly because we simply won't need the "my parents told me this makes money" people; just the passionate folks will remain. But even then, we face a situation where the value of any given piece of software is very low, because so much software is being developed. It's going to turn into YouTube, where the software that actually gets paid for is a tiny fraction of the software produced. We already see this in the last few months in the rate of new GitHub projects. If the value of any software created is low, the compensation of the creator will be low too, unless they're a very rare talent.
This is kind of country-specific. In many European countries, the years and content required for a degree vary depending on whether the institution focuses on vocational or more general higher education.
This is a valid perspective, but I don't think a useful one.
Being able to produce code is a huge unlock for many non-programmers. So in a way, it doesn't matter how much time existing developers spend on coding. It's about helping anyone become a developer.
Yeah, coding speed was almost never the bottleneck, I found. AI now does the typing and some of the thinking. It doesn't figure out what needs to be built and how it all plays together (yet).
Saying being a programmer is about writing code is a bit like saying being an artist is about drawing lines on a canvas.
Yeah, technically drawing lines on canvases may be a very important part of being a painter, but it is hardly the core of what makes or breaks great art.
I spent the second half of my 30-year career fixing organizations and processes where this was the case. So many things are wrong in places where it holds (or, alternatively, you need a different job title :))
What you described are senior developers and system architects.
Junior developers spend most of their time writing code (when they're not forced to attend pointless standups, because Agile/blah/blah)
> The developers who still think their job is about writing code will perhaps not have a job in the future.
So you're saying the same thing everyone else is saying. SWEs won't go away, but they will be greatly reduced, because those whose job is about writing code -- junior devs -- will be replaced.
(How will Sr Devs in the future be created? That's the question, isn't it.)
As an extreme example, maybe we'll see long internships and training programs like doctors go through. Doctors don't start their careers until ~12+ years of prep and training.
Pragmatically, software development has a lot of examples of teenagers making apps and college students building software companies. In the 12 years that kind of training would take, low-knowledge workers could be continuously vibe-coding replacements for most of the commercial software products they'd be hired to build. So I doubt we'll treat software development as a rarefied high-skill job.
This is a bit of a strawman. When people say "writing code", they don't necessarily mean [pressing the keys on the keyboard that produces the necessary bytes in a text file].
Note that just because you know the job is understanding things, the manager who'll boot you and leave you without income probably doesn't. They'll just get their political cookie points for saving money by replacing you with AI.
>- I understand things and then apply my ability to formulate solutions
- Well, and AI can do part of that too, maybe more of it soon.
- ...
- Besides, you don't need 10 guys in a team to do that. A couple of them will do, then AI will do the coding. What will happen to the rest?
- ...
> Multiple times per week I have the same conversation.
Really? I mean, good on you if it's true and you like the attention, but that sounds like an implausible amount of interest in someone and their relatively mundane profession.
In my community almost all problems are political. "Problem solving ability" matters if you are in HFT, but everything else? Math can't tell you the best way to use land, educate a kid, decide what to pay for healthcare and how, prioritize biotech research, set a minimum wage, or draw congressional maps: all sorts of stuff that I actually pay for or care a lot about. In fact, I think you are totally misinterpreting what people are saying to you; you are 200% wrong. The 2-3% of your time spent coding was the valuable part, and your so-called problem-solving ability rarely solved any real problems.
I think the future is pretty up in the air in this respect, but my guess is that AI will just lead to another shift in the set of knowledge that a 'real programmer' is expected to have. I'm old enough to remember when people would make fun of web developers for 'programming' using HTML and JavaScript. And of course, back in the day, you couldn't be a real programmer unless you wrote assembly language. In a few years' time, being able to write (as opposed to read) source code in any specific programming language will probably become a niche skill. The next generation will be able to read Python to about the same extent that I can read x86 assembly.
Perceptions of what knowledge counts as 'low level' are constantly shifting. These days, if you write C, you're a low-level, close to the metal programmer. In the 70s, a lot of people made fun of Unix for being implemented in a high-level programming language (i.e. C) rather than assembly.
Kind of ironic, given that operating systems implemented in high-level programming languages trace back to 1958, with JOVIAL being one of the first systems programming languages.
Pure wage workers should consider dropping the attitude that tech progress will just put their supposed inferiors in the same line of work out of a job (hrmph, good riddance, etc.), because this pseudo-progress could creep up on them as well.
Then you won't have this just world of deserving workers at all: just formerly deserving workers and idiot billionaires like Musk (while the robots do all of the work).
This is an example of survivorship bias dressed up as general advice, one that doesn't consider the entire ecosystem. We need look no further than what's happened with writing in Hollywood.
The general progression of a Hollywood writing career is from PA (production assistant), which often starts as a volunteer "intern" position, to writer's assistant. Assistant here usually means doing any menial task anyone wants, from fetching dry cleaning to taking a dog to a grooming appointment. As a writer's assistant, you will often spend time in a writer's room. You will see how the process works. You probably won't contribute anything, but you may get feedback on things you've written from whomever you're working for.
The next step is staff writer. You will be paid to produce scripts and stories for, say, a TV show. That writer's room will have a head writer; on a TV show, the head writer is almost always the showrunner. The showrunner is effectively the leader of the entire project and is responsible for breaking a season up into storylines and making sure the scripts make sense as a collective. They might write one or more of those scripts themselves, or maybe not. The showrunner will hire directors for each episode.
The path from staff writer to showrunner often goes through being a producer. Producers are responsible for a lot of the logistics of filming a show: hiring extras, finding locations, coordinating stunts and costumes, and making sure the director has everything they need.
As part of all this, in the 22-episode TV era, writers would often end up spending time on set while the show was being filmed. They'd learn from the process.
Every part of this was necessary. Those writers on set are your future producers and showrunners.
So what's happened in the streaming era is that writer's rooms got smaller (so-called "mini writer's rooms"), maybe only the showrunner is ever on set, the writers have stopped working by the time filming even begins, and a season might only run 8-12 episodes. A 22-episode season was one job that could support you; 8-12 episodes can't.
But you see how this all breaks down: writers can no longer support themselves, they're no longer being trained to be future producers and showrunners, there's no feedback from the set back to the writer's room, and you end up with 3-year gaps between seasons. The only reason for all of this is that it's cheaper.
So, you may be a staff engineer who tech-leads dozens of other engineers. You're not formally a manager or director, but you have a lot of influence over the entire project. But how did you get there? You started as a junior engineer being told what to do. You got to see how other leaders operated. You became responsible for more and more: you might start by fixing bugs under supervision, move on to managing a feature, then an entire project, and so on.
So what's going to happen here is (IMHO) that we will have years of the software engineering space shrinking. There'll be very little entry-level hiring. Layoffs will reduce the entire workforce, and a few tech leaders will hang on because they still produce value. Some of them will probably discover they don't produce enough value, and they'll go too.
But where do the future tech leaders come from in this scenario? AI is being used as an excuse to kill the entry-level pipeline, and if you go around saying "git gud" [sic], then I'm sorry, but you just don't understand the impact of what's happening, or you don't care because, at least for now, you're simply not affected.
You see the same thing with people who espouse the myth of meritocracy. If a given workforce shrinks by 50%, half those people are, by definition, not going to survive. An individual may be able to reskill or skill up to survive, but not everyone can. And that's how people end up in Amazon warehouses. At least until they're no longer needed there either.
If the industry is to shrink, this is the best way it can: stop people entering while they are young and can pivot into something with better returns, and keep the experienced people, who are older, may find it harder to pivot, and had some "good days" to help ride them through these bad times. I've seen similar dynamics in other industries as they slowly die or move on (e.g. manufacturing, niche trades). A slow decline is better than a boom/bust. If it turns out we need software engineers later, training new ones is an easier problem than mid-career death would be for the juniors of a few years from now.
Eventually the market finds a new equilibrium of staff to demand ratio. You prefer that happen sooner so people don't make bad investments of their time (e.g. studying the wrong course based on inaccurate market signals).
I normally say that I have zero concerns regarding AI in terms of employment. At most, I am concerned with learning best practices for AI usage to stay on top of things.
Its ability to write code is alright. Sometimes it impresses me, sometimes it leaves me underwhelmed. It certainly can't be left to do things autonomously if you are responsible for its output.
A moderately useful tool, but hellishly expensive when not being subsidized by imbeciles who dream of it undermining labor. A fool and his money should be separated anyway.
What I am really concerned about is the economic disaster being brewed. I suspect things will get very ugly pretty soon.
This is why I have started planning to transition away from Gmail for all the domains I manage. Gmail doesn't actually get any better as a product, just more annoying as they try to upsell me on crap I don't want or need. It gets a bit more shitty every year.
The sheer size of Gmail means I have zero chance of support even though I pay for the service. The risk is too great to be acceptable.
Why are we, in 2026, still talking about IPv6? It is time to give it up and start over. Yes, it is unlikely we can agree on an IPv4 successor. But at this point we should be able to agree that IPv6 is not going to be it.
What do you think should be done instead? If IPv4 but with longer addresses (which is what IPv6 is) is not to your satisfaction, what would be? Do you want to completely overhaul the internet with some "IPv8", and do you think IPv8 wouldn't have the exact same problems, plus thousands more, just because it's never been deployed and nobody's encountered them yet?
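To make the "longer addresses" point concrete, here's a tiny sketch using Go's standard net/netip package, with the usual documentation example addresses: same parse-and-route model, just 128 bits instead of 32.

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Both address families go through the same parsing machinery;
	// the only structural difference is the bit length.
	v4 := netip.MustParseAddr("192.0.2.1")   // IPv4 documentation address
	v6 := netip.MustParseAddr("2001:db8::1") // IPv6 documentation address
	fmt.Println(v4, v4.BitLen()) // 192.0.2.1 32
	fmt.Println(v6, v6.BitLen()) // 2001:db8::1 128
}
```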
IPv6 is over 30 years old. The world has not embraced it. It isn’t going to. It is time to call it and try to figure out something that will work.
You do understand that one need not have a better idea to make that observation? The observation is self-evident and can exist in the world regardless of what we think about IPv6 or the feasibility of figuring out something people will want to use.
Getting defensive and playing "then you come up with something better" games is not getting us anywhere. It is just part of the problem. It is unproductive.
That would indeed be a disaster. There is a lot of IPv6 usage, and a lot of support in operating systems and hardware that had to exist first. Starting again with something else would introduce a third unsuccessful attempt, with... what benefits?
So now please get to step 2 of the argument: why do you think a world that did not embrace IPv6 (a false premise by the way, as 50% of internet traffic is IPv6) would embrace IPvBBORUD?
Again, you are not getting the argument. IPv6 is dead. It is better to give up and see what else can be done. Stop and think about what I'm saying before you react.
Are you suggesting we spend another 30 years beating a dead horse?
Java bent a lot to accommodate change. There are lambdas and records, which violate the old OOP rules, and now virtual threads, which were sorely missing for things like web backends.
I have been keeping an eye on the outages. This is why I am looking more deeply into what I can do with self-hosted models. When I see people who want to build products on top of these services, I can't help thinking they are mad. We're still a long way from these services being anywhere near stable enough for use in a product you'd want to sell someone.
Did they ever work? No, seriously. I've had a couple of them, and the few times I really could have used them I discovered that they represented the worst backup solution I've ever had the misfortune to deal with. Slow, very hard to use beyond their primary OS integration (which isn't good to begin with), no good way to keep an eye on how they're doing (what's actually backed up, whether it's still there), and performance worse than any hand-rolled solution I've ever used.
They never supported it properly in the first place and then it just meh'ed out of existence.
I hope "the new Apple" is going to take software seriously.
35 years in the tech industry has taught me one thing: incumbents that have been around for a long time are almost always more clueless and more full of shit than you think. What they do isn't as hard as they claim, and you can probably do better in a fraction of the time they spent, simply because you don't have legacy systems to worry about and because technology and tooling have moved on.
Incumbents thrive on the myths about what they do being hard and impossible to replicate.
Yes, it is a lot of work to replace what you can get off the shelf today. But it isn't like the basic tech itself is all that hard to replicate, step by step, if you accept that it takes time and that the first N development stages will give you something less feature-rich and polished. And if you make it open source, interoperability becomes easier to address.
Perhaps some of the analysis tools and services you can buy today will be hard to replicate, but I doubt it. And slightly suboptimal results for a couple of seasons beat being on the receiving end of a hostage situation.
But yes, it is certainly a huge effort to get what you actually need.
The Pareto principle applies. For highly complex systems it's easy to build most of what the incumbents have. It's the last 20% where it is hard to catch up, just because the incumbents have a decades-long head start and the momentum. And even more so here, because it's not just software: it's very science- and hardware-heavy.
For farming, it's even tougher, because the market has a really uneven distribution. Usually the best place to tackle huge incumbents is the midmarket: customers big enough to need your automation, but small enough to take a risk to save some money, and for whom the features you haven't built yet aren't blockers.
But there's basically no midmarket in farming: all farms are pretty much either really big or really small.
I would be surprised if this doesn't exist already in some nascent form?
This is an area where you would probably need an entire ecosystem of systems: onboard systems for tractors, but also for the various implements you hook up to them to monitor sowing, fertilizing, spraying, etc. Plus backend systems that you can either self-host or subscribe to from some service that doesn't have awful terms.
It shouldn't take an immense amount of capital to make real progress towards something that can make a difference.
Now that everyone seems to be discovering Hetzner I guess the countdown clock for enshittification has started ticking, so we have to start looking for the next place to escape.