If by software engineering one means typing code character by character into a text editor, then sure, it's going to be difficult to find someone to pay you for it.
If you mean creating software, well we are creating more software than ever before and the definition of what software is has never been so diverse. I can see many different careers branching off from here.
We are experiencing what Civil Engineers experienced going from slide rules to calculators. Or electrical engineers going from manual circuit path drawing to CAD tools.
The interesting thing to me is that software engineering will have to evolve. Processes and tools will have to evolve, as they have evolved over the years.
When I was finishing university in 2004, we learned about the "software crisis" era, the waterfall development process, and how new "iterative methods" were starting.
We learned about how spaghetti code gave way to Pascal/C structured programming, which gave way to OOP.
Engineering methods also evolved, with UML being one infamous example, but also formal methods such as the Z language for formal verification, and software complexity metrics such as ABC and cyclomatic complexity.
Which brings me to today: now that computers are writing MOST of the code, the value of current languages and software dev processes is decreasing. Programming languages are made for people (otherwise we would still be writing assembler). So now we have to change the abstractions we use to communicate our intent to computers, and to verify that the final instructions do what we wanted.
I'm very interested to see these new abstractions. I even believe that, given that all the small details of coding will be fully automated, MAYBE we will finally see more engineering rigor (real engineering) in the software engineering profession, even if there will still be coders, the same way there are non-engineers building and modifying houses (common in Mexico, at least).
Calculators and CAD tools do not give you non-deterministic answers. Both simply automate part of the manual work without creating anything "new". I haven't used CAD tools, but I did use some level editors such as Trenchbroom -- I think what gets automated is the 3D shapes you want to make. Back in '96, when id Software was creating Quake, there were very few pre-drawn shapes in the level editor and they had to build the blocks themselves, so it was very difficult and time-consuming to make complex shapes such as curved walls and tunnels. Then better tools were invented, and now it is much easier to create a complex shape. But you don't type "a Quake level with theme A, and blah blah" and then get a more or less working level -- that is what AI is doing right now.
I think the right analogy to calculators and CAD tools is an IDE with IntelliSense for SWE -- instead of typing code character by character, we can hit Tab to automate part of it.
But I agree with your conclusion -- SWE is changing, whether we like it or not. We need to adapt, or find a niche and grit it out until retirement.
It doesn't make sense to get hung up on this aspect of LLMs. We prefer non-deterministic sampling so far because it tends to work slightly better, even though it is completely possible to ask for a deterministic answer at temperature=0.
With more scale and research, at some point you'll get results that are both useful and deterministic, if it's not already the case.
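To make the temperature point concrete, here is a minimal Go sketch of the idea (my own illustration, not any vendor's API; the logits are made up): at temperature 0 you skip sampling entirely and take the argmax, so repeated runs give the same answer, while any positive temperature samples from a softmax and can vary run to run.

    // Toy illustration: pick a token index from "logits".
    // temperature == 0 -> argmax, fully deterministic.
    // temperature > 0  -> sample from a softmax, output can vary.
    package main

    import (
        "fmt"
        "math"
        "math/rand"
    )

    func pick(logits []float64, temperature float64, rng *rand.Rand) int {
        if temperature == 0 {
            best := 0
            for i, v := range logits {
                if v > logits[best] {
                    best = i
                }
            }
            return best // greedy decoding: same result every run
        }
        weights := make([]float64, len(logits))
        var sum float64
        for i, v := range logits {
            weights[i] = math.Exp(v / temperature) // softmax numerator
            sum += weights[i]
        }
        r := rng.Float64() * sum
        for i, w := range weights {
            r -= w
            if r <= 0 {
                return i
            }
        }
        return len(logits) - 1
    }

    func main() {
        rng := rand.New(rand.NewSource(42))
        logits := []float64{2.0, 1.0, 0.5}  // made-up scores for three tokens
        fmt.Println(pick(logits, 0, rng))   // always index 0
        fmt.Println(pick(logits, 1.0, rng)) // depends on the random draw
    }

In practice vendors expose this as a temperature parameter; whether temperature=0 is exactly reproducible end to end also depends on their serving stack, so treat this as the conceptual picture rather than a guarantee.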
It absolutely makes sense to get hung up on something when it comes to planning society around it, JFC. I'm with the other commenter: your understanding of these tools should be called into question, since you seem to be reading the tea leaves of statistical noise.
In 2020, there are two companies that compete with each other. Each employs 100 programmers, and we all know how those organizations operate: perpetually behind, each feature added generating yet more possible future features. We've all lived it and are still largely living it today.
In 2026, both companies decide that AI can accelerate their developers by a factor of 10x. I'm not asserting that's reality, it's just a nice round number.
Company 1 fires 90 of their programmers and does the same work with 10.
Company 2 keeps all their programmers and does ten times the work they used to do, and maybe ends up hiring more.
Who wins in the market?
Of course the answer is "it depends", because it always is, but I would say the winning space for Company 1 is substantially smaller than for Company 2. They need a very precise combination of market circumstances: one not so precise that it doesn't exist, but it's a risky bet that you're in one of the exceptions.
While the acceleration is occurring and we haven't settled into the new reality yet, the Company 1 answer seems superficially appealing to the bean counters, but it only takes one defector in a given market going with Company 2's approach to force the rest of the industry to follow suit in order to compete properly.
The value generated by one programmer that can possibly be captured in that programmer's salary is probably not going down in the medium or long term either.
Your hypothetical ignores the distribution of programmer talent. Company 1 can pay more per person and hire 10x programmers, who can then leverage AI to produce the same as or more than Company 2.
We have seen this in other knowledge industries. U.S. legal sector job count is about the same today as it was 20 years ago. But billing rates have exploded and revenues in the 200 largest firms have increased more than 50% after adjusting for inflation. Higher-end law firms have leveraged technology to be able to service much more of the demand and push out smaller regional competitors.
I think paying significantly more was a very localized thing that happened for AI researchers who were familiar with the alchemy that made GPT4 suddenly work much better than anything else seen before.
Of course it does. It ignores a lot of things. Mostly I just want to present the view that things aren't entirely hopeless and that the entire industry isn't doomed to contract by 90% because of AI. Your legal sector point also fits precisely with what I'm trying to convey, just in a different direction.
The problem with this idea of feature backlog, at least everywhere I've worked, is what you really have is an idea backlog. You have some things people want to build, and maybe a business analyst or product owner has done a first pass. But they're far from the rubber-meets-the-road part where someone needs to write out a detailed spec based on the exact current state of the app/system, and the developers start asking questions as they run into unspecified edge cases--all of which usually means sitting down with the client again over a series of meetings.
AI coding agents help speed up none of that. Meanwhile the developers are either sitting in meetings or working on something else while the product owners hash it out with the client.
And sometimes, after all that, you realize the client can get 95% of what they're asking for if you just tweak some existing feature. Everyone's mostly happy, and the app stays less complex.
My concern would be whether creating that software pays enough to keep up with skyrocketing costs of living. In the past, the jobs created by automation have generally been lower paid with less autonomy.
This problem is not a software engineering problem nor an AI problem but a problem of the balance of power between working hard vs. investing. If the people who believe in working hard organize and slow down the tendency to rig everything for investors, then the markets should stabilize at a more generally prosperous place.
The balance of power is dictated by economic facts, not by organizing or politics. Auto workers in 1950 weren't better organized than auto workers in 2026. They just had more leverage because they weren't competing with auto workers in China. Likewise, Silicon Valley isn't paying people writing web apps $$$ because those workers are organized. They are doing it because they don't have a feasible alternative. If AI enables them to do more with less, they'll take that option.
Are you saying that auto labor unions can't go into other places and organize the people there for higher salaries and less wretched conditions? Or that organizing doesn't increase the power of the hard working people? Economic facts are partially determined by human factors such as the supply curve for labor and information flow amongst sellers or buyers for various goods.
Ignoring the preference of people generally wanting to live in HCOL areas, this only works if every company hires equally from LCOL areas. One of the benefits of living in a HCOL area is access to the job market it provides. It's much easier to get hired for a software position living in San Francisco than it is living in Deming, New Mexico.
More importantly, in San Francisco there are a lot more opportunities than in Deming. I've never been to either city (I'm not going to count the conference I was at, because I never left the hotel). However, I can still tell you confidently that if you have a weird hobby, in San Francisco you're much more likely to find other people with that interest, stores that sell the things you need for the hobby, and all those other things in life that you want. If you love doing the types of things people in Deming do, well, it's a great life, I'm sure. However, as soon as you want to do something off the wall, you may not even find enough people in Deming to field a cricket team, while I have no doubt that San Francisco has a team that you could join.
But moving to a lower-COL area can reduce the amount of public and private services one has access to, no? Network connectivity, for example, will likely be worse out in the sticks.
Creating more software does not solve anything if that software is mostly a functional duplicate of other software. Or, in other words, all companies re-invent the wheel many times over. It doesn't matter if you 10x the development of software that brings nothing new besides being written in a shiny new framework.
We should, IMHO, start getting rid of most software. Go back to basics: what do you need, make that better, make it complete. Finish a piece of software for once.
In a world where software programming/architecting is solved by AI, value will accrue to people with expertise in other domains (who have now been granted the power of 1000 expert developers), not the people whose skillsets have been made redundant by better, faster and cheaper AI tools.
It could go either way. Don't forget that LLMs also have expertise in the other domains. Who would do better - the chemist with vibe coded app or the developer with vibe coded chemistry?
My premise is that a vibe-coded app will be indistinguishable from a ‘hand-crafted’ one. So in that scenario the chemist wins, because the developer has no value to add.
It is clear to me that SWE and ML research will be subsumed before other domains because labs are focusing their efforts there, in their quest to build self-improving systems.
There will be more software in the same way there is more agricultural output today.
The idea that productivity gains which result in more of something being produced also create more demand for labour to produce that thing is more often wrong than right, as far as I can tell. In fact, it's quite hard to point to any historical examples of this happening. In general, labour demand significantly decreases when productivity significantly increases, and typically people need to retrain.
Except we are now in the golden age where people with 20 or 30 years of experience know what quality software is - or at least what it is not. So they are able to steer the LLMs. Once this knowledge is gone - the quality could go downhill.
I think so. Macs can run software written for Android, iOS, Mac, Windows, and Linux, while everything else is incapable of running the iOS and Mac stuff. Virtualizing macOS from Linux or Windows sucks for arbitrary reasons, and both macOS and iOS are missing a compatibility shim like WINE.
All this sounds great in theory, but Mac does not have a particularly stable ABI and it's fairly common for closed source software from 5+ years ago to just not run.
Yes, the company that explicitly closes its ecosystem can also run the more open ecosystems legally, and those open ecosystems can't legally run the closed one.
That's a knock against Apple, not a bragging point.
I've come to believe Apple's way is probably the only sustainable way.
I'm a Windows user who also develops for Windows desktop and it's kind of sad that even though Windows has a way larger desktop share, there just isn't much going on compared to MacOS. Every week I read about some cool new or updated MacOS application and I can't remember the last time I read something similar about a Windows application (other than games).
The only reason I can think of is that MacOS developers are more motivated at least in part by having users that are willing to support developers by paying for software.
In turn, users and developers willing to pay for computing motivates and enables Apple to make better hardware. They don't always get it right, but I think they are doing a better job than most companies. It's also the reason why I think Apple's recent push for services revenue is so dangerous. The incentives aren't as aligned with users.
Maybe next year Framework or System76 will come out with their answer to Apple's M-series chips and I'll have to re-evaluate, but right now it feels like it's Apple against everybody else and everybody else is racing to the bottom.
Whether you consider it magic is up to you, but, unlike a destructor in RAII, there is nothing automatic going on. If you don't explicitly invoke a destructor, you won't get a destructor.
The fact that you can explicitly schedule the destructor to run later is simply syntactic sugar, just like if/else/while or any other control construct more powerful than a conditional jump instruction.
And more importantly, you can choose which destructor to call. This is perhaps what's most underrated about defer: it can select among many possible destructors, at multiple different levels (group free with arenas, individual free, etc.).
Or even whether you need a destructor, or something simpler, like nulling out a pointer or two to break a reference loop.
defer is a perfectly general structured flow concept; it only cares about when you do something, and is completely orthogonal to what you need to accomplish.
> When you explicitly invoke a "destructor", you do it on many code paths (and miss one or two)
Unless, of course, you do it inside a defer block.
> You don't specify where the `defer`-red "destructor" will be invoked.
Yes, actually, you do. It is patently obvious, by code inspection, where the destructor, or anything else specified in a deferred block, will be invoked. defer is a perfectly cromulent part of structured control flow, allowing for easy reasoning about when things occur without having to calculate an insane number of permutations of conditional branch instructions.
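To make that concrete, here is a minimal Go sketch (the file-handling scenario is just a stand-in I made up): you choose at the defer site which cleanup runs, a single defer covers every return path below it, and when it runs -- at the end of the enclosing function -- is obvious by inspection.

    package main

    import (
        "fmt"
        "os"
    )

    func processFile(path string, verbose bool) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        // Choose the "destructor" explicitly; only the branch taken registers one.
        if verbose {
            defer func() {
                fmt.Println("closing", path)
                f.Close()
            }()
        } else {
            defer f.Close()
        }

        buf := make([]byte, 16)
        if _, err := f.Read(buf); err != nil {
            return err // the deferred cleanup still runs on this early return
        }
        // ...any number of further early returns are covered by the same defer...
        return nil
    }

    func main() {
        if err := processFile("example.txt", true); err != nil {
            fmt.Println("error:", err)
        }
    }

The same shape works with an arena: defer the group free at the top of the function, or defer individual frees, whichever fits; defer only fixes when the cleanup happens, not what it is.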
But at least the experimental results disproving these incremental fixes should be exactly the kind of thing the next Einstein will need to come up with an entirely new way of looking at things.
> Some once-in-a-generation scientist has an intuition that turns out to be true and mathematically elegant.
That’s a bit simplistic. There was a lot of research activity around the aether in the late 19th century that was ultimately useful for the foundation of special relativity. Like the Michelson-Morley experiment, which was supposed to measure aether winds, but showed it did not actually seem to exist. Lorentz developed his transform as a theory of the aether, but it became a cornerstone of special relativity. Einstein based his ideas on a lot of things that were done a couple of decades before, some of which were supposed to be part of a theory of the aether.
There was actually quite a lot of activity and vigorous debate between aether and something else, unknown at the time, that turned out to be relativity. Einstein did not just show up and invent everything. There is a very quick overview of this here: https://en.wikipedia.org/wiki/History_of_special_relativity
If you revisit what I wrote, you'll find that I agree with you.
There was vigorous debate between a wrong thing and an unknown thing (both basically wrong), and it took Einstein's intuition and the new GR math to turn it into science.
I was drawing a parallel between this and the current MOND, string theory, and dark matter debate. More specifically, I'd even say dark anything is our generation's aether!
Dude, if you genuinely want to know what happened, you should read some proper history of science. Here, take this: https://arxiv.org/pdf/physics/0405066
It shows both that Einstein very much didn't create the theory alone (he was inspired to take important technical steps by the work of others who built a theory based on his principles before him), and that he actually first created, in intense collaboration, a failed theory that got Mercury's anomaly all sorts of wrong.
you think the deepest mysteries of reality and the universe should just reveal themselves because we have a couple thousand smart people working on it for... 10 years?
Regulation won't magically save low margin businesses.
Nationalizing might, but then you make it difficult for others to compete. And of course, there's plenty of precedent of nationalized airlines failing catastrophically and having to be sold off to private or foreign entities to keep functioning.