You don’t need to write code by hand to learn from iterations and experiments. I run more experiments and try out more different solutions than I ever could before, and that leads to better decisions. I still read all the code that gets shipped, and don’t want to give that up, but the idea that all craft and learning is lost when you don’t is a bit silly. The craft/learning just moves.
Imo the biggest issue with these no-code architects has been that you could become one without ever having coded at any noteworthy level of skill (which meant most of them never had).
In my experience, in a lot of organizations, a lot of people lacked either the ability or the willingness to achieve any level of technical competence.
Many of these people played the management game, and even if they started out as devs (very mediocre ones at best), they quickly transitioned out from the trenches and started producing vague technical guidance that usually did nothing to address the problems at hand, but could be endlessly recycled to any scenario.
The mistake you are making is comparing using AI to skimming textbooks or taking shortcuts. Your entire premise is wrong.
People who care about craft will care about the quality of what they produce whether they use AI or not.
The code I ship now is better tested and better thought through than before I used AI, because I can do a lot more. That extra time goes into additional experiments, jumping down more rabbit holes, and trying out ideas I previously couldn’t due to time constraints. It’s freeing to be able to spend more time improving quality because the ROI on time spent experimenting has gone up dramatically.
a) I cannot effectively review more than 2000 lines of code a day. The LLMs can produce much more than that.
b) Even if I accepted my reading throughput limitations as the cost of being in the loop, reading is not enough to keep cognitive debt in check: my skills will atrophy if I do not participate in the writing ("What I cannot create I cannot understand").
So, to me, it seems like we, humans, either have to come up with higher (and deterministic) abstractions than code to communicate with LLMs or resign ourselves to letting the LLM guess what we want from English and then banging on the output to see if it sort of works. This latter state of affairs seems to be what the current trend is, and I find that absolutely revolting.
I think the distinction is that for experiments and prototypes the behaviour of the final system is what we are trying to design. We can experiment and see the tradeoffs and explore the design space before committing to a direction. And then we can sit down and produce the final code to a quality we are happy with. If you are serious about this process, there is no way you are producing 1000s of lines of code a day, unless it is trivial boilerplate.
In terms of higher-level abstractions, I agree this is one particularly treacherous rung on the ladder of abstractions. Previous abstractions like compilers or garbage collectors at least had more structure/rules to rely upon. I don't know exactly how this will look, but I don't think we will rely solely on banging on the output: we will also be spot-checking the source code, using profilers or other tools to inspect the behaviour of systems, and asking the agent to explain the architectural decisions made. I do believe that people who care will still find ways to do good work.
My agentic workflow probably differs somewhat from the majority of others here, but I can positively guarantee you that both the quality and quantity of my output is significantly higher than it has ever been, in my 20-something years of writing code. And at least 90% of the code I've written this year was output by an LLM. You can keep sticking your head in the sand, in the end it will only be to your own detriment.
Well you have obviously already made up your mind, so have fun with your confirmation bias. We'll all be over here having a good time, getting more work done. Feel free to come over when you put down your grudge.
This is an unpopular take, but when I was doing undergrad maths in an old-school two-semester course with one exam (exercises + oral) to cover it at the end, I was able to get a 60-80% score on the exercises when I did just theory as prep.
I couldn't solve the exercises that relied on tricks/shortcuts you only learn by doing a lot of exercises, but for many of them, these are the same tricks/shortcuts used in the proofs.
This was indeed rare among students, but let's not discount that there are people who _can_ learn from well-systemized material and then apply it in practice. Everyone does this to some extent, otherwise everyone would have to relearn everything from the basics.
The problem with SW design is that it is not well systemized, and we still have at least two strong opposing currents (agile/iterative vs waterfall/pre-designed).
It still surprises me how effective the /simplify skill is.
I’ve also had some great results with a /reflect skill that asks the agent to look at the work in the broader context of the project. But those are the only two skills I use regularly that aren’t specific to our company, codebase, or tools.
Some engineers I work with have had less-than-desirable results with /simplify, but overall it seems to work! I used to use some of the humanlayer subagents, but they haven't been updated in several months.
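For anyone who hasn't set one of these up: a skill is basically just an instruction file the agent loads on demand. Here's a minimal sketch of what a /simplify-style skill could look like, assuming Claude Code's SKILL.md layout with YAML frontmatter; the name and instructions are illustrative, not the actual skill mentioned above.

```markdown
---
name: simplify
description: Simplify the most recent changes without altering behaviour.
---

Look at the diff of the current branch against main and:

1. Remove dead code, unused parameters, and needless indirection introduced by the change.
2. Collapse new abstractions that have only a single caller.
3. Prefer the existing idioms of the surrounding codebase over newly invented ones.
4. Do not change observable behaviour; rerun the existing tests after each edit.
```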
> I'd say that by purging stuff from the brain we are losing thinking itself
The idea that there will be less to think about seems a bit short-sighted. Humans are very good at moving to higher levels of abstraction, often with more complexity to deal with, not less.
You can’t make up your mind about a model by using it on one task. Calling it such a bad downgrade after that is especially ludicrous. I’ve had great experiences with it this morning.
I also had Opus 4.7 and Opus 4.6 do audits of a very long document using identical prompts. I then had Codex 5.4 compare the audits. Codex found that 4.6 did a far better job and 4.7 had missed things and added spurious information.
I then asked a new session of Opus 4.7 if it agreed or disagreed with the Codex audit and it agreed with it.
People will accept it as a way to build good software.
Many are still in denial that you can do work that is as good as before, quicker, using coding agents. A lot of people think there has to be some catch, but there really doesn’t have to be. If you continue to put effort in, reviewing results, caring about testing and architecture, working to understand your codebase, then you can do better work. You can think through more edge cases, run more experiments, and iterate faster to a better end result.
I think the anti-AI stance has been reversing on HN as tooling improves and people try it. It’s only been a little over a year since Claude Code was released, and 3 or 4 months since the models got really capable. People need time to adjust, even if I would expect devs to be more up-to-date than most.
People’s willingness to argue about technology they’ve barely used is always bewildering to me though.
Generally I think this happens when people don’t monitor for errors on a regular basis. People only notice if things are actively broken for customers, and tons of small non-fatal bugs slip through and build up over time.
It is not just startups or small companies embracing agentic engineering… Stripe published blog posts about their autonomous coding agents. Amazon is blowing up production because they gave their agents access to prod. Google and Microsoft develop their own agentic engineering tools. It’s not just tech companies either: massive companies outside tech are frequently announcing their partnerships with OpenAI or Anthropic.
You can’t just pretend it’s startups doing all the agentic engineering. They’re just the ones pushing the boundaries on best practices the most aggressively.
That is why a fully automated firm would be a paradigm shift. Instead of requiring someone to be responsible and to QA things, you just let AI systems be responsible internally, and hold the company as a whole responsible for legal concerns.
This idea of an automated firm relies on the premise that AI will become more capable and reliable than people.
In this regard, a company cannot be created without a single person tied to it, at least legally; even shell corporations have a person on record as being responsible. So there needs to be some human who is a part of it. In any "normal" organization, if a person is tied to the outcome of the company, they presumably care about it, and if the AI does good work 99.99% of the time but can still make mistakes, that person will still be checking off on all its work. Which leads to a system of people reviewing and signing off on work, not exactly a fully autonomous firm.