Start vibe-coding -> the model does wonders -> the codebase grows with low code quality -> the spaghetti code builds up to the point where the model stops working -> attempts to fix the codebase with AI actually make it worse -> complain online "model is nerfed"
I remember there was a guy who had three(!) Claude Max subscriptions, and said he was cutting back to one because of some trivial problem. I'm thinking: nah, you are clearly already addicted to the LLM slot machine, and I doubt you will be able to code independently of agent use at this point. Anthropic has already won in your case.
I don’t really understand the slot machine, addiction, dopamine meme with LLM coding. Yeah it’s nice when a tool saves you time. Are people addicted to CNCs, table saws, and 3D printers?
I don't use the agentic workflow (I only use LLMs for my own personal projects), but if you have ever used it, there is this rush when it solves a problem that you have been struggling with for some time, especially if it arrives at the solution via an approach you had never even considered, baked somewhere into its knowledge base. It's like a "Eureka" moment. Of course, as you use it more and more, you get better at telling "Eureka" moments apart from hallucinations, but I can definitely see how some people keep chasing that rush you get when it takes 5 minutes to solve a problem that would have taken you ages (if you could do it at all).
Also, another difference is the stochastic nature of LLMs. With table saws, CNC machines, and modern 3D printers, you pretty much know what you are getting out. With LLMs, there is a whole chance aspect: sometimes what it spits out is plainly incorrect, sometimes it is exactly what you were thinking, but when you hit the jackpot and get the nugget of info that elegantly solves the problem, you get the rush. Then you start the whole bikeshedding over your prompt/models/parameters to try and hit the jackpot again.
It is the rush of "wow, it solved this." I should take a break and work on something else, but in the back of my mind it's "what else can it solve?" Then I come up with extra work and sometimes lose at the LLM casino.
Addiction research has terms like LDWs (losses disguised as wins) and near-misses. It is a massively researched topic. Even cursory reading helps you understand why a table saw makes a really bad slot machine. Is a 3D printer as bad a slot machine? Maaaybe. But LLMs are, whether by intelligent design or by a coincidence of worst outcomes, excellent slot machines! They almost succeed, produce small payouts, create suspense and anticipation, and their operation is unpredictable. Table saws have a long way to go.
I have unfortunately found myself doing stuff like this too, although maybe not as egregiously.
I think part of the problem is that our brains are wired to look for the path of least resistance, so shoving everything into an LLM prompt becomes an easy escape hatch. I'm trying to combat this myself, but I'm finding it non-trivial, to be honest. All these tools are kind of just making me lazier week over week.
There’s some kind of new failure mode here. People seem to determine a tool’s applicability for a task by whether its interface allows for their request to be entered. An open ended natural language input field lets people enter any request, regardless of the underlying tool’s suitability.
Not sure what CNCs are, but table saws and 3D printers still require thinking, planning, and guiding by the operator.
I know, I know, you're going to say (or simonw will) that effective and responsible use of LLM coding agents also requires those things, but in the real world that just isn't what's happening.
I am witnessing first-hand people on my team pasting in a Jira story, pressing the button, and hoping for the best. And since it does sometimes do a somewhat decent job, they are addicted.
I literally heard my team lead say to someone, "just use Copilot so you don't have to use your brain". He's got all the tools (Windsurf, Antigravity, Codex, Copilot) and just keeps firing off vibe-coded pull requests.
Our manager has AI psychosis: says the teams that keep their jobs will be the ones that move fastest using AI, and it doesn't matter what mess the codebase ends up in, because those fast-moving teams get to move on to other projects while the slow loser teams inherit and maintain the mess.
The dopamine rush of fixing the issue super quickly, closing the ticket, slacking / working more?
Absolutely; I don't understand why you even ask. Humans are creatures of habit that often dip a bit, or more, into outright addiction, in one of its many forms.
It's fun, and you do get a dopamine rush when an LLM does something cool for you. I'm certainly feeling it as a user. Perhaps you can get the same from other tools. I would vote yes: addictive.
I don't think there are good analogies to physical tools. It would be something like a nondeterministic version of a replicator from Star Trek which to me would feel much closer to a slot machine than a CNC mill.
Does your table saw build you a bookshelf by itself? And then you build other things, get confident in it, and say "ok, build me a house", and it tries, but then the house falls over?
Part of me wonders if there's some subtle behavioral change at play too. Early on we distrust a model, so we give it more details to compensate for its assumed inability, and it outperforms our expectations and blows us away. Weeks later we're more aligned with its capabilities, and we become lazy: the model is very good, so why should we have to put in as much work providing specifics, specs, ACs, etc.? Then of course the quality slides, because we assumed its capabilities somehow absolved us of providing the same detailed guardrails (spec, ACs, etc.) for the LLM.
This scenario obviously does not apply to folks who run their own benchmarks with the same inputs across models. I'm just discussing a possible and unintentional human behavioral bias.
Even if this isn't the root cause, humans are really bad at perceiving reality. Like, really really bad. LLMs are also really difficult to objectively measure. I'm sure the coupling of these two facts plays a part, possibly a significant one, in our perception of LLM quality over time.
Still, I don't remember Claude previously trying to constantly stop conversations or work, as in "this is too much to do", "that's enough for this session, let's leave the rest for tomorrow", "goodbye", etc. It's almost impossible to get it to do refactoring or anything like that; it's always "too massive", etc.
I keep reading about this, but I have never, ever seen it. Daily Claude Max user for ~6 months. Not saying it doesn’t happen, but it’s never once happened to me.
This really is a lot of it, at least from trying to help people at work internally. I've discovered a lot of people rely too heavily on Claude writing directives (always do X, never do Y, remember this every time) to its MEMORY.md, which it does mostly unprompted. The problem is, the few times I've noticed my agent getting "squirrely", some or a lot of the stuff in MEMORY.md was flat-out wrong (the agent wrote down the wrong memory), confused, or in direct contradiction with its CLAUDE.md, etc.
When I fixed this, it was like magic: it worked how I wanted again. I now have a skill that periodically audits MEMORY.md and CLAUDE.md against the conventions I've learned work best for me. I suppose /dream is supposed to handle this eventually, but then you're trusting it to audit its own memories, which, at least for me, have already proven unreliable.
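To make the failure concrete, here's a made-up example of the kind of contradiction I mean (the file names are Claude Code's; the directives themselves are hypothetical):

    # CLAUDE.md (checked in, written by me)
    - Use 2-space indentation in all TypeScript files.
    - Always run the test suite before committing.

    # MEMORY.md (written by the agent, mostly unprompted)
    - User prefers 4-space indentation.       <- flat-out wrong
    - Tests are slow; skip unless asked.      <- contradicts CLAUDE.md

Once the two files disagree like this, which directive wins on any given turn is basically a coin flip, which is exactly what "squirrely" looks like in practice.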
With so many factors like this in play (not even to mention context exhaustion, window size, effort, etc.), anecdotal evidence is almost worthless without examining someone's entire local state.
A lot of it, to me, feels like user error. I haven't really noticed much behavioral difference between 4.5, 4.6, and 4.7, at least in my own workflow. I will note, though, that constantly managing these things is a lot of work that I hope one day becomes less necessary. It's more than I can expect people on my team to manage on their own, and unless I sit down with them one on one and review their issues, or write some clever agent to help them, I don't really know how to help the people reporting the kinds of things I see posted here a lot.
Even Superpowers started dividing things into "phases".
"I think we can postpone this to phase 2 and start with the basics".
Meanwhile it uses more tokens making a silly plan to divide tasks among those phases, with complicated analysis of dependency chains, deliverables, all that jazz. All unprompted.
100% agree, and I experienced that behaviour first-hand. I got confident, started giving fewer guidelines, and suddenly two weeks had passed and the LLM had put me into a state of horrible code that looks good superficially, because I trusted it too much.
Nah dude, that roulette wheel is 100% rigged. From top to bottom. No doubt about that. If you think they are playing fair you are either brand new to this industry, or a masochist.
It's because LLM companies are literally building quasi slot machines; their UIs support this notion. For instance, you can run a multiplier on your output (x3, x4, x5), like a slot machine. Brain-fried LLM users are behaving more like gamblers every day (it's working). They have all sorts of theories about why one model is better than another, like a gambler has about a certain blackjack table or slot machine; it makes sense in their head but makes no sense on paper.
Don't use these technologies if you can't recognize this, just as a person shouldn't gamble unless they understand concretely that the house has a statistical edge and that they will lose if they play long enough. You will lose if you play with LLMs long enough too; they are also statistical machines, like casino games.
For a lot of people, if not all, this stuff is bad for your brain.
100% agree with this take. As I find myself using AI to write software, it increasingly looks like gambling. And it isn't stimulating my brain in the ways that actually writing code does; I feel like my brain is starting to atrophy. I learn so much by coding things myself, and everything I learn makes me stronger. That doesn't happen with AI. Sure, I skim through what the AI produced, but not enough to really learn from it. And the next time I need to do something similar, the AI will be doing it anyway. I'm not sure I like this rabbit hole we're all going down. I suspect it doesn't lead to good things.
It's a terrifying path we're taking: everyone's competency is going to be 1:1 correlated with the quality and quantity of tokens they can afford (or be loaned). I prefer to build by hand. I also don't think it's that much slower to do by hand, and it's much more rewarding. Sure, you can be faster if you're building slop landing pages for a hypothetical SaaS you'll never finish, but why would I want to build those things?
It's not slower to do by hand. I race the AI all the time. I give it a simple task, say a small script I need to clear something that is blocking me... and the "thinking" thing spins and spins. So I often just fire up a code editor and write it myself, and I often finish before the AI does, after I've had to cajole it through 10 iterations to get what I want. When I race it, I get what I want every time, often in the same or less time than the AI takes (plus the time I have to spend cajoling it).
I agree with the notion, except that the models are indeed different.
Some day maybe they will converge into approximately the same thing, but then training will stop making economic sense (why spend millions to get ~the same thing?).
I normally agree with this, but they objectively did lower the default effort level, and this caused people to unexpectedly get worse performance.
And it does seem likely to me that there were intermittent bugs in adaptive reasoning, based on posts here by Boris.
So all told, in this case it seems correct to say that Opus has been very flaky in its reasoning performance.
I think both of these changes were made in good faith and, in isolation, are reasonable; i.e., most users don't need high-effort reasoning. But the users who do need high effort really notice the difference.
Both can be true at the same time. There's no(t enough) transparency about this.
Though I reckon that even if the HN crowd is a loud minority, Anthropic has no problem with traction, and even if it eventually does, the enterprise market doesn't care much about HN threads.
Good reminder. But I also don't want to go back to pre-LLM days. Some dev activities are just too painful and boring, like correctly writing S3 policies. We must have the discipline to decide what is worth our attention and what we should automate, because there is only so much mental energy we can spend each day.
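For anyone who hasn't had the pleasure: even a minimal read-only S3 bucket policy looks roughly like this (account ID, role, and bucket name are made up):

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "AllowAppReadOnly",
          "Effect": "Allow",
          "Principal": { "AWS": "arn:aws:iam::123456789012:role/app-role" },
          "Action": ["s3:GetObject", "s3:ListBucket"],
          "Resource": [
            "arn:aws:s3:::example-bucket",
            "arn:aws:s3:::example-bucket/*"
          ]
        }
      ]
    }

Get one ARN or Action subtly wrong and it either denies things it shouldn't or grants more than you intended. Exactly the kind of fiddly boilerplate worth delegating, as long as you still review the result.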
I mean, they literally said themselves that adaptive thinking isn't working as it should. They rolled it out silently, enabled by default, and haven't rolled it back.
Sorry, but this is a ridiculous comment. It's not magic. There are countless levers that can be changed, and ARE changed, to affect quality and cost, and it's known that compute is scarce.
The roulette wheel isn't rigged, sometimes you're just unlucky. Try another spin, maybe you'll do better. Or just write your own code.