
This was a real issue, and Anthropic recently acknowledged it:

https://www.anthropic.com/engineering/april-23-postmortem

Of course, it sucks when companies screw up ... but at the same time, they "paid everyone back" by removing limits for a while, and (more importantly to me) they were transparent about the whole thing.

I have a hard time seeing any other major AI provider being this transparent, so while I'm annoyed at Claude ... I respect how they handled it.




Amusingly, when a coworker was looking for this postmortem, they found a different postmortem, covering three Claude issues that caused quality degradation. That one was in the platform, not in Claude Code:

https://www.anthropic.com/engineering/a-postmortem-of-three-...

I think there's a certain amount of running with scissors going on here. I appreciate the transparency, but the time to remediation here seems pretty long compared to the rate of new features.


Yes, that was one issue. It's not the general degradation I've been talking about, though, which is ongoing.

I recall reading similar tales of woe with other providers here on HN. I think the gradual dialling back of capability as capacity becomes strained as users pile on is part of the MO of all the big AI companies.


The 'general degradation' is a myth. Check out https://isitnerfed.org/.

Random crowd anecdata is still anecdata.

You're not wrong, but anecdata is not data. Here's some more data: https://marginlab.ai/trackers/claude-code-historical-perform...


