I heard a talk from a VP at NVIDIA a couple of months ago and he echoed this. Essentially their policy is "you are still fully responsible for the code you ship, whether AI helps with it or not"
this is a good policy, as long as the productivity expectations match it. The problem happens when you combine "you're responsible for what you ship" with "you need to be 100x faster"
Depends on what is meant by "fully responsible" I guess. At my company, non-engineers push code to production where the only reviewer is frequently an LLM; if the code is broken, they get an LLM to fix it. The human does not understand the code, they are not even trying to; this is pure vibe coding. We also have engineers who push code to production that they have not written, have not fully read, and that has not been read by another human (at least not in detail).
I would say that counts as "not having that policy". Based on what management tells us, we are dead if we don't operate this way.
I think being responsible for the code is a better framing. I run a saas and I don’t always review all the code, but this thing supports my family, so I am acutely aware that I’m responsible for what it does. My customers aren’t going to let me blame the agent for fucking up their workflows.
But that still doesn’t mean I review all the code. I tend to review defensively, based on the potential for harm if this piece of code is broken. And I rely a lot on tests, static analysis, canaries, analytics, health checks, etc. to reduce risk for when I’m wrong. So far it’s working.
Libraries that automatically throw errors for status codes in the 400 and 500 ranges are pretty obnoxious (looking at you, axios). It adds unnecessary overhead, complexity, and bad ergonomics by hijacking control flow from the app.
Responses with status codes in the 400 range are client errors, so the client shouldn't retry the same request. So a 404 is appropriate despite how annoying a library might be at handling it. Depending on which language/ecosystem you are using, there are likely more sane alternatives.
Completely agree on the axios part - one implication of that is you can't statically type the error response shapes (since exceptions can't be typed). Whereas with fetch you can have a discriminated union based on the status code (eg: https://github.com/mnahkies/openapi-code-generator/blob/main...)
Although I do feel like I've seen too many instances of a 404 being used for an empty collection where it would make more sense to return `[]` and treat it as an expected (successful) state.
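A minimal sketch of what that discriminated-union approach can look like; the endpoint, response shapes, and helper names here are made up for illustration, not taken from any real API or from the linked generator:

```typescript
// Sketch: model response shapes as a discriminated union keyed on the status
// code, so callers must narrow before touching the body. Nothing throws on a
// 404 - the type system forces each case to be handled.

type UserResult =
  | { status: 200; body: { id: string; name: string } }
  | { status: 404; body: { error: string } }
  | { status: "unexpected"; httpStatus: number };

// Pure helper so the narrowing logic is testable without a network call
function classifyUserResponse(httpStatus: number, body: unknown): UserResult {
  switch (httpStatus) {
    case 200:
      return { status: 200, body: body as { id: string; name: string } };
    case 404:
      return { status: 404, body: body as { error: string } };
    default:
      return { status: "unexpected", httpStatus };
  }
}

// Hypothetical usage with fetch: the 404 is just another expected state
async function getUser(id: string): Promise<UserResult> {
  const res = await fetch(`/users/${encodeURIComponent(id)}`);
  return classifyUserResponse(res.status, await res.json());
}
```

The compiler then rejects code that reads `body.name` without first checking `status === 200`, which is exactly what you lose when the library throws instead.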
Generally true, although 429 is often used for rate limiting, so a back-off and retry is appropriate. 409, 412, and 428 may also be retriable depending on the specific semantics of the given situation. 421 apparently shows up commonly with HTTP/2 connection reuse and is retriable. 423 and 425 too, potentially.
It would have been nice if there were an actual grouping of retriable and non-retriable codes, but in reality it's a complete mess.
But at a minimum, beware of 429. That's not a permanent failure, and it's one you'll hit frequently; it needs a careful retry.
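Roughly, the retry decision described above could be sketched like this. The backoff defaults are arbitrary placeholders, and the set of "retriable" 4xx codes is a judgment call based on this thread, not anything from a spec:

```typescript
// 4xx codes that are often safe to retry after a delay. 409/412/428 are
// deliberately excluded: whether they're retriable depends on the request's
// semantics, so they need app-specific handling.
const RETRIABLE_4XX = new Set([408, 421, 423, 425, 429]);

function shouldRetry(
  status: number,
  retryAfterHeader?: string
): { retry: boolean; delayMs: number } {
  if (status === 429) {
    // Respect Retry-After (in seconds) when the server sends it;
    // fall back to a placeholder delay otherwise.
    const secs = retryAfterHeader ? Number(retryAfterHeader) : NaN;
    return { retry: true, delayMs: Number.isFinite(secs) ? secs * 1000 : 1000 };
  }
  if (RETRIABLE_4XX.has(status)) return { retry: true, delayMs: 1000 };
  if (status >= 500) return { retry: true, delayMs: 1000 };
  // Remaining 4xx codes are client errors: retrying the same request
  // unchanged will just fail again.
  return { retry: false, delayMs: 0 };
}
```

A real implementation would also cap attempts and add jitter to the delay, but the point is that "4xx means don't retry" has enough exceptions to deserve an explicit table.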
15% in absolute terms, 22% in per capita terms. And it is state policy to allow no new ICE cars within ten years, and no net emissions within 20 years. Investing in a refinery today would obviously be folly.
I remember back in the day Heroku had a huge store of integrations that you could just turn on with a click and they worked like that. You'd get a New Relic account that was tailored to dyno performance and you accessed it via your Heroku dashboard.
It became "the way" a lot of these PaaS systems operated, and I'm sure the goal was to take some percentage once you grew beyond the free tier, which makes sense for the PaaS partner.
For sure, revshare is standard on those partnerships.
Fun(ny) fact: all the companies that started out on Heroku back then are still locked into those Heroku-captive tenant accounts on those partners, because contractually, the partner is not allowed to transition such an account to direct billing. One company I've worked with has had all their infra moved off Heroku for almost a decade, but their Sendgrid account, which has hundreds of subtenants that each have custom domains configured, can still only be logged into via Heroku. They'd have to rebuild that whole thing from scratch (including making all their customers redo DNS validation) to move to a real Sendgrid account.
I'm sure Heroku earns Salesforce a really healthy revenue stream based on this.
I was JUST working on this yesterday, based on the solarpunk forum article that was on Hackaday.
I was trying to do Enigma with a captive portal wifi setup and a (heavily stylized) terminal in a browser. Figured a pi zero w2 might do the job on solar.
The idea was occasional FidoNet pickup, some door games, and probably a disabled file base. I could seed a few on friends' and family's porches in the neighborhood, and just see if anyone uses them.
It (enigma) was proving to be a real pain though, and I decided to shelve it for a bit.
Reticulum/Rnode is one option, but then mesh-core-tastic, signal, session, simplex, briar, whatschat, insta-don't sell a gram (no more e2ee) etc. etc. Let me know when the cool kids figure it out, you can reach me on my fossil instance over i2p :D
Actually ignore my comment I misunderstood the premise. I meant not vibe coding is the way to save time with production issues. Not the other way around!