Hacker News | hunterpayne's comments

"Quite honestly the firings that are happening are the ones who are not adopting the technologies, "

And that is why the hate exists. You as the CEO know nothing about how your business works. You neither actually try to understand nor have the technical background to understand. So you substitute gamed numbers. And in doing this, you set up your company to tank the industry that props up the world economy. And then you act like you are the rational one while doing it. There is nothing rational about how most CEOs act. There is a reason why companies do better under dev founders than under any other circumstance. There is a reason why dev CEOs do better than non-dev CEOs. Yet despite this, you will tank both your company and a substantial part of the industry just so you can get yours. That's why you are getting the hate. Ignorant indifference is just as objectionable as the caricature of a CEO you see in these posts.


> You as the CEO know nothing about how your business works.

That's too broad a statement and quite honestly, in my opinion and experience, wrong. There may be some this does apply to, but for the vast majority that I know it does not, and when you get to companies that are actually doing firings based on this, it's even fewer.

That isn't what the article is talking about. He is literally talking about running the JS code through an obfuscator and changing the default DB schema name. That isn't security and you know it.
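To make the point concrete, here is a minimal sketch (all values made up) of why client-side obfuscation isn't security: an encoded credential in shipped code is recovered in one line by anyone who holds the file.

```python
import base64

# Hypothetical "obfuscated" blob of the kind a JS obfuscator might embed:
# the value is encoded, not encrypted, so no key is needed to read it.
obfuscated = base64.b64encode(b"db_schema=app_prod;db_password=hunter2").decode()

# One line undoes the "protection" for anyone who can read the shipped code.
recovered = base64.b64decode(obfuscated).decode()
print(recovered)
```

Renaming the default DB schema fails the same way: the new name still ships inside the client code.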

"How can we solve this at a more fundamental level?"

Stop using AI for coding. Period...there is no other solution. You can't make it work, and nobody else can either. Without determinism, the entire process is useless. We need to stop acting like we don't all know this is true. We gave it a chance, it failed; time to move on to something else no matter how much the VCs and execs don't want to. Those who do move on have a chance; the others have no future in software.


The issue is that you will end up without a job if the trend continues. It's similar to many cases of technical innovation: you can still have a few workers who do handcrafted work, but most of them have to use the machines, which may produce work of inferior quality but at much higher speed.

The market realigns, and unless you handwrite the highest possible quality at a quick pace, you won't be competitive with the vibe-coders who can fix a hundred issues a month.

It was the same with GPS-assisted driving: now most people can't orient themselves on their own. Worse, there are no road signs with directions installed, meaning that you are stuck using the GPS.


"unless you handwrite the highest possible quality at a quick pace"

That's exactly what I do. I know I am lucky to be gifted in this skillset. But that's not a good reason to excuse people destroying the market for everyone.


I mean, why wouldn't they? Software developers are very often automating things in order to reduce the workforce, or the skill, needed for economic activities.

Would you refuse to work on a navigation app such as Waze, since it allows everyone to work with platforms like Uber and pushes traditional taxis out of the market?


I agree with what you are saying, but if I cannot get work, I may literally have nothing to eat tomorrow.

So while I agree with your point, it does not feel like a practical answer for my situation. For someone who is already well known and has enough reputation, refusing to use AI may be a matter of principle. But I am dealing with survival.

I do not think your answer is bad. But because this is a survival problem, it is difficult for me to risk everything on principle.

In other words, I know that your answer may be the morally correct one. If everyone boycotted this, perhaps it would not be adopted so aggressively.

But I cannot do that.

What I need is a way to use AI while degrading my own ability as little as possible, and while still preserving my skills.

I am not saying you are wrong. I am saying that your answer is too idealistic for someone in my position.


I'm not being idealistic. I'm being very practical. You have the survival problem exactly backwards. Continuing to use it is the real danger in a practical sense. That path only leads in one direction, and that direction isn't in your best interest.

It was a ghost town last time I was there, about 6 months ago. You could walk down the middle of Market St at 9am on a weekday because it's so empty. The only time I ever saw SF that empty was 9/11/01.

That's because Market Street is closed to cars now. It's essentially a pedestrian corridor with the trolleys/buses.

They made that change 10 years ago. There were still plenty of cars on it until 2020. Why are you lying to people?

You posted this about 19 days ago: "This is the country that takes a 2 hour nap every day. They also have a sleeping contest every year with a winner and everything. And Spain isn't hot like Mexico, where folks take 2 hours off in the tropical heat and make up for it in the evening because that's more efficient."

You are a fucking moron, not only have you no idea about my country, but you revel in your ignorance and preen publicly about its inferiority as confabulated by you. You are a disgusting piece of waste and I hope people like you in your country continue their path off the cliff of relevance and power they have chosen. I find you repugnant in the extreme.


I lived in Spain for a time during their summer. Don't like the truth, don't go on the Internet.

"which tracks for a non-expert"

So all agents then...because if you are an expert in a specific system, using an LLM probably slows you down rather than speeding you up.

PS The article seems to imply that the token the LLM was given was a role-based token. It then found ANOTHER token and used that instead.


Agree. My point is that the other secret should have been inaccessible without an escalation. The fact that it was available to the agent implies a lack of basic security controls; in fact, I would expect an agentic workload to have even more robust compensating controls.
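The escalation described here is exactly what scoped tokens are supposed to block. A toy sketch (token names and scope table are hypothetical) of the check that was evidently missing for the second token:

```python
# Hypothetical scope table: each token carries only what its task needs.
TOKEN_SCOPES = {
    "agent-token": {"read", "write"},                # no "delete" scope issued
    "admin-token": {"read", "write", "delete"},      # should never be agent-reachable
}

def authorize(token: str, action: str) -> bool:
    """Allow an action only if the token was explicitly granted that scope."""
    return action in TOKEN_SCOPES.get(token, set())

print(authorize("agent-token", "read"))    # permitted by the issued scope
print(authorize("agent-token", "delete"))  # escalation attempt is blocked
```

The incident implies the admin-scoped token was itself reachable from the agent's environment, which makes the scope check on the first token moot.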

"If I understand correctly, "

You don't. You are missing the part where the LLM had a token that blocked access as expected. Then the LLM searched the codebase, found a different token with the delete privs, and used that instead.

PS That warning happens in staging envs too; the LLM doesn't know which env is which, by design.


Huh, that's not what I gathered from the tweet at all. If I were going to write a five-whys analysis, the immediate cause is that the LLM wrongly decided to delete a volume, while the root cause is the bad design of co-locating staging and production data on the same volume. The writing was quite vague though; let's wait for a response from Railway.

That's not what happened.

If an API key with full perms was put in a place where the agent could access it, that is the biggest problem.

That somebody made a key that can delete prod when they don't need to delete prod is the underlying problem with that.

And underlying that still is that the staging environments were on the same account as prod.
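The account-level separation the last point calls for can be sketched as configuration (all names hypothetical): staging and production credentials belong to different accounts, so no single leaked key spans both.

```python
# Hypothetical per-environment credentials: separate accounts, separate keys.
ENVIRONMENTS = {
    "staging":    {"account": "acct-staging", "api_key": "key-stg"},
    "production": {"account": "acct-prod",    "api_key": "key-prod"},
}

def can_touch(env_of_key: str, target_env: str) -> bool:
    """A key only works inside the account that issued it."""
    return ENVIRONMENTS[env_of_key]["account"] == ENVIRONMENTS[target_env]["account"]

# A staging key found by an agent cannot reach production volumes.
print(can_touch("staging", "production"))
```

With the accounts merged, as in the incident, this boundary simply does not exist, and key hygiene is the only remaining defense.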


You’re very defensive in these comments - are you the author?

Because it's cheaper to hire a bot farm to spam comments on articles like this than to actually write well-engineered software?

"Never store secrets on disk."

Wait till you learn how that API stores cryptographic material.


What's your point? Obviously, a secure server storing encrypted data on disk in a manner where it is only accessible through a secured API is not what is being discussed here.

How do you think the LLM will do the required operations when the secrets are stored somewhere other than on disk? It will still need to get them, just like the application gets them when it has to do work.

> how do you think the LLM will do required operations when the secrets are stored somewhere other than the disk

Using a secret manager API? I'm not sure what you're getting at.


The LLM can use the secret manager API too; it sees how it's used in the application.
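Neither side of this exchange disputes that an agent can call whatever the application calls. What a secret manager changes is auditability and revocation, which a flat file on disk never provides. A toy in-process sketch (a real manager such as AWS Secrets Manager or Vault is a separate, access-controlled service; this class only illustrates the shape):

```python
class SecretManager:
    """Toy stand-in for a secrets service: nothing is written to disk,
    and every read leaves an audit trail that a flat file never would."""

    def __init__(self) -> None:
        self._store: dict[str, str] = {}
        self.audit_log: list[str] = []

    def put(self, name: str, value: str) -> None:
        self._store[name] = value

    def get(self, name: str) -> str:
        self.audit_log.append(name)   # access is observable and revocable
        return self._store[name]

mgr = SecretManager()
mgr.put("db-password", "s3cr3t")
password = mgr.get("db-password")     # fetched at use time, not read from disk
```

So yes, the LLM can make the same call, but the call shows up in the audit log and the credential can be rotated centrally, neither of which is true of a token sitting in a repo or on a filesystem.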

No, what is fake are all the people defending the LLM. Wait...that means I'm replying to a bot.
