I consider it the government's job to keep the population in a state where they can be a net benefit to society. Obviously this needs to be balanced against personal freedoms.
The gamble is that you can cruise on the senior engineer’s diminishing understanding for a few years until models become good enough that you don’t need any humans in the loop and you can fire all those expensive seniors.
The tragedy is having a bunch of those senior engineers writing blog posts and whatnot about how productive they are, without realising that it means the business now needs fewer of them.
I suppose that if you don’t believe that models will be good enough to work completely without senior engineer help, positioning yourself as a master prompter is a good move to improve your chances of not getting fired.
If all you have is being good at prompting, you're gone. Business is going to prefer new grads who have taken some class in AI prompting (which many schools now offer). It doesn't matter if the class isn't any good. What matters is the idea that they have formal training in this thing, and that they are willing to work for far less pay than any senior.
Safety critical software is mostly a compliance dance that incidentally produces artifacts with lower defect rates than usual. LLMs can help with safety critical code as long as a human signs their name that they are responsible for its behavior.
When I'm sitting in a plane running CAS firmware, I'd like to think it wasn't written by an LLM, and that my death in the case of a CAS failure isn't chalked up to "some engineer somewhere gets in trouble".
There probably already is generated code in there, only it was generated from UML. I don’t think that LLM generated code will be treated differently from the point of view of the relevant regulations.
That doesn’t matter. Once the code is generated it doesn’t change. The reviewed artifact in a safety critical codebase is the last abstraction layer before a fully certified compilation pipeline. So usually it’s not the UML but the generated code.
Does that matter that much in practice? I bet lots of customers are okay with software that crashes 10x as much if it costs 10x less. There is already a ton of shitty software that still sells.
Politicians should probably not use Signal but something that is controlled by the government and, for example, doesn't allow "accidentally" deleting incriminating messages.
If politicians were effectively controlled by the government rather than by some independent party, those mysterious "oops, accidentally deleted it" problems would increase.
Designing infrastructure and processes to reduce human error is a very important topic.
With the recent AI boom, many engineers are working on figuring out how to build infrastructure that reduces AI errors. We are rediscovering many techniques (e.g. writing good specs) that were once used for humans but then abandoned.
I would argue that subsidized solar panels and batteries from China are the most important factor. If renewables weren't economically competitive, we'd see approximately zero deployment.
Not to forget that storage solutions have become viable as well. Generating renewable power is only part of the equation; storage was the missing variable that had to be filled in before the whole equation could compute.
Yes, but you could argue that one key driving force behind their policies is the knowledge that a Republican president was going to mess up their oil supply sooner or later.