> MIDA executive director Paul Morris told county commissioners that the facility “will not take one electron” from the existing grid
Cool. Will it drive up the price of the existing grid's existing electrons though? It'll increase demand for the natural gas that keeps those electrons moving.
> If he hasn't, then why would he refuse to say so?
Because he doesn't need to, and shouldn't, respond to a blogger? We continually point out that no one should ever talk to the police; the same absolutely goes for the media, particularly when you're a civil servant.
Sometimes "to either x or y" doesn't mean "to do one of x or y"; sometimes it means "to be able to do both x and y (though not necessarily at the same time)".
If you were driving over an unmarked, unbarricaded bridge that Google Maps directed you across on a dark and rainy night, are you 100% certain you'd be driving slowly, undistracted, and checking that the bridge hadn't collapsed?
This analogy doesn't work because you can assume that if a bridge exists and doesn't have traffic cones/barriers, it was probably built by humans and is fit for use (i.e. isn't half built). No such assumption holds for LLM outputs, which are wholly generated by AI. If I were in some simulation where the environment was vibecoded by AI, I'd be very careful too.
That's kind of what I was trying to say, or at least it kind of goes along with it. This meme of "somebody drove into a river just because Google Maps told them to" is a grossly distorted retelling of a fatal accident. One could twist any tragedy into a glib soundbite about how the dead stupidly trusted other people. The street could collapse under my feet as I'm crossing it and I drown in the sewer, and people on the internet would be laughing about how I dived into the sewer just because a traffic light told me to. There were some cracks in the asphalt, so obviously I should have known it wasn't safe to walk across, but I wasn't thinking for myself.
I suppose part of the reason so many people are so dangerously trusting of LLMs is that they assume that if the LLM was put out there by decently responsible humans (doubtful, but understandable), then the LLM itself should be decently responsible too? That's where the analogy does break down.
Yeah... Non-sentient monkey "organ sacks" as a replacement for animal testing sounds great, but those organs aren't going to function or even develop the same without a brain. At best, I think this could only be another step to filter out unsafe compounds between testing on cells and testing on whole animals. Potentially with misleading results, I imagine.
Could you give a concrete example or two of what exactly this system does? Like, what's a scientific result or two it has formally mathematically proved?
It would seem their service identifies only phishing sites as legitimate: 100% of the sites they deem legitimate are phishing sites. Incredible.
I find it hard to imagine that the people in a position to kill those processes could ever be that zealously in love with AI, but recent events have given me a tiny bit of doubt.
I mean, in the cases where higher command said "launch your nukes," lower command refused, and everything turned out OK, higher command surely sees it as good that things worked out this time, but it also looks like a problem with the system that needs to be automated away. So a computer that will launch all the nukes when ordered must look very appealing compared to humans who might save humanity.
The ones who give it free rein to run any code it finds on the internet on their own personal computers, with no security precautions, are maybe getting a little too excited about it.