> I'll share more details about where the Ghostty project will be moving to in the coming months. We have a plan but I'm also very much still in discussions with multiple providers (both commercial and FOSS).
what a cliffhanger!
As someone with similar warm feelings for GitHub, it's kind of sad to see the fragmentation but I have similar frustrations with the recent outages. Perhaps it's time to explore the idea of unbundling the social/discovery layer from the code hosting/dev tool so we can live between the myriad git/jj hosts but still do "social coding" together.
I'd guess the same has always been true for READMEs / human dev docs. Of course it doesn't transfer directly but still feels incredible to be in an age where we can measure such (previously) theoretical things with synthetic programmers.
Yeah, isn't this obvious? Bad docs create triple work: you do it wrong (1), you figure out it's not working because the doc is wrong (2), then you do it the right way (3). Between 2 and 3 is figuring out what the right way is, which a good doc ideally shortcuts.
But obviously if you tell somebody "make a boiled egg. To boil an egg you have to crack it into the pan first," that's a lot worse than just "make a boiled egg." Especially when you have an infinitely trusting, zero-common-sense executor like an agentic model.
It would be really cool to do a causality investigation to determine which one of these boosts it so much / quantify how much each matters. Who knows, they may all interact in a sum-is-greater-than-parts way that only improves the score when shipped altogether.
Great point; I wasn’t sure if anyone would see this post so I spent most of my time on the docs. I just added a few visual examples to the post. Thanks!
This is singularitarian fallacies all over again, like "being able to make something smarter than a human means infinitely smart, because it can just keep on making one smarter", while ignoring the multifaceted nature of intelligence and the time and other costs involved in creation. It all gets handwaved away as superintelligence somehow enabling goddamned sorcery that ignores physical constraints. Except reality does not work that way.
It reminds me of the 'Einstein's superintelligent cat' refutation of such fallacies. It went something like this: imagine Einstein has a superintelligent cat. The room has only one door and it is locked. The cat is not capable of opening the lock due to its lack of manual dexterity. The cat does not want to go into the carrier. Einstein, however, is an order of magnitude greater in mass. As much as the cat might want to escape Albert Einstein's grip, it cannot. The superintelligent cat is going in the carrier.
The point being that, no, controlling or creating AI does not in fact equate to controlling society, no matter how smart it gets. Even if we were so incredibly stupid as to wire it up to actually control an entire munitions factory, it still couldn't take over society, and it would only take one bombing run or called-in artillery strike to end the situation.
In the real world we already trust private ownership of firearm factories, missile factories, and tank factories without a serious risk of a coup. Yet somehow AI is supposed to be what makes its owners god-kings? It strains credulity.
These arguments have been going on for more than a decade and have been silly the whole time.
> It reminds me of the 'Einstein's superintelligent cat' refutation to such fallacies.
One of the many problems with this "refutation" is that in reality not only does nobody bother to lock the superintelligent cat in a room and leave it no available actions, but you're lucky if they don't hook the cat up directly to the internet. It doesn't matter whether you could maybe control a superintelligence if you were very careful and treated it very seriously, when nobody is even trying, much less being very careful.
Hi, creator of crust here. The binary size varies between platforms: for the hello-world CLI, the smallest build is 58.1M on darwin-arm64 and the largest is 109M on windows-x64. Hope this helps!