Agreed, there is probably a theoretical world where we got enough money/compute together and had this explosion happen earlier.
Or perhaps a world where it happened later. I think a big part of what enabled the AI boom was the concentration of money and compute around the crypto boom.
not really. early deep learning models were run on single consumer-grade GPUs. the inflection occurred _right_ when parallel computing became fast enough to do backprop in a reasonable amount of time with performance better than tree methods.
at that time, all the compute resources in the world would not have been enough to train the models of even the last ~6 years, probably longer.
This is a really cool list and repository of ideas. Seems like the focus of the work is on making knowledge legible to AI. I wonder if you (or others) have done a similar level thinking about the inverse – making AI more legible to humans?
There’s been a lot of discussion lately about Anthropic and others turning the screws on their subscription plans, skyrocketing costs for enterprise customers. This is driving more and more folks to consider cheaper and non-proprietary models (which are getting more capable) for some of their tasks. This article goes into both of these trends.
How is this deemed not a conflict of interest? SpaceX is almost totally funded by the government. You can't use government funds to buy your own products.
YouTube influencers showing off how successful they are, telling everyone to buy their courses to copy that success. Exhibit A for success: Cybertruck in background.
It's kind of weird. I haven't met a single person in real life that doesn't basically giggle out of embarrassment for the driver when a Cybertruck goes by. No one I know thinks they're cool in the slightest, and they're openly laughed at. It's bizarre that anyone thinks they're cool.
Interesting read. I don't know if I quite buy the evidence, but it's definitely enough to warrant further investigation. It also matches up with my personal experience, which is that tools like Claude Code are burning through more and more tokens as we push them to do bigger and bigger work. But we all know the frontier model companies are burning through money in an unsustainable race to get you and your company hooked on their tools.
So: I buy that the cost of frontier performance is going up exponentially, but that doesn't mean there is a fundamental link. We also know that benchmark performance of much smaller/cheaper models has been increasing (as far as I know METR only looks at frontier models), so that makes me wonder if the exponential cost/time horizon relationship is only for the frontier models.
> But we all know the frontier model companies are burning through money in an unsustainable race to get you and your company hooked on their tools.
Do we? Because elsewhere in the thread there are people claiming they are profitable on API billing and might be at least close to break-even on subscriptions, given that many people don't use their full allowance.
You have a limited stock of 100 Coke cans to sell (that you bought for, say, $1 each).
Two large lines are forming to buy them. One line is offering an average of $3 per can; the other is offering an average of $2 per can.
Tell me which line you would throttle/starve, even though you make a profit from both.
Also, when the lines formed you had no idea of the average prices, but now you're getting a clear picture. Would you change your strategy/pricing, or stick with your original plan of selling every can at the same initial $1 price?
If I owned two lines both selling the same thing (presumably here Coke is a stand-in for compute), I would throttle the $2 line. People without a choice might move to the $3 line.
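The allocation logic in the analogy can be sketched as a tiny greedy calculation. This is only an illustration of the comment's reasoning; the demand figures (80 and 60 cans) are made up for the example:

```python
# Sketch of the can-allocation analogy: limited stock, two lines paying
# different prices, serve the higher-paying line first.
def allocate(stock, cost, lines):
    """Greedily serve the highest-paying line first; return profit per line.

    lines: list of (name, price, demand) tuples.
    """
    profit = {}
    for name, price, demand in sorted(lines, key=lambda l: -l[1]):
        served = min(stock, demand)  # can't sell more cans than remain
        stock -= served
        profit[name] = served * (price - cost)
    return profit

# 100 cans bought at $1 each; hypothetical demand: the $3 line wants 80
# cans, the $2 line wants 60.
print(allocate(100, 1.0, [("line_A", 3.0, 80), ("line_B", 2.0, 60)]))
# → {'line_A': 160.0, 'line_B': 20.0}
```

The point of the sketch: the $2 line gets starved first even though every sale to it is still profitable, which is the commenter's throttling argument in miniature.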
Unfortunately, back in the real world, Anthropic is dealing with two issues:
1. They're throttling all lines. Their latest model uses more tokens overall, tokens are being rationed, and context windows are being reduced.
2. There's another line for Pepsi right over there. And it costs $1.25 per can.
Anthropic should be lowering their price to compete with OpenAI, but they're not. They're making it even more expensive.
So tell me, does that really look like Anthropic is running a >50% profit margin, as some people claim?
for all you know, you think you are standing in the $3 line, but it's really the $2 line, and the $3 line is BigCos, government, and others with guaranteed demand for years to come.
Like I said, the majority of people (including smart ones) are going to be surprised by the profit margins of AI labs, and there will be a mad rush to buy AI stocks until it reaches bubble proportions.
2025 was merely a 1996 "irrational exuberance" moment. We haven't seen the late-1999 mania yet.
When the bad guys are too impatient to wait until you leave the computer but not fast enough to stop you before 30 degrees while keeping the convenience of life.
There's been a ton of debate about whether we are in an AI bubble, but I've read a lot less about what will happen when the bubble bursts. I took it for granted that this is a bubble, and in particular that frontier AI companies are heavily distorting the market by subsidizing inference costs. Here's what might happen if those subsidies suddenly go away.