quicklywilliam's comments (Hacker News)

Agreed, there is probably a theoretical world where we got enough money/compute together and had this explosion happen earlier.

Or perhaps a world where it happened later. I think a big part of what enabled the AI boom was the concentration of money and compute around the crypto boom.


not really. early deep learning models were run on single consumer-grade GPUs. the inflection occurred _right_ when parallel computing became fast enough to do backprop in a reasonable amount of time with performance better than tree methods.

at that time, all the compute resources in the world would not have been enough to train the models of even the last ~6 years, probably longer.


This is a really cool list and repository of ideas. Seems like the focus of the work is on making knowledge legible to AI. I wonder if you (or others) have done a similar level of thinking about the inverse – making AI more legible to humans?

Curious if you looked at using SwiftDown (https://github.com/qeude/SwiftDown), MarkupEditor (https://github.com/stevengharris/MarkupEditor) or any other libraries for live/WYSIWYG Markdown editing.

There’s been a lot of discussion lately about Anthropic and others turning the screws on their subscription plans, sending costs skyrocketing for enterprise customers. This is driving more and more folks to consider cheaper, non-proprietary models (which are getting more capable) for some of their tasks. This article goes into both of these trends.

I wrote a thing about why the current direction of AI tends to produce such terrible work — and what I think we could be doing with AI instead.

The real question is who bought the other 5,742 of them

I'm telling you, every time I see someone driving one it's the biggest dweeb alive

How is this deemed not a conflict of interest? SpaceX is almost totally funded by the government. You can't use government funds to buy your own products.

You must be new here

Well, clearly you can.

YouTube influencers who are showing off how successful they are and everyone should buy their courses to copy their success. Exhibit A for success: Cybertruck in background.

It's kind of weird. I haven't met a single person in real life that doesn't basically giggle out of embarrassment for the driver when a Cybertruck goes by. No one I know thinks they're cool in the slightest, and they're openly laughed at. It's bizarre that anyone thinks they're cool.

At least 5-10 people in the apartment building I live in, plus more on the streets of Nashville.

Interesting read. I don't know if I quite buy the evidence, but it's definitely enough to warrant further investigation. It also matches up with my personal experience, which is that tools like Claude Code are burning through more and more tokens as we push them to do bigger and bigger work. But we all know the frontier model companies are burning through money in an unsustainable race to get you and your company hooked on their tools.

So: I buy that the cost of frontier performance is going up exponentially, but that doesn't mean there is a fundamental link. We also know that benchmark performance of much smaller/cheaper models has been increasing (as far as I know METR only looks at frontier models), so that makes me wonder if the exponential cost/time horizon relationship is only for the frontier models.
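The claimed relationship can be made concrete with a toy log-linear fit. The data points below are invented for illustration (they are not METR's numbers): if cost grows exponentially with task time horizon, then log(cost) is linear in the horizon, and the slope gives the growth rate.

```python
import math

# Hypothetical (task_horizon_hours, training_cost_musd) pairs, invented
# for illustration only. If cost is exponential in horizon, log(cost)
# should be linear in horizon.
data = [(1, 10.0), (2, 40.0), (4, 640.0)]

# Least-squares fit of log(cost) = a + b * horizon
n = len(data)
sx = sum(h for h, _ in data)
sy = sum(math.log(c) for _, c in data)
sxx = sum(h * h for h, _ in data)
sxy = sum(h * math.log(c) for h, c in data)
b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
a = (sy - b * sx) / n

# exp(b) is the cost multiplier per extra hour of task horizon;
# for these made-up points each extra hour multiplies cost by ~4x.
print(math.exp(b))
```

Fitting in log space like this is the standard way to test an "exponential cost" claim; if smaller non-frontier models sit on a different (flatter) line, that would support the idea that the exponential relationship is specific to the frontier.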


> But we all know the frontier model companies are burning through money in an unsustainable race to get you and your company hooked on their tools.

Do we? Because elsewhere in the thread there are people claiming they are profitable on API billing and might be at least close to break-even on subscriptions, given that many people don't use all of their allowance.


Anthropic has 50% gross margins on their tokens.

Step 1) Bubble callers will be proven wrong in 2026 if not already (no excess capacity)

Step 2) Claims that models are not profitable will be proven wrong (when Anthropic files their S-1)

Step 3) FOMO and actual bubble (say around 2028/29)


If they had such a high margin, they wouldn't need to fuck around with token usage/pricing every three days.

I have no data to support this, but I think they just about break even on API usage and take overall loss on subscriptions/free plans.


Math / Economics 101 thought experiment.

You have a limited supply of 100 Coke cans to sell (that you bought for, say, $1 each)

Two large lines are forming to buy them. One line is offering an average of $3 per can and the other an average of $2 per can.

Tell me which line they would throttle/starve even though they make a profit out of it.

Also, when the lines formed you had no idea of the average prices, but now you are getting a clear picture. Would you change your strategy/pricing, or stick with your original "sell a can to everyone at the same initial $1 price"?
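The arithmetic behind the thought experiment can be sketched directly (toy numbers from the comment above; the function name is my own):

```python
# Toy model of the thought experiment: 100 cans bought at $1 each,
# two lines of buyers paying different average prices.
COST_PER_CAN = 1.00
SUPPLY = 100

def profit(cans_to_line_a, price_a=3.00, price_b=2.00):
    """Profit from splitting the limited supply between the $3 and $2 lines."""
    cans_to_line_b = SUPPLY - cans_to_line_a
    gross = cans_to_line_a * price_a + cans_to_line_b * price_b
    return gross - SUPPLY * COST_PER_CAN

# Serving the $3 line first always wins, even though the $2 line
# is profitable on its own:
print(profit(100))  # all cans to the $3 line -> 200.0
print(profit(50))   # split evenly            -> 150.0
print(profit(0))    # all cans to the $2 line -> 100.0
```

The point of the exercise: with a fixed supply, the rational seller starves the lower-paying line even though every can sold there is still profitable.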


If I owned two lines both selling the same thing (presumably here Coke is a stand-in for compute), I would throttle the $2 line. People without a choice might move to the $3 line.

Unfortunately, back in the real world, Anthropic is dealing with two issues:

1. They're throttling all lines. Their latest model uses more tokens overall. Tokens are being rationed and context is being lowered.

2. There's another line for Pepsi right over there. And it costs $1.25 per can.

Anthropic should be lowering their price to compete with OpenAI, but they're not. They're making it even more expensive.

So tell me, does that really look like Anthropic is running a (as some people say) >50% profit margin?


for all you know, you think you are standing in the $3 line, but it's really the $2 line, and the $3 line is BigCos, govt and others who have guaranteed demand for several years.

Can we see them?

https://www.theinformation.com/articles/anthropic-lowers-pro...

I have access to that article

https://www.saastr.com/have-ai-gross-margins-really-turned-t...

Like I said, the majority of people (including smart ones) are going to be surprised by the profit margins of AI labs, and there will be a mad rush to buy AI stocks until it reaches bubble proportions.

2025 was merely a 1996 "Irrational Exuberance" moment. We haven't seen the late 1999 mania yet


Great idea and implementation! If you are hesitant to install this for any reason, you can accomplish the same thing with this one-liner:

  # toggle the Touch ID unlock setting off, then back on
  sudo bioutil -ws -u 0; sleep 1; sudo bioutil -ws -u 1
Edit: here's a shortcut to run the above and then lock your screen. You can give it a global keyboard shortcut in the Shortcuts app. https://www.icloud.com/shortcuts/9362945d839140dbbf987e5bce9...

Hook this to a lid-angle-below-30° trigger in https://lowtechguys.com/crank and you can make it run simply by lowering the lid

At that point, why not just disable Touch ID?

For when the bad guys are too impatient to wait until you leave the computer, but not fast enough to stop you before the lid hits 30 degrees, while you keep the convenience the rest of the time.

There's been a ton of debate about whether we are in an AI bubble, but I've read a lot less about what will happen when the bubble bursts. I took it for granted that this is a bubble, and in particular that frontier AI companies are heavily distorting the market by subsidizing inference costs. Here's what might happen if those subsidies suddenly go away.
