
I always thought they should make a thin, metal, foldable 83 variant that just bends in the middle and looks like a cigarette case.

Geoffrey Hinton said that the breakthrough the AlphaGo team had was getting it to play against itself and improve that way, since it could then go beyond the human training data it had learned on. He said that an equivalent form of self-training for generalized information would let a superintelligence take off (this is from my memory, not an exact quote).

The TechCrunch article doesn't specify how, or on what kind of data, a recursive general AI could achieve such a thing. If it is possible that's exciting. Seems like a real philosophical question to answer: how could a general AI self-train?
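
A rough sketch of what "self-play" means in the AlphaGo case, just to make the idea concrete. Everything here (play_game, train, the data shapes) is a hypothetical placeholder, not anyone's actual code:

    import random

    def play_game(model_a, model_b):
        # Placeholder for a real game simulator: both sides are played by
        # the current model, and the outcome labels the training data.
        return {"moves": [], "winner": random.choice(["a", "b"])}

    def train(model, games):
        # Placeholder for a real learning update (policy/value networks).
        model["games_seen"] += len(games)
        return model

    def self_play_loop(model, iterations=10, games_per_iteration=100):
        for _ in range(iterations):
            # The model generates its own training data by playing itself,
            # so it is not capped by the human games it started from.
            games = [play_game(model, model) for _ in range(games_per_iteration)]
            model = train(model, games)
        return model

    model = self_play_loop({"games_seen": 0})

The open question is what the analogue of "games" and "winner" would be for general knowledge, where there is no built-in way to score an outcome.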


Silver has a couple of recent papers that probably give an idea of what they are up to:

>...Here we show that it is possible for machines to discover a state-of-the-art RL rule that outperforms manually designed rules. This was achieved by meta-learning from the cumulative experiences of a population of agents across a large number of complex environments... https://www.nature.com/articles/s41586-025-09761-x

and

>A new generation of agents will acquire superhuman capabilities by learning predominantly from experience. This note explores the key characteristics that will define this upcoming era. https://storage.googleapis.com/deepmind-media/Era-of-Experie...


Same question. Given the people involved I am inclined to take it seriously; given the money and hype involved I am somewhat less inclined.

Their website doesn’t even have a hint of what the approach is.


Yesterday I watched a video stating that evolutionary algorithms are becoming more relevant in machine learning again.

But if you think about our brain: when you learn something new, you play with it, recall it, and challenge the new information. Perhaps we can build something similar, a model adjusting itself until it's perfect.
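
As a toy illustration of that "adjust itself until it improves" idea, here is the simplest evolutionary form, a (1+1) hill climber. The target vector and numbers are made up purely for illustration; a real ML setup would mutate model weights and score them on a task:

    import random

    TARGET = [3.0, -1.0, 2.0]  # made-up values the candidate should converge to

    def fitness(candidate):
        # Higher is better: negative squared distance to the target.
        return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

    def evolve(generations=1000, mutation_scale=0.1):
        best = [random.uniform(-5, 5) for _ in range(3)]
        for _ in range(generations):
            child = [c + random.gauss(0, mutation_scale) for c in best]
            if fitness(child) > fitness(best):
                best = child  # keep the mutation only if it improved things
        return best

    print(evolve())  # ends up close to TARGET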


But we already know it's not. Self-training leads to model collapse.

You'd probably have to embody it.

Or simulate embodiment.

Human learning relies on movement. We learn as we navigate the world and interact with objects as noted by Jeff Hawkins in "A Thousand Brains".

It may be that an AI will also require this ability, either via simulation or a robotic interface. Will an AI then also lose its train of thought when it walks through a doorway and its context (reference frame) switches?


> If it is possible that's exciting.

Would it be exciting though? I mean it would certainly excite some people, but I don't know that it would be something to rejoice over.


Why wouldn't it be exciting? It would learn through logic and reason as opposed to our faulty human artifacts. And it wouldn't be limited to what we currently know. A good test would be if it could rediscover mathematics or relativity.

"pre-money valuation" I don't know what that means but it makes me roll my eyes so hard it hurts

Post-Money = Pre-Money + Investment

So pre-money in this case is their valuation even before they've received any investment.
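
A worked example with made-up numbers (these are not the company's actual figures), just to show how the terms relate:

    pre_money = 80_000_000    # what the company is valued at before the round
    investment = 20_000_000   # new cash going in
    post_money = pre_money + investment       # 100,000,000
    investor_stake = investment / post_money  # 0.20 -> the investor owns 20%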


Tell us more about that fraud story! Was the person your attorney or accountant? Or just some "smart" person who decided to wisely save time by doing fraud?

It was a fund administrator. I still find it unbelievable that they would so casually do this. And yes, they thought they were very smart... and helpful too...

Excerpt:

The Phenomenon of "Flaming"

Perhaps the attribute of electronic mail systems that most distinguishes them from other forms of communication is their propensity to evoke emotion in the recipient — very likely because of misinterpretation of some portion of the form or content of the message — and the likelihood that the recipient will then fire off a response that exacerbates the situation.


Especially when the OpenAI definition of AGI is only in financial terms (when it becomes profitable), which can be easily manipulated.

Extremely hard to believe that MSFT would have any hesitancy about working with the US government.

Also it's about OpenAI going public.

Yes! Same logic as the financials, in which the companies pass back and forth the same $200 Billion promissory note.

The AGI talk is shocking but not surprising to anyone looking at how bombastic Sam Altman's public statements are.

The circular economy section really is shocking: OpenAI committing to buy $250 Billion of Azure services, while MSFT's stake in OpenAI is clarified as $132 Billion. Same circular nonsense as NVIDIA and OpenAI passing the same hundred billion back and forth.


Dennis: I think we made every single one of our Paddy's Dollars back, buddy.

Mac: You're damn right. Thus creating the self-sustaining economy we've been looking for.

Dennis: That's right.

Mac: How much fresh cash did we make?

Dennis: Fresh cash! Uh, well, zero. Zero if you're talking about U.S. currency. People didn't really seem interested in spending any of that.

Mac: That's okay. So, uh, when they run out of the booze, they'll come back in and they'll have to buy more Paddy's Dollars. Keepin' it moving.

Dennis: Right. That is assuming, of course, that they will come back here and drink.

Mac: They will! They will because we'll re-distribute these to the Shanties. Thus ensuring them coming back in, keeping the money moving.

Dennis: Well, no, but if we just re-distribute these, people will continue to drink for free.

Mac: Okay...

Dennis: How does this work, Mac?

Mac: The money keeps moving in a circle.

Dennis: But we don't have any money. All we have is this. ... How does this work, dude!?

Mac: I don't know. I thought you knew.


Great scene

You forgot the best line: "I don't know how the US economy works, much less some kind of self-sustaining one".
