Geoffrey Hinton said that the AlphaGo team's breakthrough was getting it to play against itself and improve that way, since it could then go beyond the human training data it had learned from. He said that an equivalent form of self-training for general knowledge would let a superintelligence take off (this is from my memory, not an exact quote).
The TechCrunch article doesn't specify what kind of data a recursively self-improving general AI could use to achieve such a thing. If it is possible, that's exciting. It seems like a real philosophical question: how could a general AI self-train?
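For intuition, here is a toy sketch of the self-play idea. Everything in it is made up for illustration (a one-pile Nim game, a lookup-table policy, hill-climbing against a frozen copy of the current best); it has nothing to do with AlphaGo's actual training, but it shows how a system can improve with no external data beyond the rules of the game:

```python
import random

HEAP = 10          # one-pile Nim: players alternate taking 1-3 sticks;
MOVES = (1, 2, 3)  # whoever takes the last stick wins

def play(first, second, rng):
    """Play one game; return True if the first player wins."""
    heap, turn, players = HEAP, 0, (first, second)
    while True:
        heap -= min(players[turn](heap, rng), heap)
        if heap == 0:
            return turn == 0
        turn = 1 - turn

def table_policy(table):
    """Deterministic policy: read the move for each heap size from a table."""
    return lambda heap, rng: table[heap]

def eps_policy(table, eps=0.2):
    """Same table, but with exploration noise so evaluation games vary."""
    def move(heap, rng):
        return rng.choice(MOVES) if rng.random() < eps else table[heap]
    return move

def win_rate(a, b, rng, games=100):
    return sum(play(a, b, rng) for _ in range(games)) / games

def self_play_train(rounds=200, seed=0):
    """Hill-climb a table policy by pitting mutants against the current best."""
    rng = random.Random(seed)
    best = {h: rng.choice(MOVES) for h in range(1, HEAP + 1)}
    for _ in range(rounds):
        mutant = dict(best)
        mutant[rng.randint(1, HEAP)] = rng.choice(MOVES)  # tweak one entry
        # score the mutant against the current best, from both seats
        as_first = win_rate(eps_policy(mutant), eps_policy(best), rng)
        as_second = 1 - win_rate(eps_policy(best), eps_policy(mutant), rng)
        if (as_first + as_second) / 2 > 0.5:
            best = mutant  # the mutant becomes the new self to play against
    return best
```

The open question in the thread is exactly what breaks when you leave games: Nim supplies a free, perfect reward signal (did you win?), while "general knowledge" has no obvious equivalent to score a mutant against.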
Silver has a couple of recent papers that probably give an idea of what they are up to:
>...Here we show that it is possible for machines to discover a state-of-the-art RL rule that outperforms manually designed rules. This was achieved by meta-learning from the cumulative experiences of a population of agents across a large number of complex environments...

https://www.nature.com/articles/s41586-025-09761-x
Yesterday I watched a video stating that evolutionary algorithms are becoming relevant in machine learning again.
But if you think about our brain: when you learn something new, you play with it, recall it, and challenge the new information. Perhaps we can build something similar, a model adjusting itself until it's right.
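For what it's worth, the evolutionary-algorithm idea fits in a few lines. This is a toy (mu + lambda)-style loop on the classic OneMax problem (maximize the number of 1-bits in a string); every name and parameter here is invented for illustration, not taken from any real system:

```python
import random

def evolve_onemax(n_bits=30, pop_size=20, generations=60, seed=42):
    """Tiny evolutionary loop: select the better half, refill with mutants."""
    rng = random.Random(seed)
    fitness = lambda bits: sum(bits)  # count of 1-bits; 30 is the optimum
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        # keep the better half of the population as parents (elitism)
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        # refill the population with mutated copies of the parents,
        # flipping each bit independently with probability 1/n_bits
        children = [[b ^ (rng.random() < 1 / n_bits) for b in parent]
                    for parent in parents]
        pop = parents + children
    return max(pop, key=fitness)
```

The "adjusting itself until it's right" loop you describe is the same shape: generate variations, keep what scores better, repeat. The hard part, as with self-play, is defining the fitness function for anything less tidy than counting bits.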
Human learning relies on movement. We learn as we navigate the world and interact with objects as noted by Jeff Hawkins in "A Thousand Brains".
It may be that an AI will also require this ability, either via simulation or a robotic interface. Will an AI then also lose its train of thought when it walks through a doorway and its context (reference frame) switches?
Why wouldn't it be exciting? It would learn through logic and reason as opposed to our faulty human artifacts, and it wouldn't be limited to what we currently know. A good test would be whether it could rediscover mathematics or relativity.
Tell us more about that fraud story! Was the person your attorney or accountant? Or just some "smart" person who decided to wisely save time by doing fraud?
It was a fund administrator. I still find it unbelievable that they would so casually do this. And yes, they thought they were very smart... and helpful too...
Perhaps the attribute of electronic mail systems that most distinguishes them from other forms of communication is their propensity to evoke emotion in the recipient — very likely because of misinterpretation of some portion of the form or content of the message — and the likelihood that the recipient will then fire off a response that exacerbates the situation.
The AGI talk is shocking but not surprising to anyone looking at how bombastic Sam Altman's public statements are.
The circular-economy section really is shocking: OpenAI committing to buy $250 billion of Azure services, while MSFT's stake in OpenAI is clarified as $132 billion. The same circular nonsense as NVIDIA and OpenAI passing the same hundred billion back and forth.