I have found it depends on how comfortable the earplugs are. If I feel they are uncomfortable in the ear, there is a good chance I'll get an infection/inflammation in the next few days.
In 2000 I learned about this old technology called "neural networks".
AI progress has always come in long winters punctuated by rare breakthroughs. Deep neural networks were the most recent one.
The iterations you currently see are just adding more storage, but the fundamental neural network structure doesn't change.
I'm confident AGI will not be achieved by the LLM architecture, and when the next AI breakthrough will come is anyone's guess. But if history is any guide, it will take a while.
Yes, same. From the late 90s through the early aughts I was taught over and over and over again that neural networks were a dead-end concept and would never amount to anything.
Just like all the preceding AI booms, this one will hit its peak, the hype train will fizzle, the best parts will just become "normal", and then a couple of decades later something new will come along to push the boundary again.
It depends on the purpose of the model. AFAIK LLMs aren't particularly good at researching answers, relying more on having 'truth' baked into their weights, so if it takes 12 months to train up a crowd-trained LLM, it'll be 12 months behind the times.
How serious a risk are poisoned weights?
Can we leverage the cryptobros into using LLM training as a proof of work?
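Only half-joking, but "proof of useful work" has been floated before. Here's a toy sketch of what it could look like, where the "work" is a gradient step on a shared model and the "proof" is a hash over the updated weights plus a nonce. All the names here are made up, and it ignores the hard part, which is verifying the step was honestly computed:

    import hashlib
    import numpy as np

    def training_step(weights, X, y, lr=0.01):
        # One gradient-descent step on a toy linear regression: the "useful work".
        grad = 2 * X.T @ (X @ weights - y) / len(y)
        return weights - lr * grad

    def prove_work(weights, difficulty=2):
        # Find a nonce so that sha256(weights || nonce) starts with `difficulty` zero bytes.
        payload = weights.tobytes()
        nonce = 0
        while True:
            digest = hashlib.sha256(payload + nonce.to_bytes(8, "big")).digest()
            if digest[:difficulty] == b"\x00" * difficulty:
                return nonce, digest.hex()
            nonce += 1

    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(64, 4)), rng.normal(size=64)
    w = np.zeros(4)
    for step in range(3):
        w = training_step(w, X, y)        # the useful part
        nonce, digest = prove_work(w)     # the proof-of-work part
        print(f"step {step}: nonce={nonce} hash={digest[:16]}...")

The reason schemes like this haven't displaced hash-based PoW is verification: a verifier can recheck a SHA-256 nonce instantly, but checking that a gradient step was actually performed on the agreed data is nearly as expensive as doing the work itself.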
Does Qwen3.5 know it needs to do this because the API in question has had loads of churn and much of its training data is on obsolete versions, or do you need to prompt it? How well does it handle having an API reference with sample code in its context window?
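On the second question, the usual pattern is just to paste the current reference into the prompt and tell the model to prefer it over whatever it memorized. A minimal sketch, assuming an OpenAI-compatible endpoint serving a Qwen model (the endpoint, model name, and the "library docs" below are all placeholders, not real APIs):

    from openai import OpenAI

    # Placeholder endpoint/model; any OpenAI-compatible server works the same way.
    client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

    api_reference = """
    Hypothetical docs for the current (post-churn) version of the library:
      connect(url: str, *, timeout: float = 30.0) -> Session   # replaces old Client(url)
      Session.fetch(path: str) -> Response                      # replaces Session.get()
    """

    response = client.chat.completions.create(
        model="qwen-3.5",  # placeholder name
        messages=[
            {"role": "system",
             "content": "Use ONLY the API reference provided; ignore older versions you may remember."},
            {"role": "user",
             "content": f"API reference:\n{api_reference}\nWrite a snippet that fetches /users with a 10s timeout."},
        ],
    )
    print(response.choices[0].message.content)

Whether the model actually deprioritizes its stale training data still varies by model; an explicit instruction like the system message above helps, but it's not a guarantee.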
Having an LLM use a web search tool isn't the same thing as researching a topic, IMO, because it's so ephemeral and needs constant reinforcement. LLMs aren't learning machines, they're static ones.
I don't think this is giving up. He's getting inside information on how Claude works, and a huge stream of Claude usage data. This will all inform future Grok development, IMO.