Full disclosure: all my published work is on symbolic machine learning (a.k.a. Inductive Logic Programming) :O
I think you're lumping various different things together as "neurosymbolic AI". There is a NeSy symposium and I happen to have met many of the people there, and they are not GOFAI ideologues; rather, they recognise the obvious limitations of neural nets (i.e. they're crap at deduction, though great at induction) and they look for ways to address them. Most of that crowd also has a predominantly statistical ML / neural nets background, with symbolic AI as an afterthought.
I don't think I've ever heard anyone say that "ML is not real AI" and I mainly move in symbolic AI circles. I would check my sources, if I were you.
Anyway, honestly, this is 2026: there is no sensible reason to be polarised about symbolic vs. statistical AI (or whatever distinction anyone wants to make). An analogy I like to make is as follows: a jetliner is a flying machine, a helicopter is a flying machine. Each has its advantages and disadvantages, but a flying machine is far too useful a thing to give up any one kind of for ideological reasons. The practical benefits overwhelmingly outweigh any ideological concerns (e.g. "jets bad" or "propellers bad").
And just to be clear, symbolic AI is still in rude health: automated theorem proving, planning and scheduling, program verification and model checking, constraint satisfaction, discrete optimisation, SAT solving, all those are fields where symbolic approaches are dominant, and where neural nets have not made significant inroads in many decades; nor are they likely to, not any more than symbolic approaches are likely to make any inroads in e.g. machine vision, or speech recognition. And that's just fine: lots of tools, lots of problems solved.
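To make "symbolic approaches" concrete, here's a toy Python sketch, entirely my own and nobody's production code, of DPLL, the backtracking search that sits at the core of SAT solving (real solvers add clause learning, heuristics, and decades of engineering on top):

    from typing import Optional

    # Clauses are lists of nonzero ints: 3 means variable 3, -3 its negation.
    Clause = list[int]
    Assignment = dict[int, bool]

    def dpll(clauses: list[Clause], assignment: Assignment) -> Optional[Assignment]:
        """Plain DPLL backtracking search, the skeleton under modern SAT solvers."""
        simplified = []
        for clause in clauses:
            lits, satisfied = [], False
            for lit in clause:
                var = abs(lit)
                if var in assignment:
                    if assignment[var] == (lit > 0):
                        satisfied = True  # clause already true under assignment
                        break
                else:
                    lits.append(lit)  # literal still undecided
            if satisfied:
                continue
            if not lits:
                return None  # empty clause: contradiction, backtrack
            simplified.append(lits)
        if not simplified:
            return assignment  # every clause satisfied
        for clause in simplified:  # unit propagation: forced assignments first
            if len(clause) == 1:
                lit = clause[0]
                return dpll(simplified, {**assignment, abs(lit): lit > 0})
        var = abs(simplified[0][0])  # branch on the first unassigned variable
        for value in (True, False):
            result = dpll(simplified, {**assignment, var: value})
            if result is not None:
                return result
        return None

    # (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
    print(dpll([[1, 2], [-1, 3], [-2, -3]], {}))  # e.g. {1: True, 3: True, 2: False}

Note the guarantee: when this returns None, the formula is provably unsatisfiable. That exactness is what the fields above rely on, and what neural nets don't provide.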
I don't think symbolic approaches are completely useless. It's just that they're solving yesterday's problems 1.12% better. While ML is cracking open entirely new fields - and might go all the way to AGI, the way it's going now.
One is near the end of its potential, while the other is only picking up steam.
In many ways, the space ML dominates now is the space of "all the things symbolic approaches suck ass at". Which is a very wide space with many desirable things in it.
Well, neural nets do what neural nets do best (not ML in general, which is a broader field), so if a lot of funding is going to neural nets then we'll see a lot of progress on the stuff neural nets are best suited for. No surprise. If Google et al were spending billions on symbolic AI maybe we'd see equally spectacular results there too. Maybe not. But we won't know because they don't.
There's no sense in which symbolic AI is at the end of its life and if you pay close attention you'll see that LLMs are trying to do all the things that symbolic AI is good at: major examples being reasoning, and planning from world models.
And as nextos says in the sibling comment, most of the recent successes of LLMs in tasks that go beyond language generation, e.g. solving math olympiad problems, are the result of combining LLMs with symbolic verifiers.
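The pattern behind that, roughly, is generate-and-verify: the neural model proposes candidates and a symbolic checker accepts or rejects them. Here's a toy Python sketch of the shape of that loop; the task (factoring) and every name in it are my own stand-ins, not how any actual olympiad system works:

    import random
    from typing import Optional

    def propose(n: int) -> tuple[int, int]:
        """Stand-in for the LLM: guess a factorisation of n (pure chance here)."""
        a = random.randint(2, n - 1)
        return a, n // a

    def verify(n: int, candidate: tuple[int, int]) -> bool:
        """Symbolic verifier: an exact check, no statistics involved."""
        a, b = candidate
        return a > 1 and b > 1 and a * b == n

    def solve(n: int, max_attempts: int = 10_000) -> Optional[tuple[int, int]]:
        """Generate-and-verify: sample candidates, keep only certified ones."""
        for _ in range(max_attempts):
            candidate = propose(n)
            if verify(n, candidate):
                return candidate  # provably correct, not merely plausible
        return None

    print(solve(91))  # e.g. (7, 13)

The verifier contributes exactly the deductive part neural nets are weak at: when it says yes, the answer is certified, not merely plausible.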
>> While ML is cracking open entirely new fields - and might go all the way to AGI, the way it's going now.
I don't agree. Everything that neural nets do today (speech recognition, object identification in images, machine translation, language generation, program synthesis, game playing, protein folding, research automation; every single thing, really) is a task that comes from the depths of AI history. There's a big discussion to be had about why those tasks are "AI" tasks in the first place and what they have to do with "intelligence" in the broader sense (e.g. cats are intelligent but they can't generate any sort of text), but this discussion is constantly postponed as we all breathlessly run up the hill that neural nets are climbing. When we get to the top and find it was the wrong hill to climb, maybe we'll have that discussion at last; or maybe the entire industry, academia in tow, will run after the Next Big Thing in AI™ all over again. But cracking open new fields? Nah. Not really.
AGI is not going to happen any time soon though. We have no idea what we're doing in terms of reproducing intelligence, that much is clear.
The whole notion of "we need to know what intelligence is exactly to reproduce it" is completely and utterly wrong.
It's also the kind of thinking that results in "neurosymbolic garbage is good actually".
What neural nets do today is basically "everything humans do". There is no longer a list of "things computers can't do" - just a list of things computers do worse than the top 1% of humans. Ever shrinking.
Well, for example a computer can't make me an omelette. There are tons of examples like that: pretty much everything humans "can do" with our bodies, computers can't, and not just because they don't have bodies, but because even when we give them bodies we can't program them to do the things we want them to. LLMs don't help at all here. They can easily fake knowing what to do, but the (not few) attempts people have made to connect LLMs to a robot, to get the LLM to drive the robot like a little AI brain, have... not really worked out? I guess? Not even self-driving cars use LLMs.
Speaking of self-driving cars' AIs, while they have plenty of machine learning components, e.g. for vision, SLAM, and so on, they are largely hand-coded, rule-based systems. Just like the good old days of GOFAI.
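To caricature that division of labour in a few lines of Python (all labels, thresholds and names here are invented for illustration, not taken from any real driving stack): a learned perception module emits detections with confidences, and a hand-written rule layer turns them into decisions.

    from dataclasses import dataclass

    @dataclass
    class Detection:
        label: str         # e.g. "pedestrian", "red_light"
        confidence: float  # from the learned perception stack
        distance_m: float

    def decide(detections: list[Detection], speed_kmh: float) -> str:
        """Rule-based decision layer: explicit, ordered priorities."""
        for d in detections:  # highest priority first
            if d.label == "pedestrian" and d.confidence > 0.5 and d.distance_m < 30:
                return "emergency_brake"
        for d in detections:
            if d.label == "red_light" and d.confidence > 0.7 and d.distance_m < 80:
                return "brake"
        if speed_kmh > 50:
            return "coast"
        return "maintain_speed"

    print(decide([Detection("red_light", 0.9, 40.0)], speed_kmh=45.0))  # "brake"

The rule layer is the GOFAI bit: explicit, auditable, debuggable, which matters rather a lot when the output is "brake" or "don't".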