
I would doubt it. They are mostly trained on natural language. They may be getting some visual reasoning capability from multi-modal training on video, but their reasoning doesn't seem to generalize much from one domain to another.

Some future AGI, not LLM-based, that learns from its own experience via sensory feedback (and has non-symbolic feedback paths) would presumably learn at least some non-symbolic reasoning, however effective that may be.

My argument for this is mostly that we don't use language for all forms of reasoning; we are likely operating on some internal representations or embeddings instead. Animals also demonstrate the ability to reason about situations without having a language at all.

I see language more as a protocol for inter-agent communication (including human-human communication), but it carries a lot of inefficiency and historical baggage and is not necessarily the optimal representation of ideas within a brain.



