Agree. LLMs operate in the domain of language and symbols, but the universe contains much more than that. Humans also learn a great deal from direct phenomenological experience of the world, even when those experiences are never put into words. I remember a talk by Yann LeCun where he pointed out that in just the first few years of life, a human child takes in orders of magnitude more raw sensory data (vision, sound, touch, etc.) than the text current LLMs are typically trained on. That gap seems like a major limitation of purely language-based models.