> LLMs just spew words. It just so happens that human beings can decode them into something related, useful, and meaningful surprisingly often.

This sentence is inherently contradictory. If LLM output is meaningful more often than chance would predict, then it is literally not "just spewing words". Whatever model it uses to generate that meaning must therefore contain some semantic content, even if that content is not as rich as what humans are capable of. The "stochastic parrot" label is thus silly.

It's a sufficiently large-N Shannonizer. Nothing more.
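
For anyone who hasn't run into the term: a "Shannonizer" is the n-gram text generator from Shannon's 1948 paper. You count which word tends to follow the previous N-1 words in some corpus, then sample from those counts to produce new text. Roughly something like this toy sketch (Python, names mine, purely illustrative, not a claim about how LLMs are actually implemented):

```python
import random
from collections import defaultdict

def build_model(words, n):
    """Count which word follows each (n-1)-word context in the corpus."""
    model = defaultdict(list)
    for i in range(len(words) - n + 1):
        context = tuple(words[i:i + n - 1])
        model[context].append(words[i + n - 1])
    return model

def shannonize(words, n=3, length=30):
    """Generate text by sampling the next word given the last n-1 words."""
    model = build_model(words, n)
    out = list(random.choice(list(model.keys())))
    for _ in range(length):
        followers = model.get(tuple(out[-(n - 1):]))
        if not followers:  # dead end: this context never appeared in the corpus
            break
        out.append(random.choice(followers))
    return " ".join(out)

# toy usage
corpus = "the cat sat on the mat and the dog sat on the rug".split()
print(shannonize(corpus, n=3))
```

The claim, as I read it, is that an LLM is this same trick scaled up until the "counts" cover effectively the whole internet.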

So are we, maybe.

Definitely true at some level:

https://en.wikipedia.org/wiki/Predictive_coding