Not OP, but I think the argument here would be not that LLMs "are not smart" but that smart is just the wrong category of thing to describe an LLM as.

A calculator can do very complex sums very quickly, but we don't tend to call it "smart" because we don't think it's operating intelligently according to some internal model of the world. I think the "LLMs are AGI" crowd would say that LLMs are doing exactly that, but it's perfectly consistent to find the output of LLMs consistent/impressive/useful while still maintaining that they aren't "smart" in any meaningful way.

> "we don't think it's operating intelligently according to some internal model of the world"

Okay, but then you have to actually address why you think LLMs lack an "internal model of the world".

You can train one on 1930s text and then teach it Python in-context.

They've produced multiple novel mathematical proofs now; Terence Tao is impressed with them as research assistants.

You can ask them questions about the world, and they'll very clearly produce answers that match what you'd get from a "model" of the world.

What are weights, if not a model of the world? It's got a very skewed perspective, certainly, since it's terminally online and has never touched grass, but it still very clearly has a model of the world.

I'd dare say it's probably a more accurate model than the average person has, too, thanks to having Wikipedia and such baked in.