It is magical thinking to claim that LLMs are definitely, physically incapable of thinking. You don't know that. No one knows that, since such large neural networks are opaque black boxes that resist interpretation, and we don't really understand how they function internally.

You're just repeating that claim because you read it somewhere else. Like a stochastic parrot. Quite ironic. ;)