Some people are still stuck in the “stochastic parrot” phase and see everything regarding LLMs through that lens.

Current LLMs do not think. Just because models label the repetitive actions they loop through in anthropomorphic terms does not mean they are truly thinking or reasoning.

On the flip side, the idea that they do think has been a very successful indirect marketing campaign.

What does “truly thinking or reasoning” even mean to you?

I don’t think we even have a coherent definition of human intelligence, let alone non-human intelligence.

Everyone knows that to really think you need to use your fleshy meat brain; everything else is cheating.