When OP wrote about LLMs "thinking", they implied that LLMs have an internal, self-reflecting conceptual state. They don't; they *are* merely next-token-predicting statistical machines.
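For anyone unfamiliar with what "next token prediction" means concretely, here's a minimal sketch of the inference loop (using GPT-2 via the Hugging Face transformers library as a stand-in; any causal LM works the same way). The whole process is: score the vocabulary, sample one token, append, repeat.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

ids = tokenizer("The capital of France is", return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(10):
        logits = model(ids).logits                    # scores over the whole vocabulary
        probs = torch.softmax(logits[0, -1], dim=-1)  # distribution over the NEXT token only
        next_id = torch.multinomial(probs, 1)         # sample one token from it
        ids = torch.cat([ids, next_id.unsqueeze(0)], dim=1)  # append and repeat

print(tokenizer.decode(ids[0]))
```

There's no persistent inner state carried between steps beyond the token sequence itself: each iteration is just "given everything so far, what token comes next?"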

This was true in 2023.

And it still is today.