All this, and yet, people are so angered by the term "stochastic parrot".
I use LLMs every day. I use Claude and Gemini, and they're great. But they are very elaborate autocomplete engines, and daily use hasn't shaken off that impression of them.
It's weird. It's literally what they are. It's a gigantic mathematical function that takes input and assigns probabilities to tokens.
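That "function from input to token probabilities" view can be sketched in a few lines. This is a toy illustration only: the bigram table and the function names are invented for the example, and a real LLM replaces the lookup table with billions of learned parameters, but the interface is the same shape: context in, probability distribution over next tokens out, repeated in a loop.

```python
# Hypothetical toy "language model": maps a context to next-token probabilities.
# The bigram table below is made up purely for illustration.
BIGRAMS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 1.0},
}

def next_token_probs(tokens):
    """Return a probability distribution over the next token, given context."""
    return BIGRAMS.get(tokens[-1], {})

def autocomplete(prompt, steps):
    """Greedy decoding: repeatedly append the most probable next token."""
    tokens = prompt.split()
    for _ in range(steps):
        probs = next_token_probs(tokens)
        if not probs:
            break  # no continuation known for this context
        tokens.append(max(probs, key=probs.get))
    return " ".join(tokens)

print(autocomplete("the", 3))  # → "the cat sat down"
```

Swapping greedy `max` for sampling from the distribution is what makes real models "stochastic"; the autocomplete loop itself is unchanged.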
Maybe they can also be smart. I'm skeptical that the current LLM approach can lead to human-level intelligence, but I'm not ruling it out. If it did, then you'd have human-level intelligence in a very elaborate autocomplete. The two things aren't mutually exclusive.
People are hung up on what they “really” are. I think it matters more how they interact with the world. It doesn’t matter whether they are really intelligent or not, if they act as if they are.