I said pretty much this and got major downvotes…

Because it's an outmoded cliché that never held much philosophical weight to begin with and doesn't usefully advance the discussion. "It's a stochastic parrot" is not a useful predictor of actual LLM capabilities and never was. Last year someone posted on HN a log of GPT-5 reverse-engineering some tricky assembly code, a challenge set by another commenter as an example of "something LLMs could never do". But here we are, a year later, still wading through people who cannot accept that LLMs can, in a meaningful sense, "compute".

It's an entirely useful discussion, because as soon as you forget that it's not really having a conversation with you, you take a deep dive into the delusion that you're talking to a smart robot, ignoring the fact that these smart robots were trained on a pile of mostly garbage. When I have a conversation with another human, I'm not expecting them to brute-force an answer to the topic. As soon as people forget that LLMs are just brute-forcing token by token, they start living in fantasy land. The whole "it's not a stochastic parrot" pushback is just "you're holding it wrong".

It's not that LLMs are stochastic parrots and humans are not. It's that many humans often sail through conversations stochastic-parroting because they're mentally tired and "phoning it in" - so there are times when talking to the LLM, which has a higher level of knowledge, feels more fruitful on a topic than talking to a human who doesn't have the bandwidth to give you their full attention, and also lacks the depth and breadth of knowledge. I can go deep on many topics with LLMs that most humans can't or won't keep up on. In the end, I'm really only talking to myself most of the time in either case, but the LLM is a more capable echo, and it doesn't tire of talking about any topic - it can dive deep into complex details, and catching its hallucinations is an exercise in itself.

No. It's quite a useful thing to understand. So, what, you'd have us believe it is a sentient, thinking kind of digital organism, and not believe that it is exactly what it is? Being wrong, and unimaginative, about what can be achieved with such a "parrot" is not the same as being wrong about it being a word predictor. If you don't believe it, you can ask an LLM and it will even "admit" this fact. I do agree that it has come to be considered outmoded to question anything about the current AI orthodoxy.

People get upset hearing that LLMs aren't sentient, for some reason. Expect to be downvoted; that's okay.

First off, "not adequately described as a mere token-predictor" and "not sentient" are entirely separate things.

I can't speak for anyone else, but what I feel when I read yet another glib "it's just a stochastic parrot, of course it isn't doing anything that deserves to be called reasoning" take is much closer to boredom than to being upset.

Today's LLMs are, in some sense, "just predicting tokens". Likewise, human brains are, in some sense, "just shuttling neurotransmitters and electrical impulses around". Neither description tells you what the thing can actually do. To figure that out, you have to look at what it can do.
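
To be concrete about what "just predicting tokens" means mechanically, here is a toy sketch. The probability table is invented for illustration; a real LLM computes that distribution with an enormous network conditioned on the whole context, which is exactly where the interesting behavior lives:

    # Toy next-token sampler. The probabilities are made up; only the
    # sampling loop resembles what an LLM actually does.
    import random

    FAKE_NEXT_TOKEN_PROBS = {
        ("the", "cat"): {"sat": 0.6, "ran": 0.3, "slept": 0.1},
        ("cat", "sat"): {"on": 0.8, "quietly": 0.2},
        ("sat", "on"): {"the": 0.9, "a": 0.1},
        ("on", "the"): {"mat": 0.7, "roof": 0.3},
    }

    def sample_next(context):
        """Draw the next token from (a stub for) the model's distribution."""
        probs = FAKE_NEXT_TOKEN_PROBS.get(tuple(context[-2:]))
        if probs is None:
            return None  # a real model always has a distribution to sample
        tokens, weights = zip(*probs.items())
        return random.choices(tokens, weights=weights)[0]

    tokens = ["the", "cat"]
    while (nxt := sample_next(tokens)) is not None:
        tokens.append(nxt)
    print(" ".join(tokens))  # e.g. "the cat sat on the mat"

The sampling loop is trivially simple either way; "it just predicts the next token" describes the loop, not the network that fills in the probabilities.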

Today's best LLMs can do about as well as the best humans on problems from the International Mathematical Olympiad and occasionally solve easyish actual mathematical research problems. They write code about as well as a junior software developer (better in some ways, worse in others) but much faster. They write prose about as well as an average educated person (but with some annoying quirks that are annoying mostly because they are the same quirks over and over again).

If it pleases you to call those things "thinking" then you can. If it pleases you to call them "stochastic parroting" then you can. They are the same things either way. They are not, on the face of it, very much like "just repeating things the machine has already seen", or at least not more like that than a lot of things intelligent human beings do that we don't usually describe that way.

If you want to know whether an LLM can do some particular thing -- do your job well enough for your boss to fire you, write advertising copy that will successfully sell products, exterminate the human race, whatever -- then it's not enough to say "it's just remixing what it's seen on the internet, therefore it can't do X" unless you also have good reason to believe that that thing can't be done by just "remixing what's on the internet" (in whatever sense of "remixing" the LLM is doing that). And it's turning out that lots of things can be done that way that you absolutely wouldn't have predicted five years ago could be done that way.

It seems to me that this should make us very cautious about saying "they can't do X because all they can do is regurgitate a combination of things they've seen in training".

(My own view, not that there's any reason why anyone should care what I-in-particular think, is a combination of "what they're doing is less parroting than you might have thought" and "you can do more by parroting than you might have thought".)

So, anyway, this particular instance of the stochastic-parrot argument started when someone said: of course the AIs are yes-men, because figuring out when to agree and when not to requires actual logic and thought and the LLMs don't have either of those things.

Is it really clear that deciding whether or not to agree when someone says "I think maybe I should break up with my girlfriend" or "I've got this amazing new theory of physics that the establishment is stupidly dismissing" requires more logic and thought than, say, gold-medal performance on IMO problems? It certainly isn't clear to me. Having done a couple of International Mathematical Olympiads myself in my tragically unmisspent youth, I can assure you that solving their problems requires quite a bit of logic and thought, at least for humans. It may well be harder to give a good answer to "should I leave my job?", but it's not exactly "logic and thought" that it needs more of.

Someone reported that Claude is much less yes-man-ish than Gemini and ChatGPT. I don't know whether that's true (though it wouldn't surprise me) but: suppose it is; do you want that to oblige you to say that yes, actually, Claude really thinks logically, unlike Gemini and ChatGPT? I don't think you do. And if not, you want to avoid saying "duh, of course, you can't avoid being a yes-man without actually thinking and reasoning, and we all know that LLMs can't do those things".

I won't touch how profoundly I disagree with everything you said on reasoning (you clearly already have it figured out), but a fun test I have done with most of the big models is to give the model some text input, maybe a short story, and have it rate it. That is, the prompt is: rate this from 1-10. (A rough harness for reproducing this is sketched at the end of this comment.)

For Gemini and GPT, it will almost always give very similar scores for everything. As long as the grammar isn't off, you cannot get below a 7.

xAI, on the other hand, will rarely give anything above a 7.

Now when you prompt with "rate 1-10 with 5 being average", all of a sudden the scores from OpenAI and Gemini drop, while xAI's remain roughly the same.

All of them will eventually give you a 10 if you keep making tiny edits "fixing" whatever they complain about.

Humans do not do this. Or, more specifically, that has not been my experience with humans.
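
For anyone who wants to try the rating test themselves, here is a rough harness. It assumes the openai Python SDK and an OpenAI-compatible chat endpoint (xAI and Gemini both advertise compatible endpoints you can reach by changing base_url); the model name, the repetition count, and the "reply with just the number" instruction are my additions, not part of the original test:

    # Rough harness for the 1-10 rating experiment described above.
    # Assumes an OpenAI-compatible chat API; change base_url/api_key/
    # model to point at a different provider. The model name below is
    # a placeholder, not a recommendation.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    STORY = "..."  # paste the short story you want rated here

    PROMPTS = {
        "bare": "Rate this from 1-10. Reply with just the number.\n\n",
        "anchored": ("Rate this from 1-10, with 5 being average. "
                     "Reply with just the number.\n\n"),
    }

    def get_rating(prompt_name, model="gpt-4o-mini"):
        """Send one rating prompt and return the model's raw reply."""
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user",
                       "content": PROMPTS[prompt_name] + STORY}],
        )
        return resp.choices[0].message.content.strip()

    for name in PROMPTS:
        # Sample each variant a few times; run-to-run variance is
        # part of what you're measuring.
        scores = [get_rating(name) for _ in range(5)]
        print(name, scores)

Swapping base_url and model between providers and comparing the "bare" and "anchored" score distributions should reproduce (or refute) the pattern described above.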