This argument isn't particularly compelling in my opinion.

I don't actually like the stochastic parrot argument either, to be fair.

I feel like the author is ignoring the various knobs (randomization factors might be a better term) applied to the models during inference, which are tuned specifically to make the output more believable or appealing.

Turn the knobs too far and the output is unintelligible garbage. Don't turn them far enough and the output feels very robotic or mathematical; it's obvious that it isn't human. The other risk of not turning the knobs far enough is copyright infringement, since low randomness makes it more likely the model reproduces training data verbatim, but I don't know how often that happens in practice.
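For the curious: the best-known of these knobs is sampling temperature. A minimal sketch of how it works (the function name and logit values are illustrative, not from any particular model):

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0):
    """Draw one token index from a temperature-scaled softmax distribution.

    High temperature flattens the distribution (more surprising output);
    low temperature sharpens it toward the single most likely token
    (more deterministic, 'robotic' output).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling: walk the cumulative distribution.
    r = random.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1
```

At very low temperature this almost always returns the argmax token; at high temperature it approaches a uniform draw, which is where the "unintelligible garbage" comes from. Real inference stacks layer more knobs on top (top-k, top-p, repetition penalties), but they all reshape this same distribution before sampling.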

Claiming that LLMs aren't stochastic parrots without dealing with the fact that we forced randomization factors into the mix misses a huge potential argument that they are just cleverly disguised stochastic parrots.

This seems like it was inevitable. Most people do not understand the meaning of the word "stochastic" and so they're likely to simply ignore it in favour of reading the term as "_____ parrot."

What you have described, a probability distribution with carefully-tuned parameters, is perfectly captured by the word stochastic as it's commonly used by statisticians.

Human brains are similarly finely tuned and have similar knobs, it seems to me. People with no short term memory have the same conversations over and over again. Drunk people tend to be very predictable. There are circuits that give us an overwhelming sense of impending doom, or euphoria, or the conviction that our loved ones have been replaced by imposters. LLMs with very perturbed samplers bear, sometimes, a striking resemblance to people on certain mind-altering substances.

And that's really the core of the problem: we don't understand well how the human mind works, and we can't really define or identify "intelligence."

I mentioned I don't like the stochastic parrot argument, and that I find this article's argument lacking. Both for the same reason: they make claims we simply can't make while lacking a fundamental understanding of what intelligence really is and how human (and other animal) brains work.

Yes, this really seems like an argument between two contrived straw people at the absolute extremes.