> read much more research about LLMs than any human
How long an LLM's response is will depend entirely on the system prompt and the model itself. You can read all of the "LLM research" in the world and it won't give you a correct generalized answer about this topic. It's not like this is some inherent property of LLMs.
FWIW, they also wrote down something so obvious you don't have to know much about LLMs to know it's true. Even people in the "stochastic parrot" / "glorified Markov chain" / "regurgitation machine" camps should be on the same page - LLMs are trained on human communication, and in human communication, longer queries, good manners, and correct grammar are associated with longer, more correct, higher-quality responses; conversely, shitposting is associated with shitposts in reply.
That much is, again, obvious. My previous comment was addressing your ridiculing the notion of discussing LLMs with LLMs, which was a fair reaction back in GPT-3.5 era, but not so today.
And yet what you are saying just isn't true in my experience.
I use speech-to-text with Claude Code and other LLMs, often with terrible grammar and lots of typos, and it never affects the output. If I went by what you're saying, the code it outputs should come out sloppier. Also, the length of a response depends entirely on what I'm using: ChatGPT always gives me a long response no matter what I ask it, and the Claude app always gives short responses unless I specifically ask for something longer. That comes down to how they're instructed, and is not inherent to LLMs.
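To make the "it's the instructions, not the model" point concrete, here's a minimal sketch of how verbosity is typically steered in practice, assuming an OpenAI-compatible chat payload shape. The model name and prompts are illustrative placeholders, not anything from this thread, and no API call is actually made:

```python
# Sketch: same user message, same model; only the system prompt differs.
# This is what chat apps like ChatGPT and the Claude app do behind the
# scenes - the product injects its own instructions before your message.

def build_request(system_prompt: str, user_prompt: str) -> dict:
    """Build a chat-completion style payload (OpenAI-compatible shape)."""
    return {
        "model": "example-model",  # placeholder, not a real model name
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }

terse = build_request(
    "Answer in at most two sentences.",
    "Why is the sky blue?",
)
verbose = build_request(
    "Give thorough, detailed explanations with examples.",
    "Why is the sky blue?",
)

# The user turn is identical in both; only the system instruction changed,
# which is what drives the long-vs-short behavior users observe.
assert terse["messages"][1] == verbose["messages"][1]
assert terse["messages"][0] != verbose["messages"][0]
```

The asserts just make the point explicit: the user's typo-ridden or pristine message is the same in both payloads, and the length of the reply is being pushed around by the hidden system turn.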