> There is actual research suggesting concise prompting can reduce response length substantially without always wrecking quality,
Anecdote: I discussed that with an LLM once, and it explained that LLMs tend to respond to terse questions with terse answers because that's what humans (i.e. their training data) tend to do. Similarly, it explained that polite requests tend to elicit responses with _more_ information than strictly required, because that's how humans in the training data tend to respond.
TL;DR: how they are asked questions influences how they respond, even when the facts in the differing responses don't materially differ.
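A quick way to sanity-check this yourself: send the same question tersely and politely and compare lengths. A minimal sketch, assuming the OpenAI Python SDK; the model name is a placeholder, not a recommendation:

    # Compare response length for a terse vs. a polite phrasing of one question.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    prompts = [
        "threads vs processes?",  # terse
        "Could you please explain the difference between threads and processes? Thanks!",  # polite
    ]

    for p in prompts:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumption: any chat model works here
            messages=[{"role": "user", "content": p}],
        )
        print(len(resp.choices[0].message.content), "chars for:", p[:40])

Run it a few times; sampling noise is real, so look at the trend, not a single pair.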
(Edit: Seriously, I do not understand the continued downvoting of completely topical responses. It's gotten so bad that I have little choice but to assume it's a personal vendetta.)
LLMs don't understand what they are doing and can't explain it to you; they're just generating a reasonable-sounding response.
But that response is grounded in the training data they've seen, so it's not entirely unreasonable to think their answer might provide actual insights, not just statistical parroting.
What do you mean? It is grounded in the text it was fed; it said that because humans have said that (or something similar), not because it analyzed a lot of information about LLMs and came up with that answer itself.
LLM can "think" but that requires a lot of tokens to do, all quick answers are just human answers or answers it was fed with some basic pattern matching / interpolation.
There's nothing "basic" about the several months of training used to create a frontier model.
That's a very pedantic response; either way, the model cannot see or analyze its training data when it responds.
They have some ability to introspect; you could also give them tools to do it.
https://www.anthropic.com/research/introspection
> i discussed that with an LLM once and it explained to me that LLMs...
Do you have any idea how dumb this sounds?
Do you? I have the same knee-jerk reaction, but if you think about it for more than 2 seconds: LLMs at this point have, through training, read much more research about LLMs than any human has, so it's actually not a dumb thing to do. The knowledge may not be very current, though.
> read much more research about LLMs than any human
How long an LLM's response is depends entirely on the system prompt and the model itself. You can read all the "LLM research" in the world and it won't give you a correct generalized answer about this topic. It's not like this is some inherent property of LLMs.
FWIW, they also wrote down something so obvious you don't have to know much about LLMs to know it's true. Even people in the "stochastic parrot" / "glorified Markov chain" / "regurgitation machine" camps should be on the same page here: LLMs are trained on human communication, and in human communication, longer queries, good manners, and correct grammar are associated with longer, more correct, higher-quality responses; conversely, shitposting is associated with shitposts in reply.
That much is, again, obvious. My previous comment was addressing your ridiculing of the notion of discussing LLMs with LLMs, which was a fair reaction back in the GPT-3.5 era, but not so much today.
And yet what you are saying just isn't true in my experience.
I use speech-to-text with Claude Code and other LLMs, often with terrible grammar and lots of typos, and it never affects the output. If what you're saying were true, the code it outputs should be sloppier. Also, the length of a response depends entirely on what I'm using: ChatGPT always gives me a long response no matter what I ask, while the Claude app always gives short responses unless I specifically ask for something longer. That comes from how they are instructed, and is not inherent to LLMs.
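To make the "it's the instructions, not the model" point concrete: same model, same question, only the system prompt changes. A sketch assuming the OpenAI Python SDK; the model name and system prompts are placeholders, not anyone's actual defaults:

    # Same question under two system prompts: the system prompt dominates length.
    from openai import OpenAI

    client = OpenAI()
    question = "What is a mutex?"

    styles = [
        "Answer in one short sentence.",
        "Answer thoroughly, with examples and caveats.",
    ]

    for style in styles:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumption
            messages=[
                {"role": "system", "content": style},
                {"role": "user", "content": question},
            ],
        )
        print(len(resp.choices[0].message.content), "chars under:", style)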
This continual downvoting is not a personal thing, for sure. Perhaps there are crawlers that pretend to be human, or fully automated LLM commenters that also randomly downvote.
Instead of conspiracy theories, don't you think it's more likely that people were just downvoting a stupid comment?