But that response is grounded in the training data they've seen, so it's not entirely unreasonable to think their answer might provide actual insights, not just statistical parroting.
What do you mean? It is grounded in the text it was fed; the reason it said that is that humans have said that, or something similar to it, not because it analyzed a lot of LLM information and thought up that answer itself.
LLMs can "think", but that requires a lot of tokens to do; all quick answers are just human answers, or answers they were fed, reproduced with some basic pattern matching / interpolation.
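To make the thinking-costs-tokens point concrete, here is a minimal sketch assuming the Anthropic Python SDK with extended thinking enabled; the model id, token budgets, and example question are placeholder assumptions, not anything from this thread:

    # Sketch of "thinking costs tokens", assuming the Anthropic Python SDK.
    # Model id and budgets are placeholder assumptions.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    question = "Why might an LLM prefer an answer it cannot justify?"

    # Quick answer: no scratch space, so the reply leans on pattern
    # matching over what the training text already says.
    quick = client.messages.create(
        model="claude-sonnet-4-20250514",  # assumed model id
        max_tokens=256,
        messages=[{"role": "user", "content": question}],
    )
    print("[quick]", quick.content[0].text)

    # "Thinking" answer: a token budget gives the model room to work
    # through the question before committing to a reply.
    slow = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=8000,
        thinking={"type": "enabled", "budget_tokens": 4000},
        messages=[{"role": "user", "content": question}],
    )

    # The slow response interleaves "thinking" blocks with the final text.
    for block in slow.content:
        if block.type == "thinking":
            print("[thinking]", block.thinking[:200])
        elif block.type == "text":
            print("[answer]", block.text)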
There's nothing "basic" about the several months of training used to create a frontier model.
That's a very pedantic response because either way the model cannot see or analyze the training data when it responds.
They have some ability; also, you could give them tools to do it.
https://www.anthropic.com/research/introspection
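On giving them tools: a minimal sketch of what that could look like with the Anthropic messages API's tool-use support. The search_corpus tool and whatever corpus backs it are hypothetical, and the model id is an assumption; nothing here grants access to the actual training data, only to whatever text you wire up.

    # Hedged sketch: letting a model inspect a corpus via a tool, using
    # the Anthropic Python SDK's tool-use support. The search_corpus tool
    # and its backing corpus are hypothetical; the model id is assumed.
    import anthropic

    client = anthropic.Anthropic()

    tools = [{
        "name": "search_corpus",  # hypothetical tool name
        "description": "Search a text corpus and return matching passages.",
        "input_schema": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    }]

    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # assumed model id
        max_tokens=1024,
        tools=tools,
        messages=[{
            "role": "user",
            "content": "How often does the corpus claim LLMs can't introspect?",
        }],
    )

    # If the model decides to call the tool, the stop reason is "tool_use"
    # and the request shows up as a tool_use content block.
    if response.stop_reason == "tool_use":
        for block in response.content:
            if block.type == "tool_use":
                print("model asked to run:", block.name, block.input)
                # A real loop would execute the search here and send the
                # result back as a tool_result block in a follow-up message.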