I got a chuckle the last time I used Claude's /insights command. The number one thing in the report was, "User frequently stops processing to provide corrections." ;-)
Trouble is, an LLM can test whether something is logical in isolation, or coherent unto itself. It’s much weaker at anticipating what will be meaningful to other people, which is usually what people are actually looking for.
I just tell a new instance from a different provider the core idea and see if they like it too.