> Can I tell you one more thing from your X,Y,Z results, which most doctors miss?
I absolutely hate this influencer-ish behavior. If there's something most people miss, just state it. That's why I'm using the assistant.
This form of dialogue is a big part of why I use GPT less now.
> If there's something most people miss, just state it.
But the LLM suggesting a question doesn't mean it has a good answer to converge to.
If you actually ask, the next-token probabilities will pressure the model to come up with something, anything, to follow up on its offer, and that will be nonsense if there was actually nothing else to add.
I've seen this pattern fail a lot on roleplay (e.g. AI Dungeon) so I really dislike it when LLMs end with a question. A "sufficiently smart LLM" would have enough foresight to know it's writing itself into a dead end.
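To make the "pressured to come up with something" point concrete: samplers draw from a softmax distribution over next tokens, and that distribution always sums to 1, so some continuation is always emitted even when no option stands out. A minimal sketch with made-up logits (the scores and the "filler continuation" framing are illustrative, not from any real model):

```python
import math

def softmax(logits):
    """Convert raw scores to a probability distribution that always sums to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token scores for a model with nothing substantive left
# to say: near-uniform logits over a few filler continuations.
flat_logits = [0.1, 0.05, 0.0, -0.05]
probs = softmax(flat_logits)

# The distribution still sums to 1, so sampling is forced to pick *something*;
# there is no outcome that represents "I have no good follow-up".
assert abs(sum(probs) - 1.0) < 1e-9
assert all(p > 0 for p in probs)
```

That is the mechanism behind the failure: having offered a follow-up, the model must emit a continuation, even when every candidate is filler.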
You should be careful with ideas like "sufficiently smart LLM" - quotes and all. There's no intelligence here, just next-token prediction. And the idea of an LLM being self-aware is ludicrous. Ask one what the difference between hallucinating and lying is, and you'll get a list like this explaining why the LLM isn't lying:
- No intent, beliefs, or awareness
- No concept of “knowing” truth vs. falsehood
- A byproduct of how it predicts text based on patterns
- Arises from probabilistic text generation
- A model fills gaps when it lacks reliable knowledge
- Errors often look confident because the system optimizes for fluency, not truth
- Produces outputs that statistically resemble true statements
- Not an agent, no moral responsibility
- Lacks “commitment” to a claim unless specifically designed to track it
It was just a reference to the mythical "sufficiently smart compiler". The point is that, in practice, it doesn't exist.
https://wiki.c2.com/?SufficientlySmartCompiler