You should be careful with ideas like "sufficiently smart LLM" - quotes and all. There's no intelligence here, just next-token prediction. And the idea of an LLM being self-aware is ludicrous. Ask one what the difference between hallucination and lying is, and you'll get a list like this one explaining why the LLM isn't lying:

- No intent, beliefs, or awareness

- No concept of “knowing” truth vs. falsehood

- A byproduct of how it predicts text based on patterns

- Arises from probabilistic text generation (see the sketch after this list)

- A model fills gaps when it lacks reliable knowledge

- Errors often look confident because the system optimizes for fluency, not truth

- Produces outputs that statistically resemble true statements

- Not an agent, no moral responsibility

- Lacks “commitment” to a claim unless specifically designed to track it

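To make the "probabilistic text generation" point concrete, here's a minimal sketch in plain Python. The probability table is invented for illustration (a real model derives these numbers from learned parameters over a huge vocabulary), but the selection step is the same basic idea: sample the next token by probability, with no truth check anywhere in the loop.

```python
import random

# Toy next-token distribution for a prompt like "The first Moon landing
# happened". The numbers are made up for this example; a real LLM computes
# them from billions of parameters, but it samples the same way.
next_token_probs = {
    "in 1969.": 0.40,  # fluent and true
    "in 1971.": 0.35,  # fluent but false
    "on Mars.": 0.20,  # fluent but false
    "banana":   0.05,  # disfluent; training pushes its probability down
}

def sample_next_token(probs):
    """Pick one continuation, weighted by probability. Nothing here
    checks whether the chosen token makes the sentence true."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

prompt = "The first Moon landing happened"
print(prompt, sample_next_token(next_token_probs))
```

With these toy weights, a majority of runs complete the sentence with something false, delivered exactly as fluently as the true completion - which is the "optimizes for fluency, not truth" point in miniature.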
It was just a reference to the mythical "sufficiently smart compiler". The point is that, in practice, it doesn't exist.

https://wiki.c2.com/?SufficientlySmartCompiler