>> The whole point is that you can't 100% trust the LLM to infer your intent with accuracy from lossy natural language.
You can't 100% trust a human either.
But, as with self-driving, the LLM only needs to be better than the human; it does not need to be perfect.
> You can't 100% trust a human either.
We do have a system of checks and balances that does a reasonable job of it. Not everyone in a position of power is willing to burn their reputation and land in jail. You don't check your food at a restaurant for poison, nor test the gas you put in your tank, but you would if the cook or the fuel supplier were as reliable as current LLMs.
Good analogy