> Certainly if we ever ask a hallucination engine for a numeric answer, we should ask it at least three times, so we get some sense of the variation.
This works on people as well!
Cops do this when interrogating: they have you tell the same story three times, sometimes backwards. It's hard to keep everything straight if you're lying or don't recall clearly, so the questioner gets a sense of how much to trust the account. It works in interviews too: ask candidates to explain a subject in three different ways to see if they truly understand it.
It works to confuse people and make them sound like they’re lying when they’re not, too. Gotta be careful with this.
If you're trying to hit quotas rather than find out the truth, that sounds like a feature.
> This works on people as well!
Only under certain conditions or thresholds that we're still figuring out. In many cases, the more someone recalls and retells a memory, the more its details get corrupted.
> Cops do this when interrogating.
Sometimes that's not to "get some sense of the variation" but to deliberately provoke a contradiction and then pounce on it. Ask me my own birthday enough times, in enough ways and formats, and eventually I'll say something incorrect.
Care must also be taken to ensure that the questioner doesn't change the details, such as by encouraging (or sometimes forcing) the witness/suspect to imagine things which didn't happen.
Triple modular redundancy. I remember reading that's how NASA's Space Shuttle computers calculated things, because a processor or memory might have been affected by space radiation: https://llis.nasa.gov/lesson/18803
Triple redundancy works because you know that under nominal conditions each computer would produce the correct result independently. If 2 out of 3 computers agree, you have high confidence that they are correct and the 3rd one isn’t.
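The voting step itself is tiny. A minimal sketch in Python (illustrative only, not NASA's actual implementation):

```python
from collections import Counter

def tmr_vote(a, b, c):
    """2-of-3 majority vote: return the value at least two units agree on."""
    value, count = Counter([a, b, c]).most_common(1)[0]
    if count >= 2:
        return value
    raise RuntimeError("no majority: all three units disagree")

# A radiation-flipped result in one unit is outvoted by the other two:
assert tmr_vote(42, 42, 41) == 42
```

The hard part isn't the vote, it's the premise behind it: three independent units that each compute the right answer under nominal conditions.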
With LLMs you have no such guarantee or expectation.
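So the most you get is a rough confidence signal, not a correctness guarantee. A minimal sketch of the "ask at least three times" check from the top of the thread; `ask` is a hypothetical stand-in for whatever client function sends a prompt to your model:

```python
import re
import statistics
from typing import Callable

def numeric_spread(ask: Callable[[str], str], prompt: str, n: int = 3):
    """Ask the same numeric question n times; return the parsed answers
    and their standard deviation as a rough confidence signal.

    `ask` is a placeholder -- wire it to your own LLM client.
    """
    answers = []
    for _ in range(n):
        reply = ask(prompt)
        m = re.search(r"-?\d+(?:\.\d+)?", reply)  # first number in the reply
        if m:
            answers.append(float(m.group()))
    spread = statistics.pstdev(answers) if len(answers) > 1 else 0.0
    return answers, spread

# e.g. answers=[95.0, 95.0, 212.0] with a large spread -> trust none of them
```

A large spread tells you something is wrong; a small spread only tells you the model is consistent, which is not the same as correct.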
Who remembers that scene on Better Call Saul between Lalo, Saul, and Kim?