> I don't presume this will immediately change your mind

I'm not saying AI isn't useful. I'm claiming it's not analogous to a compiler. If it were, you would treat your prompts as source code and check them into source control. Checking an LLM's output into source control is analogous to committing a compiler's machine-code output into source control.

My question still stands, though. What does it mean for a tool to be reliable when its input language is ambiguous? This isn't just about the LLM being nondeterministic. At some point those ambiguities have to be resolved, either by the prompter or by the LLM, but the resolution doesn't exist in the original input.
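To make the ambiguity point concrete, here's a hypothetical sketch (the names and functions below are mine, not from the thread): the prompt "sort the names by length" underdetermines how ties are broken, so two implementations can both satisfy it while disagreeing on the output. Neither resolution is in the prompt itself.

```python
names = ["eve", "bob", "dave", "al"]

def sort_by_length_stable(xs):
    # One resolution of the ambiguity: ties keep input order (stable sort).
    return sorted(xs, key=len)

def sort_by_length_alpha(xs):
    # Another resolution: ties broken alphabetically.
    return sorted(xs, key=lambda s: (len(s), s))

print(sort_by_length_stable(names))  # ['al', 'eve', 'bob', 'dave']
print(sort_by_length_alpha(names))   # ['al', 'bob', 'eve', 'dave']
```

Both outputs are "correct" relative to the prompt; the difference was decided by whoever (or whatever) resolved the ambiguity, not by the input.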