> All processes in reality, everywhere, are probabilistic.

If we want to get into philosophy then sure, you're correct, but that's not what we're saying.

For example, an LLM is capable of fabricating a reference to a non-existent source, and it's highly plausible that it will. Humans generally don't do that when their goal is clear and aligned (hence deterministic).

> Building a process to get a similar confidence in LLM output is part of the game.

Which is precisely my point. LLMs are supposed to be better than humans, yet we're (currently) shoehorning the technology into roles it isn't reliable enough to fill on its own.

> Humans generally don't do that when their goal is clear and aligned (hence deterministic).

Look at the language you're using here. Humans "generally" make fewer of these kinds of errors. "Generally". That is literally an assessment of likelihood. It is completely possible for me to hire someone so stupid that they create a reference to a non-existent source. It's completely possible for my high-IQ genius employee who is correct 99.99% of the time to have an off day and accidentally fat-finger something. It happens. Perhaps it happens at 1/100th of the rate that an LLM would do it. But that is simply an input I need to account for in the model of the process or system I'm trying to build.
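
To make that concrete, here's a minimal sketch of what "an input to the model of the process" means, treating error rates as parameters. All the numbers (a 1% LLM fabrication rate, humans at 1/100th of that, a review step that catches 99% of errors) are hypothetical placeholders taken from the framing above, not measurements:

```python
# Treat per-item error rates as inputs to a process model.
# All rates below are hypothetical placeholders for illustration.

def p_at_least_one_error(per_item_rate: float, n_items: int) -> float:
    """Probability that at least one of n independent items contains an error."""
    return 1.0 - (1.0 - per_item_rate) ** n_items

def effective_rate(per_item_rate: float, review_catch_rate: float) -> float:
    """Residual error rate after an independent review step that catches
    a given fraction of errors."""
    return per_item_rate * (1.0 - review_catch_rate)

llm_rate = 0.01               # hypothetical: LLM fabricates a reference 1% of the time
human_rate = llm_rate / 100   # "perhaps 1/100th of the rate" from the comment above

scenarios = [
    ("human", human_rate),
    ("LLM, no review", llm_rate),
    ("LLM + review catching 99%", effective_rate(llm_rate, 0.99)),
]
for label, rate in scenarios:
    p = p_at_least_one_error(rate, 1000)
    print(f"{label}: {p:.1%} chance of >=1 bad reference in 1000 citations")
```

The point isn't the specific figures; it's that once the error rate is a parameter, you can price in verification steps for an LLM exactly the way you would for a human process, and compare the two on the same footing.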

When humans make mistakes repeatedly in their job, they get fired.