Yes, but determinism != ambiguity: determinism means that for this exact input, the exact same output must follow.

If I ask the same model the same question I should be able to deterministically get the same answer.

Now if we phrase the same question slightly differently we would expect to get a slightly different answer.

> Now if we phrase the same question slightly differently we would expect to get a slightly different answer.

You wouldn't get this from an LLM, though: a tiny change in the starting point produces a massive change in output. It's a chaotic system.

Maybe predictability is what is meant?

Me: What’s an example of a dice roll?

LLM: 1

“Language ambiguity with determinism”? Sure, I can juxtapose the terms, but if the phrase is semantically inconsistent, then what we mean by it is not a deterministic, definitive thing. You’re chasing your tail on this ‘goal’.

Ambiguity: The request/prompt leaves a lot of room for interpretation. Many qualitatively different answers may be correct, relative to the prompt. Different or non-deterministic models will return high-variance results.

Determinism: If a model is given the exact same request/prompt twice, its two responses will also be identical, whether or not that consistent response qualifies as correct.

The two concepts are very different.

(Ambiguous vs. precise prompt) x (Deterministic vs. Non-deterministic model) = 4 different scenarios.

A model itself can be non-deterministic without being ambiguous. If you know exactly how it functions and why it is non-deterministic (batch-sensitive, for instance), that is not an ambiguous model. Its operation is completely characterized. But it is non-deterministic.
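A toy illustration of that batch-sensitivity point (a sketch, not how any particular model is implemented): floating-point addition is not associative, so summing the same values in a different order, as different batch layouts can do, yields different results even though the computation is fully characterized.

```python
# The same four numbers, summed in two different groupings.
xs = [1e16, 1.0, -1e16, 1.0]

# Left-to-right: 1e16 + 1.0 rounds back to 1e16, so one 1.0 is lost.
left_to_right = ((xs[0] + xs[1]) + xs[2]) + xs[3]

# Regrouped: the large terms cancel first, so both 1.0s survive.
regrouped = (xs[0] + xs[2]) + (xs[1] + xs[3])

print(left_to_right)  # 1.0
print(regrouped)      # 2.0
```

Nothing here is ambiguous or random; the divergence comes purely from evaluation order, which is the kind of thing batching can change.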

An ambiguous model would simply be a model whose operation was not characterized. A black box model, for instance. A black box model can be deterministic and yet ambiguous.

Maybe I got this wrong, but I thought ambiguity referred to the input. So in a deterministic system I would assume that an input of "Give an example of a dice roll" will always produce the exact same example (unless the model also gets the context of the message history).

Ambiguity is what happens when you change the prompt slightly, e.g. by adding a word: "Give an example of a single dice roll". Now, as humans, our expectation would be that this is the same question and should thus (in a deterministic system) receive the same answer. But to an LLM it may not be.

> Ambiguity: [...] Different or non-deterministic models will return highly variance results.

Yes, and thanks. That was my intended point - but you point out a better example. Slightly different prompts may also produce highly varied responses.

(My subsequent comments on ambiguous models were in case I was misinterpreting the comment I was replying to. I also generally think of ambiguity as a property of input. Either way, ambiguity is not the same as non-determinism.)

If you really want that to work while being reproducible, maybe give it a random number tool and set the seed?
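A minimal sketch of that idea, assuming a hypothetical `roll_dice` tool the model could call instead of generating the number itself (the function name and interface are made up for illustration):

```python
import random

def roll_dice(sides=6, seed=None):
    # With a fixed seed, the underlying PRNG sequence is reproducible,
    # so the tool returns the same roll on every run. With seed=None it
    # is seeded from entropy and varies across runs.
    rng = random.Random(seed)
    return rng.randint(1, sides)

print(roll_dice(seed=42))  # same value on every run
print(roll_dice(seed=42))  # identical to the line above
print(roll_dice())         # varies run to run
```

The model's language output can stay loose and varied while the roll itself is reproducible, which separates the two concerns the thread is mixing up.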

> LLM: 1

A perfectly acceptable answer.

If it answers 1 every time it's still a perfectly acceptable answer.

So is ‘2’ or ‘3’ or ‘19’ or ‘99’ or ‘a jam sponge cake with gaming dice for frosting’… The point is that in natural language there are many perfectly acceptable answers. Usually any particular answer is arbitrary, and for a majority of use cases it would probably be undesirable to get the same answer every time.