No? There's no model involved. It's all just probabilistic. LLMs understand what you're thinking as well as a mood ring.

It isn't possible for anything to be "just probabilistic" (maybe a philosophical exception could be made for a uniform random distribution, or whatever provides the little dose of randomness required to get nondeterministic results). Probabilities are always relative to a model. LLMs model language, but language itself is a model of something else. My money would have been on language modelling mere nonsense, but that is quite clearly not the case. It turns out language models the world, and so do LLMs.
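To make "probabilities are always relative to a model" concrete, here's a toy sketch (the corpus and both models are made up for illustration): the same event gets different probabilities depending on which model you ask.

```python
from collections import Counter

corpus = "the cat sat on the mat the cat ran".split()

# Model A: uniform over the vocabulary -- "just probabilistic" with no structure.
vocab = set(corpus)
def p_uniform(word, prev=None):
    return 1 / len(vocab)

# Model B: bigram frequencies learned from the corpus -- structure changes the numbers.
bigrams = Counter(zip(corpus, corpus[1:]))
prev_counts = Counter(corpus[:-1])
def p_bigram(word, prev):
    return bigrams[(prev, word)] / prev_counts[prev]

# The same event, "cat" after "the", under the two models:
print(p_uniform("cat", "the"))  # 1/6
print(p_bigram("cat", "the"))   # 2/3
```

Neither number is "the" probability of "cat"; each is the probability under a particular model, which is the point.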

The model is the thing that is learned in order to make probabilistic predictions with low entropy.

The literal definition of a model is "an informative representation of an object, person, or system". I think you mean something else, though; what are you trying to express, exactly?

Nothing about an LLM is “just”. In what precise sense do you mean it is probabilistic?

There's a reason stochastic was used in the original phrase instead of "probabilistic."

While most inference runs are intentionally non-deterministic, even a purely deterministic one would still be stochastic, in the sense that the model itself was built by a process in which the statistical frequency, sequencing, etc. of the training text and follow-up processes all heavily influence the result.
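The "intentionally non-deterministic" part is just the decoding step. A minimal sketch (toy logits, not any real model's API): at temperature 0 decoding collapses to a deterministic argmax, while at temperature > 0 it samples from the softmax.

```python
import math
import random

def sample_next(logits, temperature=1.0, rng=None):
    """Pick a next-token index from raw logits.

    temperature=0 is greedy argmax (deterministic);
    temperature>0 samples from the softmax (non-deterministic).
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    rng = rng or random.Random()
    scaled = [x / temperature for x in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    r = rng.random()
    acc = 0.0
    for i, e in enumerate(exps):
        acc += e / total
        if r < acc:
            return i
    return len(exps) - 1

logits = [2.0, 1.0, 0.5]
print(sample_next(logits, temperature=0))    # always index 0
print(sample_next(logits, temperature=1.0))  # usually 0, sometimes 1 or 2
```

Note that even the temperature=0 path is still "stochastic" in the sense above: the logits themselves came out of a training process shaped by the statistics of the data.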

Because of that, the model is not expected to generate 100% perfect output 100% of the time, but output that has a good probability of being like-in-kind to the training data (and useful/relevant as a result).

(As compared to a non-stochastic model, like arithmetic on integers, where 2+2 is always gonna be 4 and you don't have a chance of coming up with some novel pair of inputs to addition that will cause your arithmetic to miss the mark.)
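The contrast can be sketched in a few lines (the "next word" function and its distribution are invented stand-ins, not anything a real LLM does): the arithmetic holds for every input, while the stochastic stand-in only makes the good answer probable.

```python
import random

def add(a, b):
    # Non-stochastic model: same inputs, same output, every time.
    return a + b

def noisy_next_word(prefix, seed):
    # Stochastic stand-in: the "right" continuation is only probable,
    # not guaranteed. Vocabulary and weights are made up for illustration.
    vocab = ["4", "four", "5"]
    weights = [0.6, 0.3, 0.1]
    return random.Random(seed).choices(vocab, weights=weights)[0]

print(add(2, 2))                   # 4, for every run, forever
print(noisy_next_word("2+2=", 0))  # varies with seed: usually "4", sometimes not
```

No pair of integers will ever make `add` miss; any given seed can make `noisy_next_word` miss, which is exactly the like-in-kind-but-not-guaranteed behaviour described above.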