Wait, sorry, how did you use and expose seeds? That's the most interesting part of your post.

We were not a ChatGPT wrapper; we used a finetuned open-source model running on our own hardware, so we naturally had full control of the input parameters. I apologize if my language was ambiguous, but by "expose seeds" I simply meant that users can see the seed used for each prompt and enter their own in the UI, rather than "exposing secrets" of the frontier LLM APIs, if that's what you took it to mean.

I just wanted deterministic outputs and was curious how you were doing it. Sounds like it's probably temp = 0, which major providers no longer offer. Thanks for your response.

No, seed and temperature are separate parameters accepted by the inference engine. You can still get deterministic outputs at high temperature if you use the same seed, provided the inference engine itself operates deterministically and the hardware is deterministic (in testing, we did observe small non-deterministic variations when running the same prompt on the same stack but a different model of GPU).
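A toy sketch of why seed and temperature are independent, using numpy for illustration (this is not our actual inference stack, and the logits/parameters are made up). Temperature only reshapes the token distribution; the seed fixes the random stream used to sample from it, so same seed plus same distribution gives the same tokens:

```python
import numpy as np

def sample_tokens(logits, temperature, seed, n=5):
    # Seeded RNG: the seed fully determines the sampling stream.
    rng = np.random.default_rng(seed)
    # Temperature-scaled softmax: high temp flattens, low temp sharpens.
    probs = np.exp(logits / temperature)
    probs /= probs.sum()
    return rng.choice(len(logits), size=n, p=probs).tolist()

# Hypothetical logits over a 4-token vocabulary.
logits = np.array([2.0, 1.0, 0.5, 0.1])

a = sample_tokens(logits, temperature=1.5, seed=42)
b = sample_tokens(logits, temperature=1.5, seed=42)

print(a == b)  # True: same seed + same temperature -> identical samples
```

In a real engine the same principle applies per request (e.g. vLLM accepts a per-request seed in its sampling parameters), with the caveats above about the engine and hardware themselves being deterministic.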