> other side???
> We don’t have to look at assembly, because a compiler produces the same result every time.
This is technically true in the narrowest possible sense and practically misleading in almost every way that matters. Anyone who's had a bug that only manifests at -O2, fought undefined behavior in C that two compilers handle differently, watched MSVC and GCC produce meaningfully different codegen from identical source, or hit a Heisenbug that disappears when you add a printf knows that the "deterministic compiler" is doing a LOT of work in that sentence, work that actual compilers don't deliver on.
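To make one of those concrete, here's a toy example of my own (assuming GCC or Clang on a typical x86-64 setup, nothing from the parent): signed overflow is undefined behavior, so the optimizer is allowed to assume the loop condition never goes false, and the same file can legitimately do different things at -O0 and -O2.

    /* ub.c -- illustration only. Signed overflow is undefined behavior,
     * so the optimizer may assume `i > 0` stays true forever. Built with
     * -O0 this typically prints three lines and exits (i wraps to INT_MIN
     * on common hardware); built with -O2 the same loop may spin forever. */
    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        for (int i = INT_MAX - 2; i > 0; i++)   /* i++ past INT_MAX is UB */
            printf("i = %d\n", i);
        return 0;
    }

Still deterministic in the strict sense argued downthread: same compiler, same flags, same bytes out. But "the compiler produces the same result every time" quietly smuggles in "and that result is the one you meant," which is where the practical trouble lives.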
Also, what's with the "sides" and "camps"? ... Why would you not keep your identity small here? Why define yourself as a {pro, anti} AI person so early? So weird!
You just described deterministic behavior. Bugs are also deterministic. You don’t get different bugs every time you compile the same code the same way. With LLMs you do.
Re: “other side” - I’m quoting the grandparent’s framing.
GCC is, I imagine, several orders of magnitude more deterministic than an LLM.
It’s not _more_ deterministic. It’s deterministic, period. The LLMs we use today are simply not.
Build systems may be deterministic in the narrow sense you use, but significant extra effort is required to make them reproducible.
Engineering in the broader sense often deals with managing the outputs of variable systems to get known good outcomes to acceptable tolerances.
Edit: added second paragraph
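A mundane example of where that extra effort goes, assuming GCC (the SOURCE_DATE_EPOCH handling is real; the file itself is just an illustration): the build timestamp is an input to the compile, and unless you pin it, two builds of identical source differ byte-for-byte.

    /* stamp.c -- illustration only. __DATE__ and __TIME__ fold the wall
     * clock into the binary, so compiling this twice a second apart
     * yields two executables that differ even though the source is
     * identical. Reproducible-builds tooling pins that input; GCC 7+
     * substitutes the SOURCE_DATE_EPOCH environment variable for the
     * current time when it is set. */
    #include <stdio.h>

    int main(void) {
        printf("built %s %s\n", __DATE__, __TIME__);
        return 0;
    }

Whether you file the timestamp under "a different input" or under "effort required to make the build reproducible" is, arguably, most of the disagreement here.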
I'm not using a narrow sense. There is no elasticity here. See https://en.wikipedia.org/wiki/Deterministic_system
> significant extra effort is required to make them reproducible.
Zero extra effort is required. It is reproducible. The same input produces the same output. The "my machine" in "Works on my machine" is an example of input.
> Engineering in the broader sense often deals with managing the outputs of variable systems to get known good outcomes to acceptable tolerances.
You can have unreliable AIs building a thing, with some guidance and self-course-correction. What you can't have is outcomes also verified by unreliable AIs who may be prompt-injected to say "looks good". You can't do unreliable _everything_: planning, execution, verification.
If an AI decided to code an AI-bound implementation, then even tolerance verification could be completely out of whack. Your system could pass today and fail tomorrow. It's layers and layers of moving ground. You have to put the stake down somewhere. For software, I say it has to be code. Otherwise, AI shouldn't build software, it should replace it.
That said, you can build seemingly working things on moving ground that bring value. It's a brave new world. We've yet to see whether we're heading for net gain or net loss.
If we want to get really narrow, I'd say real determinism is possible only in abstract systems. To which you'd reply that it's just my ignorance of all the factors involved, and hence the incompleteness of the model; to which I'd point to the practical limitations of accounting for all of them. For that reason, even though it's incorrect and I don't use it this way myself, I understand why some people attach qualifiers like more/less to "deterministic", probably for lack of a better construct.
I don't think I'm being pedantic or narrow. Cosmic rays, power spikes, and falling cows can change the course of deterministic software. I'm saying that your "compiler" either has intentionally designed randomness (or "creativity") in it, or it doesn't. Not sure why we're acting like these are more or less deterministic. They are either deterministic or not inside normal operation of a computer.
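To spell out what that intentionally designed randomness concretely is, here's a minimal sketch of temperature sampling (the logits are made up, there's no real model behind this; compile with -lm):

    /* sample.c -- minimal sketch of temperature sampling over made-up
     * "next token" scores. The randomness is a design choice: with
     * temperature <= 0 we decode greedily (repeatable), with
     * temperature > 0 and an unseeded RNG each run can pick a
     * different token. Build with: cc sample.c -lm */
    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    static int sample(const double *logits, int n, double temperature) {
        if (temperature <= 0.0) {                 /* greedy: argmax, deterministic */
            int best = 0;
            for (int i = 1; i < n; i++)
                if (logits[i] > logits[best]) best = i;
            return best;
        }
        double sum = 0.0;                         /* unnormalized softmax(logits / T) */
        for (int i = 0; i < n; i++)
            sum += exp(logits[i] / temperature);
        double r = ((double)rand() / RAND_MAX) * sum;   /* point in [0, sum] */
        for (int i = 0; i < n; i++) {             /* inverse-CDF draw */
            r -= exp(logits[i] / temperature);
            if (r <= 0.0) return i;
        }
        return n - 1;
    }

    int main(void) {
        const double logits[4] = {2.0, 1.5, 0.3, -1.0};  /* fake scores */
        srand((unsigned)time(NULL));              /* typical, non-reproducible seeding */
        for (int run = 0; run < 5; run++)
            printf("greedy=%d  sampled(T=1.0)=%d\n",
                   sample(logits, 4, 0.0), sample(logits, 4, 1.0));
        return 0;
    }

Fix the seed or run at temperature 0 and this little function is deterministic again; production inference stacks can add further variance on top (batching, parallel floating-point reductions on GPUs), which is why even "temperature 0" outputs from hosted models aren't always bit-identical in practice.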