> if I had written it completely by myself from the start, I would have finished the project in the same amount of time but I’d understand the details far better.
I believe the argument from the other camp is that you don't need to understand the code anymore, just like you don't need to understand the assembly language.
Of all the points the "other side" makes, this one seems the most incoherent. Code is deterministic, AI isn’t. We don’t have to look at assembly, because a compiler produces the same result every time.
If you only understand the code by talking to AI, you could ask the AI "how do we do a business feature" and it would spit out a detailed answer for a codebase that just says "pretend there is a codebase here". That is of course an extreme example, and you would probably notice it, but the same failure applies at every level.
Any detail, anywhere, cannot be fully trusted. I believe everyone's goal should be to prompt AI such that the code is the source of truth, and to keep the code super readable.
If AI is so capable, it's also capable of producing clean, readable code. And we should be reading all of it.
“Of all the points the other side makes, this one seems the most incoherent. Code is deterministic, AI isn’t. We don’t have to look at assembly, because a compiler produces the same result every time.”
This is a valid argument. However, if you create test harnesses using multiple LLMs validating each other’s work, you can get very close to compiler-like deterministic behavior today. And this process will improve over time.
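For what it's worth, a minimal sketch of what such a cross-checking harness could look like (the model names, prompts, and `chat()` helper are hypothetical, assuming an OpenAI-style chat API; this reduces variance, it doesn't make anything deterministic):

```python
# Rough sketch: one model writes a patch, other models review it, and
# objections are fed back to the writer. All names/prompts are placeholders.
from openai import OpenAI

client = OpenAI()

def chat(model: str, prompt: str) -> str:
    resp = client.chat.completions.create(
        model=model,
        temperature=0,  # reduce (not eliminate) run-to-run variation
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def cross_validated_patch(task: str, writer: str, reviewers: list[str]) -> str:
    patch = chat(writer, f"Write a minimal patch for this task:\n{task}")
    for reviewer in reviewers:
        verdict = chat(
            reviewer,
            f"Task:\n{task}\n\nProposed patch:\n{patch}\n\n"
            "Reply APPROVE if the patch fully solves the task, otherwise list the problems.",
        )
        if not verdict.strip().startswith("APPROVE"):
            # feed the objections back to the writer and try again
            patch = chat(writer, f"Revise the patch. Reviewer said:\n{verdict}\n\nTask:\n{task}")
    return patch
```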
It helps, but it doesn't make it deterministic. LLMs could all be misled together. A different story would be if we had deterministic models, where the exact same input always results in the exact same output. I'm not sure why we don't try this tbh.
I've been wondering if there are better random seeds, like how there are people who hunt for good seeds in Minecraft
It's literally just setting T=0. Except they're not as creative then; they don't explore alternative ideas away from the mean.
Are you sure it's just T=0? My comment's first draft said "it can't just be setting temp to zero, can it?" but I felt like T alone is not enough. Try running the same prompt in new sessions with T=0, like "write a poem". Will it produce the same poem each time? (I'm not somewhere I can try it at the moment.)
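If you want to try it, here's roughly what that experiment looks like against an OpenAI-style API (the model name and seed are arbitrary placeholders; even at T=0 with a fixed seed, vendors only promise best-effort reproducibility, so the poems may still differ):

```python
# Sketch of the "same prompt, fresh calls, T=0" experiment from the comment above.
from openai import OpenAI

client = OpenAI()

def generate(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",   # placeholder model name
        temperature=0,
        seed=1234,             # best-effort determinism, not a guarantee
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

poems = [generate("Write a poem about compilers.") for _ in range(3)]
print("identical across runs:", len(set(poems)) == 1)
```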
> just add more magic turtles to the stack, bro
You're just amplifying hallucination and bias.
> other side???
> We don’t have to look at assembly, because a compiler produces the same result every time.
This is technically true in the narrowest possible sense and practically misleading in almost every way that matters. Anyone who's had a bug that only manifests at -O2, or fought undefined behavior in C that two compilers handle differently, or watched MSVC and GCC produce meaningfully different codegen from identical source, or hit a Heisenbug that disappears when you add a printf ... knows the "deterministic compiler" is doing a LOT of work in that sentence that actual compilers don't deliver on.
Also what's with the "sides" and "camps?" ... why would you not keep your identity small here? Why define yourself as a {pro, anti} AI person so early? So weird!
You just described deterministic behavior. Bugs are also deterministic. You don’t get different bugs every time you compile the same code the same way. With LLMs you do.
Re: “other side” - I’m quoting the grandparent’s framing.
GCC is, I imagine, several orders of magnitude more deterministic than an LLM.
It’s not _more_ deterministic. It’s deterministic, period. The LLMs we use today are simply not.
Build systems may be deterministic in the narrow sense you use, but significant extra effort is required to make them reproducible.
Engineering in the broader sense often deals with managing the outputs of variable systems to get known good outcomes to acceptable tolerances.
Edit: added second paragraph
I'm not using a narrow sense. There is no elasticity here. See https://en.wikipedia.org/wiki/Deterministic_system
> significant extra effort is required to make them reproducible.
Zero extra effort is required. It is reproducible. The same input produces the same output. The "my machine" in "Works on my machine" is an example of input.
> Engineering in the broader sense often deals with managing the outputs of variable systems to get known good outcomes to acceptable tolerances.
You can have unreliable AIs building a thing, with some guidance and self-course-correction. What you can't have is outcomes also verified by unreliable AIs who may be prompt-injected to say "looks good". You can't do unreliable _everything_: planning, execution, verification.
If an AI decided to code an AI-bound implementation, then even tolerance verification could be completely out of whack. Your system could pass today and fail tomorrow. It's layers and layers of moving ground. You have to put the stake down somewhere. For software, I say it has to be code. Otherwise, AI shouldn't build software, it should replace it.
That said, you can build seemingly working things on moving ground, that bring value. It's a brave new world. We're yet to see if we're heading for net gain or net loss.
If we want to get really narrow, I'd say real determinism is possible only in abstract systems, to which you'd reply that it's just my ignorance of all the possible factors involved and hence the incompleteness of the model. To which I'd point out the practical limitations involved with that. And for that reason, even though it is incorrect and I don't use it this way, I understand why some people use the quantifiers more/less with the term "deterministic", probably for lack of a better construct.
I don't think I'm being pedantic or narrow. Cosmic rays, power spikes, and falling cows can change the course of deterministic software. I'm saying that your "compiler" either has intentionally designed randomness (or "creativity") in it, or it doesn't. Not sure why we're acting like these are more or less deterministic. They are either deterministic or not inside normal operation of a computer.
That will never happen unless we figure out a far simpler way to prove the system does what it should. If you've ever had bugs crop up despite a full test suite, you should know this is incredibly hard to do.
LLMs can't read your mind. In the end they're always taking the English prompt and making a bunch of fill-in-the-blank assumptions around it. This is inevitable if we're to get any productivity improvements out of them.
Sometimes it's obvious and we can catch the assumptions we didn't want (the div isn't centered! fix it, Claude!), and sometimes you actually have to read and understand the code to see that it's not going to do what you want under important circumstances.
If you want a 100% perfect communication of the system in your mind, you should use a terse language built for it: that's called code. We'd just write the code instead.
We can do both. We can write code for the parts where it matters and let the LLM code the parts that aren't as critical.
People who really care about performance still do look at the assembly. Very few people write assembly anymore; a larger number look at assembly every so often. It's still a minority of people though.
I guess it would be similar here: a small few people will hand write key parts of code, a larger group will inspect the code that’s generated, and a far larger group won’t do either. At least if AI goes the way that the “other side” says.
>I believe the argument from the other camp is that you don't need to understand the code anymore
Then what stops anyone who can type in their native language from, once LLMs are perfected, just ordering their own software instead of using anybody else's (speaking about native apps like video games, mobile phones, desktop, etc.)?
Do they actually believe we'll need a bachelor's degree to prompt-program in a world where nobody cares about technical details, because the LLMs will be taking care of them? Actually, scratch that. Why would the companies pouring gorillions of dollars of investment into this even give access to such power in an affordable way?
The deeper I look into the rabbit hole they think we're walking towards, the more issues I see.
At least for me, the game-changer was realizing I could (with the help of AI) write a detailed plan up front for exactly what the code would be, and then have the AI implement it in incremental steps.
Gave me way more control over and understanding of what the AI would do, and the ability to iterate on the plan before actually implementing it.
Indeed. This is very much the way I use it at work. Present an idea of a design, iterate on it, then make a task/todo list and work through the changes piecemeal, reviewing and committing as I go. I find pair design/discussion practical here too. I expect to see smaller teams working like this in the future.
For small personal projects, it's more vibey... e.g. home automation native UIs & services for Mac & Windows, which I wouldn't otherwise start... more itches that can be scratched in my limited time.
For quite a bit of software you would need to understand the assembly. Not everything is web services.
I've found LLMs (since Opus 4.5) exceptionally good at reading and writing and debugging assembly.
Give them gdb/lldb and have your mind blown!
Do you mean gdb batch mode (which I've heard of others using with LLMs), or the LLM using gdb interactively?
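For context, "batch mode" here just means driving gdb non-interactively and reading the transcript. A minimal sketch of what an agent's tool call could look like (binary path and arguments are made up; the interactive variant is the same idea, except the LLM issues one gdb command at a time):

```python
# Run the program under gdb in batch mode, dump a backtrace when it stops,
# and capture the text so the LLM can read it. Paths/flags are placeholders.
import subprocess

result = subprocess.run(
    [
        "gdb", "--batch",
        "-ex", "run",
        "-ex", "bt",                      # backtrace once the program stops/crashes
        "--args", "./myprog", "--some-flag",
    ],
    capture_output=True,
    text=True,
)
print(result.stdout)  # hand this transcript to the LLM
```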
I've only needed assembly once in more than 20 years of programming, and I'm not a webdev.
It was during university, to get access to CPU counters for better instrumentation, like 15 years ago. Haven't needed it since.