To the people who are against AI programming, honest question: why do you not program in assembly? Can you really say "you" "programmed" anything at all if a compiler wrote your binaries?

This is a 100% honest question, because whatever your justification for this is, it can probably be applied to AI programmers using temperature 0.0 as well, just one abstraction level higher.

I'm 100% honestly looking forward to finding a single justification that would not fit both scenarios.

I'm all for AI programming.

But I've seen this conversation on HN already 100 times.

The answer they always give is that compilers are deterministic and therefore trustworthy in ways that LLMs are not.

I personally don't agree at all, in the sense that I don't think it matters. I've run into compiler bugs, and more library bugs than I can count. The real world is just as messy as LLMs are, and you still need the same testing strategies to guard against errors. Development is always a slightly stochastic process: you write stuff that you eventually get to work on your machine, then fix all the bugs that get revealed once it starts running on other people's machines in the wild. LLMs don't write perfect code, and neither do you. Both require iteration and testing.

  > The answer they always give is that compilers are deterministic and therefore trustworthy in ways that LLMs are not.
I don't actually see this given as an answer very often, tbh, but I do frequently see people claiming it's the standard critique.

I wrote much more here[0], and honestly I'm on the side of Dijkstra: it doesn't matter whether the LLM is deterministic or probabilistic.

  It may be illuminating to try to imagine what would have happened if, right from the start our native tongue would have been the only vehicle for the input into and the output from our information processing equipment. My considered guess is that history would, in a sense, have repeated itself, and that computer science would consist mainly of the indeed black art how to bootstrap from there to a sufficiently well-defined formal system. We would need all the intellect in the world to get the interface narrow enough to be usable, and, in view of the history of mankind, it may not be overly pessimistic to guess that to do the job well enough would require again a few thousand years.
  - Dijkstra: On the foolishness of "natural language programming"
His argument has nothing to do with determinism[1] and everything to do with the precision of the language. It comes down to "we invented symbolic languages for a good reason".

[0] https://news.ycombinator.com/item?id=46928421

[1] If we want to be more pedantic, we can codify his argument more simply using some mathematical language, though even this takes some interpretation: natural language naturally imposes a one-to-many relationship when processing information.

Nah bro I'll just ask the LLM to do better next time /s

It amazes me that people say that seriously, given that in the next breath they'll complain about how their manager doesn't know shit and is leading blind, or complain about how you don't understand. Maybe we need to install more mirrors.

I just answered exactly that. I think that AI agents code better than humans and are the future.

But the parent argument is pretty bad, in my opinion.

Depends on which humans you're comparing the AI code to. I've seen and reviewed enough AI code in the last few months to have formed a solid impression that it's "ok" at best and relies heavily on who guides it: how well the spec is defined, what kind of rules are set, coding styles, architecture patterns.

The prompt user is basically selecting patterns from latent space, so you kind of need to know what you're looking for. When you don't know what you're looking for, that's when the fun begins. But that's a problem for next quarter.

That's true for a more guided development approach. But the further you go into vibecode territory, the less you need to know.

There's a big difference between deterministic abstraction over machine code, and probabilistic translation of ambiguous language into machine code.

The compiler is your interface.

If you treat an LLM as your interface... well, I wouldn't want to share a codebase with you.

I'm not particularly against AI programming, but I don't think these two things are equivalent. A compiler translates code according to a specification in a deterministic way: the same compiler produces the same output from the same code, and it is all completely controlled. AI is not at all deterministic; temperature is built into LLMs, and on top of that there's the lack of specificity in prompts and our spoken languages. The difference in control is significant enough that I don't put compilers and AI coding agents into the same category, even though both take some text and produce some other text/machine code.
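For concreteness, here's a minimal sketch (not from this thread, just an illustration) of what "temperature" means mechanically: it scales the model's next-token scores before normalizing them into a probability distribution, and at temperature 0 sampling collapses to a deterministic argmax.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature, then normalize into probabilities.
    As temperature -> 0 this collapses onto the argmax (greedy decoding)."""
    if temperature <= 0:
        # Temperature 0 is conventionally defined as a pure argmax pick.
        probs = [0.0] * len(logits)
        probs[logits.index(max(logits))] = 1.0
        return probs
    m = max(logits)  # shift by the max for numerical stability
    exps = [math.exp((x - m) / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # hypothetical next-token scores
print(softmax_with_temperature(logits, 1.0))  # a soft distribution
print(softmax_with_temperature(logits, 0.0))  # [1.0, 0.0, 0.0] -- deterministic pick
```

Even at temperature 0, though, the output is only deterministic for a fixed prompt and fixed weights; the ambiguity of the prompt itself is untouched.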

A chef can cook a steak better than a robo-jet-toaster, even though neither of them has raised the cow.

It's not about having abstraction levels above or below (BTW, in 21st century CPUs, the machine code itself is an abstraction over much more complex CPU internals).

It's about writing more correct, efficient, elegant, and maintainable code at whichever abstraction layer you choose.

AI still writes messier, sloppier, buggier, more redundant code than a good programmer can when they care about the craft of writing code.

The end result is worse for those who care about the quality of code.

We mourn, because the quality we paid so much attention to is becoming unimportant compared to the sheer quantity of throwaway code that can be AI-generated.

We're fine dining chefs losing to factory-produced junk food.

I know how to review code without looking at the corresponding assembly and have high confidence in the behavior of the final binary. I can't quite say the same for a prompt without looking at the generated code, even with temperature 0. The difference is explainability, not determinism.

Even if you are not coding in assembly, you still need to think. Replace the LLM with a smart programmer: I don't want the other guy to do all the thinking for me. It's much better if it's a collaborative process, even if the other guy could have coded the perfect solution without my help. Otherwise, why am I even in the picture?


Compilers are deterministic.

There is no requirement for compilers to be deterministic. The requirement is that a compiler produces a valid interpretation of the program according to the language specification; unspecified details (like the specific ordering of instructions in the resulting code) could in principle be chosen nondeterministically and differ between separate executions of the compiler.
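A Python analogy for that distinction (an illustration of the principle, not a claim about any particular compiler): the language fixes observable behavior while leaving representation details open. Set equality is fully specified, but set iteration order is not, and it can genuinely vary across runs because string hashing is randomized per process.

```python
# Specified behavior: two sets with the same elements are equal,
# regardless of the order in which they were built.
a = {"login", "logout", "signup"}
b = {"signup", "logout", "login"}
assert a == b

# Unspecified detail: iteration order is not part of the spec and can
# differ between runs (string hashes are randomized per process), much
# like instruction ordering in a compiler's output.
print(list(a))    # some order; don't rely on it
print(sorted(a))  # ['login', 'logout', 'signup'] -- a deterministic view
```

Both programs (and both hypothetical compiler outputs) are "correct"; determinism of the artifact is a separate property from conformance to the spec.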

We are not talking about deterministic instructions, we are talking about deterministic behavior.

UB is actually a big deal, and is avoided for a reason.

I couldn't for the life of me guess what CC would do in response to "implement login form". For all I know, CC's response could depend more on the time of day or Anthropic's electricity bill last month than on the existing code in my app and the specific wording I use.

For me, the whole goal is to achieve Understanding: understanding a complex system, which is the computer and how it works. The beauty of this Understanding is what drives me.

When I write a program, I understand the architecture of the computer, I understand the assembly, I understand the compiler, and I understand the code. There are things that I don't understand, and as I push to understand them, I am rewarded by being able to do more things. In other words, Understanding is both beautiful and incentivized.

When making something with an LLM, I am disincentivized from actually understanding what is going on, because understanding is very slow, and the whole point of using AI is speed. The only time when I need to really understand something is when something goes wrong, and as the tool improves, this need will shrink. In the normal and intended usage, I only need to express a desire to achieve a result. Now, I can push against the incentives of the system. But for one, most people will not do that at all; and for two, the tools we use inevitably shape us. I don't like the shape into which these tools are forming me - the shape of an incurious, dull, impotent person who can only ask for someone else to make something happen for me. Remember, The Medium Is The Message, and the Medium here is, Ask, and ye shall receive.

The fact that AI use leads to a reduction in Understanding is not only obvious; studies have shown the same. People who can't see this are refusing to acknowledge the obvious, in my opinion. They wouldn't disagree that having someone else do your homework for you means you didn't learn anything. But somehow, when an LLM tool enters the picture, it's different. They're a manager now instead of a lowly worker. The problem with this thinking is that, in your example, moving from say assembly to C automates tedium to allow us to reason at a higher level. But LLMs are automating reasoning itself. There is no higher level to move to. The reasoning you do now while using AI is merely a temporary deficiency in the tool. It's not likely that you or I are in the 0.01% of people who can create something truly novel that is not already sufficiently compressed into the model. So enjoy that bit of reasoning while you can, o thou Man of the Gaps.

They say that writing is God's way of showing you how sloppy your thinking is. AI tools discourage one from writing. They encourage us to prompt, read, and critique. But this does not result in the same Understanding as writing does. And so our thinking will be, become, and remain vapid, sloppy, inarticulate, invalid, impotent. Welcome to the future.

  > why do you not program in assembly?
There's a balance of levels of abstraction. Abstraction is a great thing. Abstraction can make your programs faster, more flexible, and more easy to understand. But abstraction can also make your programs slower, more brittle, and incomprehensible.

The point of code is to write a specification. That is what code is. The whole reason we use a pedantic and somewhat cryptic schema is that natural language is too abstract. This is the exact reason we created math. It is really even the same reason we created things like "legalese".

Seriously, just try a simple exercise and be adversarial with yourself. Describe how to do something and try to find loopholes. Malicious compliance. It's hard to defend against, and writing that spec becomes extremely verbose, right? Doesn't this actually start to become easier with coding techniques? Strong definitions? Have we all forgotten the old saying "a computer does exactly what you tell it to do, not what you intend to tell it to do"? Vibe coding only adds a level of abstraction to that: it becomes "a computer does what it 'thinks' you are telling it to do, not what you intend to tell it to do". Be honest with yourself: which paradigm is easier to debug?

Natural language is awesome because the abstraction really compresses concepts, but it requires inference of the listener. It requires you to determine what the speaker intends to say rather than what the speaker actually says.
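A concrete toy example of that one-to-many inference (the names and the task are made up for illustration): the English instruction "sort the names alphabetically" admits several reasonable readings, and code forces you to pick one.

```python
names = ["alice", "Bob", "Åsa", "charlie"]

# Reading 1: raw code-point order (uppercase sorts before lowercase).
print(sorted(names))                 # ['Bob', 'alice', 'charlie', 'Åsa']

# Reading 2: case-insensitive comparison.
print(sorted(names, key=str.lower))  # ['alice', 'Bob', 'charlie', 'Åsa']

# Reading 3: locale-aware collation would differ again ('Å' sorts near 'A'
# in German but after 'Z' in Swedish). The English sentence doesn't say
# which of these the speaker intends; the code does.
```

Every reading is a defensible interpretation of the same sentence; the symbolic form is what collapses the many back to one.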

Without that you'd have to be pedantic to even describe something as mundane as making a sandwich[1]. But inference also leads to misunderstandings and frankly, that is a major factor of why we talk past one another when talking on large global communication systems. Have you never experienced culture shock? Never experienced where someone misinterprets you and you realize that their interpretation was entirely reasonable?[2] Doesn't this knowledge also help resolve misunderstandings as you take a step back and recheck assumptions about these inferences?

  > using temperature 0.0
  Because, as you should be able to infer from everything I've said above, the problem isn't actually randomness in the system. Making the system deterministic has only one realistic outcome: a programming language. You're still left with the computer doing what you tell it to do, but you've made it more abstract. You've only turned it into the PB&J problem[1], and frankly, I'd rather write code than instructions like the ones those kids write. Compared to the natural language the kids are using, code is more concise, easier to understand, more robust, and more flexible.

I really think Dijkstra explains things well[0]. (I really do encourage reading the entire thing. It is short and worth the 2 minutes. His remark at the end is especially relevant in our modern world where it is so easy to misunderstand one another...)

  The virtue of formal texts is that their manipulations, in order to be legitimate, need to satisfy only a few simple rules; they are, when you come to think of it, an amazingly effective tool for ruling out all sorts of nonsense that, when we use our native tongues, are almost impossible to avoid.

  Instead of regarding the obligation to use formal symbols as a burden, we should regard the convenience of using them as a privilege: thanks to them, school children can learn to do what in earlier days only genius could achieve. 
[0] https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667...

[1] https://www.youtube.com/watch?v=FN2RM-CHkuI

[2] Has this happened to you and you've been too stubborn to realize the interpretation was reasonable?
