I see this comparison made constantly and for me it misses the mark.

When you use abstractions you are still deterministically creating something you understand in depth with individual pieces you understand.

When you vibe something you understand only the prompt that started it and whether or not it spits out what you were expecting.

Hence the feeling of being lost when you suddenly lose access to frontier models and look at your code for the first time.

I’m not saying that’s necessarily always bad, just that the abstraction argument is wrong.

I think it's more: when I don't have access to a compiler, I am useless. It's better to go for a walk than to learn assembly. AI agents turn our high-level (natural) language into code, guided by various hints, much like a compiler does.

If my compiler "went down" I could still think through the problem I was trying to solve, maybe even work out the code on paper. I could reach a point where I would be fairly confident that I had the problem solved, even though I lacked the ability to actually implement the solution.

If my LLM goes down, I have nothing. I guess I could imagine prompts that might get it to do what I want, but there's no guarantee that those would work once it's available again. No amount of thought on my part will get me any closer to the solution, if I'm relying on the LLM as my "compiler".

What stops you from thinking through the problem if an LLM goes down, as you still have its previously produced code in front of you? It's worse if a compiler goes down because you can't even build the program to begin with.

In my opinion, this sort of learned helplessness is harmful for engineers as a whole.

Yeah, I actually find writing the prompt itself to be such a useful mechanism for thinking through problems that I will not infrequently find myself a couple of paragraphs in and decide to delete everything I've written and take a new tack. Only when you're truly outsourcing your thinking to the AI will you run into the situation where the LLM being down means you can't work at all.

An interesting element here, I think, is that writing has always been a good way to force you to organize and confront your thoughts. I've liked working on writing-heavy projects, but in fast-moving environments writing things out before coding becomes easy to skip. Working with LLMs has sort of inverted that: you have to write to produce code with AI (usually, at least), and the more clarity of thought you put into the writing, the better the outcomes (usually).

Why couldn’t you actually write out the documents and think through the problem? I think my interaction is inverted from yours. I have way more thinking and writing I can do to prep an agent than I can a compiler and it’s more valuable for the final output.

I think if you're vibe coding to the extent that you don't even know the shapes of data your system works with (e.g. the schema if you use a database) you might be outsourcing a bit too much of your thinking.

This. When compilers came along, I believe a bunch of junior engineers gave up entirely on understanding the shape of the assembly being generated, which was a mistake given that early compilers weren't as effective as they are today. Today's vibe-coders are using similarly early AI tooling, giving up on understanding the shape, and similarly struggling.

If your compiler produced working executable 20% of the time this would be an apt comparison.

Still misses the mark. You aren’t useless in the same way because you are still in control of reasoning about the exact code even if you never actually write it.

Compilers are deterministic, LLMs are not. They are not "much like".

The difference is that there is a company that can easily take your agents away from you.

Installed on your machine vs. cloud service that's struggling to maintain capacity is an unfair comparison...

> you are still deterministically creating something you understand in depth with individual pieces you understand

You’re overestimating determinism. In practice most of our code is written such that it works most of the time. This is why we have bugs in the best and most critical software.

I used to think that being able to write a deterministic hello-world app translates to writing a deterministic larger system. It's not true. Humans make mistakes. From an executive's point of view, you have humans who make mistakes and agents who make mistakes.

Self driving cars don’t need to be perfect they just need to make fewer mistakes.

Bugs are not non-determinism. There’s a huge difference between writing buggy code and having no idea what the code even looks like.

"When you use abstractions you are still deterministically creating something you understand in depth with individual pieces you understand."

I always thought the point of abstraction is that you can black-box it via an interface. Understanding it "in depth" is a distraction or obstacle to successful abstraction.

> When you use abstractions you are still deterministically creating something you understand in depth with individual pieces you understand

Hard disagree on that second part. Take something like using a library to make an HTTP call. I don't think there are many engineers who have more than a cursory understanding of what's actually going on under the hood.
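To make "under the hood" concrete, here's a minimal, network-free sketch (Python chosen arbitrarily) of just two of the things an HTTP library hides behind a single call; real clients additionally handle sockets, TLS, redirects, connection pooling, chunked transfer encoding, compression, and more:

```python
# Two small pieces of what an HTTP client library does for you:
# (1) assemble the raw request bytes, (2) parse the raw response.
# No network access; the "response" is a hard-coded byte string.

def build_request(host: str, path: str = "/") -> bytes:
    """Assemble the bytes a client would write to the socket."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).encode("ascii")

def parse_response(raw: bytes) -> tuple[int, dict, bytes]:
    """Split a raw HTTP/1.1 response into status code, headers, body."""
    head, _, body = raw.partition(b"\r\n\r\n")
    lines = head.decode("iso-8859-1").split("\r\n")
    status = int(lines[0].split()[1])  # "HTTP/1.1 200 OK" -> 200
    headers = dict(line.split(": ", 1) for line in lines[1:])
    return status, headers, body

raw = (b"HTTP/1.1 200 OK\r\n"
       b"Content-Type: text/plain\r\n"
       b"Content-Length: 5\r\n"
       b"\r\n"
       b"hello")
status, headers, body = parse_response(raw)
```

Even this toy version papers over plenty (header casing, multi-valued headers, status lines without a reason phrase), which is rather the point: the abstraction works precisely because you don't have to know all of it.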

It might just be social. When I use the open source http library, much of the reason I use it is because someone has put in the work of making sure it actually works across a diverse set of software and hardware platforms, catching common dumb off by ones, etc.

Sure, the LLM theoretically can write perfect code, just like you could theoretically write perfect code. In real life, though, maintenance is a huge issue.

Perhaps then, the better analogy is like being promoted at your company and having people under you doing the grunt work.

How closely you micromanage it is a factor as well, though.

This is how I’ve come to think of it. Delegation of the details.

It seems like some kind of technique is needed that maximizes information transfer between huge LLM generated codebases and a human trying to make sense of them. Something beyond just deep diving into the codebase with no documentation.

There's a false dichotomy here between 'deterministic creation' and 'vibing'.

I use Claude all day. It has written, under my close supervision¹, the majority of my new web app. As a result I estimate the process took 10x less time than had I not used Claude, and I estimate the code to be 5x better quality (as I am a frankly mediocre developer).

But I understand what the code does. It's just Astro and TypeScript. It's not magic. I understand the entire thing; not just 'the prompt that started it'.

¹I never fire-and-forget. I prompt-and-watch. Opus 4.7 still needs to be monitored.

In what world do developers "understand" pieces like React, Pandas, or CUDA? Developers only have a superficial understanding of the tools they are developing with.

Some developers do; I usually end up fixing bugs in the OSS I use.