>I don’t really understand the “more, better, faster” cachet, to be honest. Writing the code hasn’t been the bottleneck in developing software for a long time. It’s usually the thinking that takes most of the time, and if that goes away, well… I dunno, that’s weird. I will understand it even less.

This is what I've always found confusing as well about this push for AI. The act of typing isn't the hard part - it's understanding what's going on, and why you're doing it. Using AI to generate code is only faster if you try to skip that step - which leads to an inevitable disaster.

> The act of typing isn't the hard part - it's understanding what's going on, and why you're doing it. Using AI to generate code is only faster if you try to skip that step - which leads to an inevitable disaster.

It’s more than just typing though. A simple example: remembering the exact incantation of CSS classes to style something that you can easily describe in plain English.

Yes, you could look them up or maybe even memorize them. But there’s no way you can make wholesale changes to a layout faster than a machine.

It lowers the cost of experimentation. A whole series of “what if this was…” can be answered with an implementation in minutes. Not a whole afternoon sunk into one idea that you then feel compelled to keep.

> It’s more than just typing though. A simple example: remembering the exact incantation of CSS classes to style something that you can easily describe in plain English.

Do that enough and you won't know enough about your codebase to recognise errors in the LLM output.

IMO the question is: do you still need to understand the codebase? What if that process changes and the language you’re reading is a natural one instead of code?

> What if that process changes and the language you’re reading is a natural one instead of code?

Okay, when that happens, then sure, you don't need to understand the codebase.

I have not seen any evidence that that is currently the case, so my observation that "Continue letting the LLM write your code for you, and soon you won't be able to spot errors in its output" is still applicable today.

When the situation changes, then we can ask if it is really that important to understand the code. Until that happens, you still need to understand the code.

The same logic applies to your statement:

> Do that enough and you won't know enough about your codebase to recognise errors in the LLM output.

Okay, when that happens, then sure, you'll have a problem.

I have not seen any evidence that that is currently the case, i.e. I have no problems correcting LLM output when needed.

When the situation changes, then we can talk about pulling back on LLM usage.

And the crucial point is: me.

I'm not saying that everyone who uses LLMs to generate code will avoid falling into "not able to spot errors in LLM-generated code".

I now generate 90% of my code with LLMs and I see no issues so far. Just implementing features faster. Fixing bugs faster.

You do have a point but as the sibling comment pointed out, the negative eventuality you are describing also has not happened for many devs.

I quite enjoy being much more of an architect than I could be during 90% of my career so far (24 years in total). I have coded my fingers and eyes out, and I spot idiocies in LLM output, ranging from trivially easy to catch to needing an hour of careful review.

So, I don't see the "soon" in your statement happening, ahem, anytime soon for me, and for many others.

What happens when your LLM of choice goes on an infinite loop failing to solve a problem?

What happens when your LLM provider goes down during an incident?

What happens when you have an incident on a distributed system so complex that no LLM can maintain a good enough understanding of the system as a whole in a single session to spot the problem?

What happens when the LLM providers stop offering loss leader subscriptions?

AFAIK everything I use has timeouts, retries, and some way of throwing up its hands and turning things back to me.

I use several providers interchangeably.

I stay away from overly complex distributed systems and use the simplest thing possible.

I plan to wait for some guys in China to train a model on traces that I can run locally, benefitting from their national “diffusion” strategy and lack of access to bleeding-edge chips.

I’m not worried.
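The timeouts-retries-and-give-up behavior described above can be sketched as a small wrapper. This is a minimal illustration, not any particular tool's implementation; `call_with_retries` and the `flaky` provider are hypothetical names:

```python
import time

def call_with_retries(fn, retries=3, backoff=0.5):
    """Call fn(); on failure, retry with exponential backoff.
    After the last attempt, re-raise so control returns to the human."""
    delay = backoff
    for attempt in range(retries):
        try:
            return fn()
        except Exception:
            if attempt == retries - 1:
                raise  # throw up our hands and hand things back
            time.sleep(delay)
            delay *= 2

# Hypothetical flaky provider: fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("provider down")
    return "ok"

result = call_with_retries(flaky, backoff=0.01)
```

Swapping in a second provider when the first exhausts its retries is the "use several providers interchangeably" part of the same pattern.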

> What if that process changes and the language you’re reading is a natural one instead of code?

Natural language is not a good way to specify computer systems. This is a lesson we seem doomed to forget again and again. It's the curse of our profession: nobody wants to learn anything if it gets in the way of the latest fad. There's already a historical problem in software engineering: the people asking for stuff use plain language, and there's a need to convert it to a formal spec, and this takes time and is error-prone. But it seems we are introducing a whole new layer of lossy interpretation to the whole mess, and we're doing this happily and open-eyed because fuck the lessons of software engineering.

I could see LLMs being used to check/analyze natural language requirements and help turn them into formal requirements though.

> But it seems we are introducing a whole new layer of lossy interpretation to the whole mess (...)

I recommend you get acquainted with LLMs and code assistants, because a few of your assertions are outright wrong. Take for example any of the mainstream spec-driven development frameworks. All they do is walk you through the SRS process using a set of system prompts to generate a set of documents featuring use cases, functional requirements, and refined tasks in the form of an actionable plan.

Then you feed that plan to an LLM assistant and your feature is implemented.

I seriously recommend you check it out. This process is far more structured and thought through than any feature work that your average SDE ever does.

> I recommend you get acquainted with LLMs and code assistants

I use them daily, thanks for your condescension.

> I could see LLMs being used to check/analyze natural language requirements and help turn them into formal requirements though.

Did you read this part of my comment?

> Take for example any of the mainstream spec-driven development frameworks. All they do is walk you through the SRS process using a set of system prompts to generate a set of documents featuring use cases, functional requirements, and refined tasks in the form of an actionable plan.

I'm not criticizing spec-driven development frameworks, but how battle-tested are they? Does it remove the inherent ambiguity in natural language? And do you believe this is how most people are vibe-coding, anyway?

> Did you read this part of my comment?

Yes, and your comment contrasts heavily with the reality of using LLMs as code assistants, as conveyed in comments such as "a whole new layer of lossy interpretation". This is profoundly wrong, even if you use LLMs naively.

I repeat: LLM assistants have been used to walk users through software requirements specification processes that not only document exactly what use cases and functional requirements your project must adhere to, but also create tasks and implement them.

The deliverable is both a thorough documentation of all requirements considered up until that point and the actual features being delivered.

To drive the point home, even Microsoft of all companies provides this sort of framework. This isn't an arcane, obscure tool. This is as mainstream as it can be.

> I'm not criticizing spec-driven development frameworks, but how battle-tested are they?

I really recommend you get acquainted with this class of tools, because your question is in the "not even wrong" territory. Again, the purpose of these tools is to walk developers through a software requirements specification process. All these frameworks do is put together system prompts to help you write down exactly what you want to do, break it down into tasks, and then resume the regular plan+agent execution flow.

What do you think "battle tested" means in this topic? Check if writing requirements specifications is something worth pursuing?

I repeat: LLM assistants lower the cost of formal approaches to the software development lifecycle by orders of magnitude, to the point you can drive each and every single task with a formal SRS doc. This isn't theoretical, it's month's old stuff. The focus right now is to remove human intervention from the SRS process as well with the help of agents.

> Yes, and your comment contrasts heavily with the reality of using LLMs as code assistants, as conveyed in comments such as "a whole new layer of lossy interpretation". This is profoundly wrong, even if you use LLMs naively.

Most people, when told they sound condescending, try to reframe their argument in order to remove this and become more convincing.

Sadly, you chose to double down instead. Not worth pursuing.

> This isn't theoretical, it's month's old stuff

Hahaha! "Months old stuff"!

Disengaging from this conversation. Over and out.

That's a bold assertion without any proof.

It also means you're so helpless as a developer that you could never debug another person's code, because how would you recognize errors you haven't made yourself?

> It lowers the cost for experimentation. A whole series of “what if this was…”

Anecdotal, but I've noticed that while this is true, it also adds the danger of not knowing when to stop.

Early on I would take forever trying to get something exactly to what's in my head. Which meant I would spend more time in one sitting than if I had built it by hand.

Now I try to time box with the mindset "good enough".

> But there’s no way you can make wholesale changes to a layout faster than a machine.

You lost me here. I can make changes very quickly once I understand both the problem and the solution I want to go with. Modifying text is quite easy. I spend very little time doing it as a developer.

This is not correct. CSS comprises the style rules for all rendering situations of that HTML, not just your single requirement that it "looks about right" in your narrow set of test cases.

Nobody writing production CSS for a serious web page can avoid rewriting it. Nobody is memorizing anything. It's deeply intertwined with the requirements as they change. You will eventually be forced to review every line of it carefully as each new test is added or when the HTML is changed. No AI is doing that level of testing or has the training data to provide those answers.

It sounds like you're better off not using a web page at all if this bothers you. This isn't a deficiency of CSS. It's the main feature. It's designed to provide tools that can cover all cases.

If you only have one rendering case, you want an image. If you want to skip the code, you can just not write code. Create a mockup of images and hand it off to your web devs.

Eh, I've written so much CSS, and I hate it so much, that I use AI to write it now - not because it's faster or better at it, but just so I don't have to do it.

So AI is good for CSS? That’s fine, I always hated CSS.

Don't worry. In a few years we'll be like the COBOL programmers who still understand how things work, our brains haven't atrophied, and we make good money fixing the giant messes created by others.

Sounds awful. I'm not interested in fixing giant messes. I'll just be tinkering away making little things (at scale) where the scope is very constrained and the fixing isn't needed.

People can do their vibecoding to make weird rehackings of stuff I did, almost always to make it more mainstream, limited, and boring, and usually to some mainstream acclaim. And they can flame out, not my problem.

I'm not fixing anybody's giant mess. I'm doing the equivalent of simply refusing to give up COBOL. To stop me, people will have to EOL a huge amount of working useful stuff for no good reason and replace it with untrustworthy garbage.

I am aware this is exactly the plan on so many levels. Bring it. I don't think it's going to be popular, or rather: I think only at this historical moment can you get away with that and not immediately be called on it, as a charlatan.

When our grandest celebrity charlatans go in the bin, the time for vibecoding will truly be over.

AI doesn't just type code for you. It can assist with almost every part of software development: design, bug hunting, code review, prototyping, testing.

It can even create a giant ball of mud ten times faster than you can.

A Luddite farm worker can assist in all those things, the question is, can it assist in a useful manner?

Not only can it, it does.

Just as I was reading this, Claude implemented drag & drop of images out of SumatraPDF.

I asked:

> implement dragging out images; if we initiate drag action and the element under cursor is an image, allow dragging out the image and dropping on other applications

then it didn't quite work, so I followed up:

> I'm testing it by trying to drop on a web application that accepts dropped images from file system but it doesn't work for that

Here's the result: https://github.com/sumatrapdfreader/sumatrapdf/commit/58d9a4...

It took me less than 15 mins, with testing.

Now you tell me:

1. Can a farm worker do that?

2. Can you improve this code in a meaningful way? If you were doing a code review, what would you ask to be changed?

3. How long would it take you to type this code?

Here's what I think: No. No. Much longer.

The code is really bad, so I'd have a lot to say about it in a review. Couldn't do it in 15 minutes, though.

Why is it using a temp file? Is there really no more elegant way to pass around pointers to images than spilling to disk?

Of course there is, but slop generators be slopping

What is it, O wise person stingy with the information?

I admire you for what you've created wrt Sumatra. It's an excellent piece of software. But, as a matter of principle, I refuse to knowingly contribute to codebases using AI to generate code, including drive-by hints, suggestions, etc.

You, or rather Claude, are not the first to solve this problem and there are examples of better solutions out there. Since you're willing to let Claude regurgitate other people's work, feel free to look it up yourself or have Claude do it for you.

It always seemed to me like lootbox behavior: highly addictive for the dopamine hit you get.

"This is what I've always found confusing as well about this push for AI."

I think it's a few things converging. One is that software developers have become more expensive for US corporations for several reasons and blaming layoffs on a third party is for some reason more palatable to a lot of people.

Another is that a lot of decision makers are pretty mediocre thinkers and know very little about the people they rule over, so they actually believe that machines will be able to automate what software developers do rather than what these decision makers do.

Then there's the ever-present allure of the promise that middle managers will somehow wrestle control over software crafts from the nerds, i.e. what has underpinned low-code business solutions for ages and always, always comes with very expensive consultants, commonly software developers, on the side.

> This is what I've always found confusing as well about this push for AI.

They want you to pay for their tokens at their casino and rack up a 5 - 6 figure bill.

> This is what I've always found confusing as well about this push for AI. The act of typing isn't the hard part - it's understanding what's going on, and why you're doing it.

This is a very superficial and simplistic analysis of the whole domain. Programmers don't "type". They apply changes to the code. Pressing buttons on a keyboard is not the bottleneck. If it were, code completion and templating would have been revolutionary, world-changing developments in the field.

The difficult part is understanding what to do and how to do it, and why. It turns out LLMs can handle all these types of tasks. You are onboarding onto a new project? Hit an LLM assistant with /explain. You want to implement a feature that matches a specific requirement? You hit your LLM assistant with /plan followed by apply. You want to cover some code with tests? You hit your LLM assistant with /tests.

In the end you review the result, and do with it whatever you want. Some even feel confident enough to YOLO the output of the LLM.

So while you still try to navigate through files, others already have features out.