A good analogy might be synthesized music.
In the early days, the interfaces were so complex and technical that only engineers could use them.
Some of these early musicians were truly amazing individuals; real renaissance people. They understood the theory and had true artistic vision. They knew how to ride the tiger, and could develop great music fairly efficiently.
A lot of others, not so much. They twiddled knobs at random and spent a lot of effort panning for gold dust. Sometimes, they would have a hit, but they wasted a lot of energy on dead ends.
Once the UI improved (as with the release of the Korg M1 workstation), real artists could enter the fray, and that’s when the hockey stick bent.
Not exactly sure what AI’s Korg M1 will be, but I don’t think we’re there, yet.
I have been a lead engineer for a few decades now, responsible for training teams and architecting projects. And I've been working heavily with AI.
I know how to get Claude multi-agent mode to write 2,500 lines of deeply gnarly code in 40 minutes, and I know how to get that code solid. But doing this absolutely pulls on decades of engineering skill. I read all the core code. I design key architectural constraints. I invest heavily in getting Claude to build extensive automated verification.
If I left Claude to its own devices, it would still build stuff! But with me actively in the loop, I can diagnose bad trends. I can force strategic investments in the right places at the right times. I can update policy for the agents.
If we're going to have "software factories", let's at least remember all the lessons from Toyota about continual process improvement, about quality, about andon cords and poke-yoke devices, and all the rest.
Could I build faster if I stopped reading code? Probably, for a while. But I would lose the ability to fight entropy, and entropy is the death of software. And Claude doesn't fight entropy especially well yet, not all by itself.
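To make the "automated verification" point concrete: here is a minimal sketch of the kind of entropy guard you might have an agent (or CI) run after every change. Everything here is hypothetical — the file layout, the budgets, and the function name are illustrative, not anyone's actual tooling.

```python
# Hypothetical "entropy guard": fail loudly when modules grow past an
# agreed size budget, so bad trends get flagged before they compound.
from pathlib import Path

MAX_LINES_PER_FILE = 500  # illustrative budget per module


def oversized_files(root: str) -> list[tuple[str, int]]:
    """Return (path, line_count) pairs for Python files over the budget,
    largest first."""
    offenders = []
    for path in Path(root).rglob("*.py"):
        n = sum(1 for _ in path.open(encoding="utf-8"))
        if n > MAX_LINES_PER_FILE:
            offenders.append((str(path), n))
    return sorted(offenders, key=lambda t: -t[1])
```

A check like this is crude, but the point is that it runs unattended: the agent can churn out code all day, and the guard keeps pulling the size distribution back toward something a human can still read.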
What I've found out is that a lot of people don't actually care. They see it work and that's that. It's impossible to convince them otherwise. The code can be absolutely awful but it doesn't matter because it works today.
That's been my experience, too.
I have been able to write some pretty damn ambitious code, quickly, with the help of LLMs, but I am still really only using it for developing functions, as opposed to architectures.
But just this morning, I had it break up an obese class into components. It did really well. I still need to finish testing everything, but it looks like it nailed it.
I like the analogy but I think you are underestimating how much random knob twiddling there is in all art.
Francis Bacon and The Brutality of Fact is a wonderful documentary that goes over this. Bacon's process was that he painted every day for a long time, kept the stuff he liked and destroyed the crap. You are just not seeing the bad random knob twiddling he did.
Picasso is even better. Picasso had some 100,000 works. If you look at a book that really digs into the more obscure stuff, so much of Picasso is half-finished random knob twiddling garbage. Stuff that would be hard to guess is even by Picasso. There is this myth of the genius artist, with all the great works being a translation of a fully formed vision to the medium.
In contrast, even the best music from musical programming languages is not that great. The actually good stuff is so very thin because there is just so much effort involved in the creation.
I would take the analogy further: vibe coding in the long run probably develops into the modern DAW, while writing C by hand is like playing Paganini on the violin. Seeing someone play Paganini in person makes it laughable that the DAW could replace a human playing the violin at a high level. The problem, though, is that the DAW over time changes music itself, and people's relation to music, to the point that playing Paganini in person on the violin becomes a very niche art form with almost no audience.
I read the argument on here ad nauseam about how playing the violin won't be replaced and that argument is not wrong. It is just completely missing the forest for the trees.
I think this is very well stated. I’m gonna say something that’s far more trite, but what I’ve noticed is that, in an effort to get a better result with AI-assisted coding, I have to throw away any concept I previously held about good code hygiene and style. I want it to be incredibly verbose. I want everything explicitly asserted. I want tests and hooks for every single thing. I want it to be incredibly hard for the human to work directly on it …
I think we are. I'm helping somebody who has a non-technical background, taught himself how to vibe code, and built a thing. The code is split into two GitHub repos when it should have been one, and one of the repos is named hetzner-something because that's what he's using and he "doesn't really understand tech shit."
That sounds a lot like “twiddling knobs at random,” to me.
Exactly. The fact that an LLM isn't very good at helping you fix basic organizational issues like this is emblematic. Quoting the article: "We have automated coding, but not software engineering."
> Sometimes, they would have a hit, but they wasted a lot of energy on dead ends.
We'll see which one it is in a few months.
Common sense.
If you can use an imperfect tool perfectly, you’ll beat people who use it imperfectly. As long as the tool is imperfect, you won’t have much competition.
That’s where we are, right now. Good engineers are learning how to use clunky LLMs. They will beat out the Dunning-Kruger crew.
Once the tool becomes perfect, then that allows less-technical users into the tent, which means a much larger pool of creativity.
Synths don't generate music, they generate tones. The analogy would be a program that generates really good programming languages.