> When you spend two years making useless Arduino projects, you develop instincts about electronics, materials, and design that you can’t get from a tutorial. When vibe coding goes straight to production, you lose that developmental space. The tool is powerful enough to produce real output before the person using it has developed real judgment.

The crux of the problem. The only way to truly know is to get your hands dirty. There are no shortcuts, only future liabilities.

Then again, sophisticated manufactured electronics had long been cheap and available by the time somebody thought to create Arduino as a platform in the first place.

And even today, people hack on assembly and ancient mainframe languages and demoscene demos and Atari ROMs and the like (mainly for fun but sometimes with the explicit intention of developing that flavor of judgment).

I predict with high confidence that not even Claude will stop tinkerers from tinkering.

All of our technical wizardry will become anachronistic eventually. Here I stand, Ozymandias, king of motorcycle repair, 16-bit assembly, and radio antennae bent by hand…

Nah.

There are corners of the industry where people still write ASM by hand when necessary, but for the vast, vast majority it's neither necessary (because compilers are great) nor worthwhile (because it's so time consuming).

Most code is written in high-level, interpreted languages with no particular attention paid to its performance characteristics. Despite the frustration of those of us who know better, businesses and users seem to choose velocity over quality pretty consistently.

LLM output is already good enough to produce working software that meets the stated requirements. The tooling used to work with them is improving rapidly. I think we're heading towards a world where actually inspecting and understanding the code is unusual (like looking at JVM/Python bytecode is today).

Future liabilities? Not any more than we're currently producing, but produced faster.

"users seem to choose velocity over quality pretty consistently"

When do they have a real choice, without vendor lock-in or other pressure?

Windows 11 is 4 years old, but until a few months ago it barely managed to overtake Windows 10. That's despite upgrades that were only "by choice" in the most user-hostile sense imaginable (the dark patterns were so misleading I know multiple people who didn't notice they'd "agreed" to it, and since the prompt pops up repeatedly, it only takes a single wrong click to mess up). It doesn't look like people are very excited about the "velocity".

In the gaming industry, AAA titles thrown onto the market in an unfinished state also tend not to go over well with users. But there users have more power to make a choice, since the market is huge and games aren't necessary tools, and such games rarely recover after a failed launch.

Compilers take a formal language and translate it to another formal language. In most cases there is no ambiguity, it’s deterministic, and most importantly it’s not chaotic.

That is, changing one word in the source code doesn't tend to produce a vastly different output, or changes to completely unrelated code.
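This determinism is easy to check in CPython (one concrete illustration, not from the original comment): renaming a local variable changes only the function's names table, while the compiled bytecode stays byte-for-byte identical.

```python
# Two functions that differ only in the name of one local variable.
def f(x):
    total = x + 1
    return total * 2

def g(x):
    result = x + 1  # same logic, one identifier renamed
    return result * 2

# Locals are addressed by index into co_varnames, not by name,
# so the raw bytecode is unaffected by the rename.
print(f.__code__.co_code == g.__code__.co_code)        # identical bytecode
print(f.__code__.co_varnames, g.__code__.co_varnames)  # only the names differ
```

An LLM regenerating code from a reworded prompt offers no such guarantee; that contrast is the point of the comment above.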

Because the LLM is working from informal language, it is by necessity making thousands of small (and not so small) decisions about how to translate the prompt into code. There are far more decisions here than can reasonably be fixed in tests/specs. So any change to the prompt/spec is likely to result in unintended changes to observable behavior that users will notice and be confused by.

You’re right that programmers regularly churn out unoptimized code. But that’s very different from churning out a bubbling morass where every little thing that isn’t bolted down is constantly changing.

The ambiguity in translation from prompt to code means that the code is still the spec and needs to be understood. Combine that with prompt instability and we’ll be stuck understanding code for the foreseeable future.

[deleted]

You're absolutely right -- that's the crux of the problem. There are no shortcuts, only future liabilities.

If you didn't catch it, this is a joke calling out the comment above it for using a couple obvious LLM-isms. The comment above may have been a joke, too. It's hard to tell any more.

> It's hard to tell any more.

Wait, I think I have the answer!

"You're in a desert, walking along in the sand when all of a sudden you look down and see a tortoise. It's crawling toward you. You reach down and flip the tortoise over on its back. The tortoise lays on its back, its belly baking in the hot sun, beating its legs trying to turn itself over. But it can't. Not without your help. But you're not helping. Why is that?"

What do you mean I'm not helping?

Hm... Why? Ah! Because you are also a tortoise

Tortoises have been observed righting other tortoises that have become stuck. https://www.youtube.com/shorts/DZ57D608fiM (two tortoises helping a third) has a terrible voiceover, but you get the idea.

> You're absolutely right

Bot detected

But crucially they used "--" and not "—" which means they're safe. Unless it's learning. I may still be peeved that my beloved em dash has been tainted. :(

Of course they'll learn. LLM bots have been spotted on HN using that hipster all lower case style of writing.

i can write like this if i want. or if i were a clever ai bot.

No need to be clever, just add the instruction to write in that way.

Never admit when someone else is right. They'll forget they were right and begin to think they won a fight.

Or something. You're right.

I think that's the joke.

I found the key insight -- when a human tries to sound like an LLM, that's perceived by other humans as humor.

Not sarcasm. Not cynicism. Just pure humor.

Oh my God, this is peak GPT.

The issue is clear

Yep. Increases output but reduces understanding.

Couldn't one rebut that Arduino is plug-and-play without getting your hands dirty in lower-level electronics?

The article addresses this by making the point that prototypes != production. Arduino is great for prototyping (the author's opinion; I have limited experience) but not for production-level manufacturing.

LLMs are effectively (from this article's pov) the "Arduino of coding" but due to their nature, are being misunderstood/misrepresented as production-grade code printers when really they're just glorified MVP factories.

They don't have to be used this way (I use LLMs daily to generate a ton of code, but as a guided rather than autonomous process, which yields wildly different results than a "vibed" approach). But they are, because that's the extent of most people's ability (or desire) to understand them, their role, and their future beyond the consensus and hype.

I think even calling them MVP factories is a bit much. They're demo factories. Minimum Viable Products shouldn't have glaring security vulnerabilities and blatant inefficiency; they just might be missing nice-to-have features.

No, because it isn’t. A plug-and-play Arduino is useless without some level of circuit-building expertise.

Ha, hey, before you code I hope you roll your own silicon, because otherwise it's just shortcuts.

This is such high minded bullshit.

I might be tilting at a strawman of your definition of vibe coding - apologies in advance if so.

But LLM-aided development is helping me get my hands dirty.

Last weekend, I encountered a bug in my Minecraft server. I run a small modded server for my kids and me to play on, and a contraption I was designing was doing something odd.

I pulled down the mod's codebase and the fabric-api codebase (one of the big modding APIs), and within an hour or so I had diagnosed the bug and fixed it. Claude was essential in making this possible. Could I have found the bug and fixed it myself? Almost certainly. Would I have bothered? Of course not. I'd have stuck a hopper between the mod block and the chest, hacked around it, and kept playing.

But, in the process of making this fix, and submitting the PR to fabric, I learned things that might make the next diagnosis or tweak that much easier.

Of course it took human judgment to find the bug, characterize it, test it in-game. And look! My first commit (basically fully written by Claude) took the wrong approach! [1]

Through the review process I learned that calling `toStack` wasn't the right approach, and that we should just add a `getMaxStackSize` to `ItemVariantImpl`. I got to read more of the codebase, I took the feedback on board, made a better commit (again, with Claude), and got the PR approved. [2]

They just merged the commit yesterday. Code that I wrote (or asked to have written, if we want to be picky) will end up on thousands of machines. Users will not encounter this issue. The Fabric team got a free bugfix. I learned things.

Now, again - is this a strawman of your point? Probably a little. It's not "vibe coding going straight to production." Review and discernment intervened to polish the commit, expertise of the Fabric devs was needed. Sending the original commit straight to "production" would have been less than ideal. (arguably better than leaving the bug unfixed, though!)

But having an LLM help doesn't have to mean that less understanding and instinct is built up. For this case, and for many other small things I've done, it just removed friction and schlep work that would otherwise have kept me from doing something useful.

This is, in my opinion, a very good thing!

[1]: https://github.com/FabricMC/fabric-api/pull/5220/changes/3e3...

[2]: https://github.com/FabricMC/fabric-api/pull/5220/changes