We said the same thing when 3D printing came out. With any sort of cool tech, we think everybody's going to do it, but most people are not capable of doing it. In college everybody was going to be an engineer, and then they dropped out after the first intro physics or calculus class. A bunch of my non-tech friends were vibe coding some tools with Replit and Lovable, and I looked at their stuff. Yeah, it was neat, but it wasn't going to go anywhere, and if it did go somewhere, they would need to find somebody who actually knows what they're doing. To actually execute on these things takes a different kind of thinking. Unless we get to the stage where it's just a magic genie, lol. Maybe then everybody's going to vibe their own software.

I don't think Claude Code is like 3D printing.

The difference is that 3D printing still requires someone, somewhere, to do the mechanical design work. It democratises printing, but it doesn't democratise invention. I can't use words to ask a 3D printer to make something. You can't really do that with Claude Code yet either. But every few months it gets better at this.

The question is: how good will Claude get at turning open-ended problem statements into useful software? Right now a skilled human + computer combo is the most efficient way to write a lot of software. Left on its own, Claude will make mistakes and suffer from a slow accumulation of bad architectural decisions. But will that remain the case indefinitely? I'm not convinced.

This pattern has already played out in chess and Go. For a few years, a skilled human working in collaboration with an engine could outcompete both computers and unaided humans. But that era didn't last. Now computers play at superhuman levels, and our skills are no longer required. I predict programming will follow the same trajectory.

There are already some companies using fine-tuned AI models for "red team" infosec audits. Apparently they're already pretty good at finding a lot of creative bugs that humans miss. (And apparently they find an extraordinary number of security bugs in code written by AI models.) It seems like a pretty obvious leap to imagine Claude Code implementing something similar before long. Then Claude will be able to do security audits on its own output. Throw that into a reinforcement learning loop, and Claude will probably become better at producing secure code than I am.

There is verification and there is validation.

The first part is making sure you built to your specification; the second is making sure the specification itself was correct.

The second part is going to be the hard part for complex software and systems.

> I can't use words to ask a 3D printer to make something.

You can: the words are in the G-code language.

I mean: you learned foreign languages in school, so you're already used to formulating your request in a different language to make yourself understood. In this case, that language is G-code.
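
And those words really are just short plain-text commands. A minimal sketch in Python of "asking" a printer with them, assuming a typical hobby printer on a USB serial port running Marlin-style firmware (the port, baud rate, and specific commands here are illustrative, not specific to any one machine):

    import serial  # pyserial: talk to the printer over its USB serial port

    # Hypothetical port and baud rate; adjust to match your printer.
    with serial.Serial("/dev/ttyUSB0", 115200, timeout=2) as printer:
        for command in (
            b"G28\n",               # home all axes
            b"M104 S200\n",         # set the hotend temperature to 200 C
            b"G1 X10 Y10 F1500\n",  # move the head to (10, 10) at 1500 mm/min
        ):
            printer.write(command)
            print(printer.readline())  # the firmware answers "ok" when it accepts a command

The catch, of course, is that G-code only describes motion and temperatures; the shape itself still has to be designed and sliced first.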

It's not our current location but our trajectory that is scary.

The walls and plateaus that the reassuring comments keep predicting have not materialized. If this pace holds for another year and a half, things are going to be very different. And the pipeline is absolutely overflowing with specialized compute coming online by the gigawatt for the foreseeable future.

So far the most accurate predictions in the AI space have been from the most optimistic forecasters.

Thank you for posting this.

I'm really tired and exhausted of reading simplistic takes.

Grok is a very capable LLM that can produce decent videos. Why are most of them garbage? Because NOT EVERYONE HAS THE SKILL OR THE WILL TO DO IT WELL!

The answer is taste.

I don't know if they will ever get there, but LLMs are a long ways away from having decent creative taste.

Which means they are just another tool in the artist's toolbox, not a tool that will replace the artist. Same as every other tool before it: amazing in capable hands, boring in the hands of the average person.

Also, if you are a human who has taste, it's very difficult to get an AI to create exactly what you want. You can nudge it, and little by little get closer to what you're imagining, but you're never really in control.

This matters less for text (including code) because you can always directly edit what the AI outputs. I think it's a lot harder for video.

> Also, if you are a human who has taste, it's very difficult to get an AI to create exactly what you want.

I wonder if it would be possible to fine-tune an AI model on my own code. I've probably got about 100k lines of code on GitHub. If I fed all that code into a model, it would probably get much better at programming like me, including matching my commenting style and all of my little obsessions.

Talking about a "taste gap" sounds good. But LLMs seem like they'd be spectacularly good at learning to mimic someone's "taste" in a fine-tune.
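
The data-prep side, at least, is almost trivially easy to sketch. Something like this, assuming the repos are cloned locally under ~/code and that whatever tuning pipeline you use accepts a JSONL file of {"text": ...} records (the paths, the extension filter, and the record format are all assumptions, not any particular vendor's API):

    import json
    from pathlib import Path

    SOURCE_DIR = Path.home() / "code"        # hypothetical: local clones of my repos
    OUTPUT = Path("my_style_dataset.jsonl")  # one JSON record per source file

    count = 0
    with OUTPUT.open("w") as out:
        for path in SOURCE_DIR.rglob("*.py"):  # swap in whatever extensions cover the 100k lines
            text = path.read_text(errors="ignore")
            if text.strip():
                out.write(json.dumps({"text": text}) + "\n")
                count += 1

    print(f"wrote {count} files to {OUTPUT}")

From there, a LoRA-style fine-tune on an open-weight model would at least show whether the commenting style and the little obsessions actually transfer.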

100% correct. Taste is the right term - I avoid using it because I'm not sure many people here actually get what it truly means.

How can I proclaim what I said in the comment above? Because I've spent the past week producing something very high quality with Grok. Has it been easy? Hell no. Could anyone just pick it up and do what I've done? Hell no. It requires things like patience, artistry, taste, etc.

The current tech is soulless in most people's hands, and in this context it should stay confined to a narrow range of uses. The last thing I want to see is low-quality slop infesting the web. But hey, that is not what the model producers want - they want to maximize tokens.

The job of a coder is far from obsolete, as you're saying. It has definitely changed to almost entirely code review, though.

With Opus 4.6 I'm seeing that it copies my code style, which makes code review incredibly easy, too.

At this point, I've come around to seeing that writing code is really just for education so that you can learn the gotchas of architecture and support. And maybe just to set up the beginnings of an app, so that the LLM can mimic something that makes sense to you, for easy reading.

And all that does mean fewer jobs, to me. Two guys instead of six or more.

All that said, there's still plenty to do in infrastructure and distributed systems, optimizations, network engineering, etc. For now, anyway.

You can basically hand it a design, one that might take a FE engineer anywhere from a day to a week to complete, and Codex/Claude will have it coded up in 30 seconds. It might need some tweaks, but it's 80% complete on that first try. I remember stumbling over graphing and charting libraries; it could take weeks to become familiar with all the different components and APIs. Now you can seemingly just tell Codex to take this data and use this charting library, and it'll make it. All you have to do is look at the code. Things have certainly changed.

I figure it takes me a week to turn the output of AI into acceptable code. Sure, there is a lot of code in 30 seconds, but it shouldn't pass code review (even the AI's own review).

For now. Claude is worse than we are at programming. But it's improving much faster than I am. Opus 4.6 is incredible compared to previous models.

How long before those lines cross? Intuitively it feels like we have about 2-3 years before Claude is better at writing code than most - or all - humans.

It might be 80-95% complete, but the remaining 5-20% is either going to take twice the time or be downright impossible.

> You can basically hand it a design

And, pray tell, how are people going to come up with such a design?

Honestly, you could just come up with a basic wireframe in any design software (MS Paint would work), grab a screenshot of a website with a design you like, and tell it "apply the aesthetic from the website in this screenshot to the wireframe", and it would probably get 80% (probably more) of the way there. Something that would have taken me more than a day in the past.

I've been in web design since images were first introduced to browsers and modern designs for the majority of sites are more templated than ever. AI can already generate inspiration, prototypes and designs that go a long way to matching these, then juice them with transitions/animations or whatever else you might want.

The other day I tested an AI by giving it a folder of images, each named to describe the content/use/proportions (e.g., drone-overview-hero-landscape.jpg), told it the site it was redesigning, and it did a very serviceable job that would match at least a cheap designer. On the first run, in a few seconds and with a very basic prompt. Obviously with a different AI, it could understand the image contents and skip that step easily enough.

Not really. What the FE engineer will produce in a week will be vastly different from what the AI will produce. That's like saying restaurants are dead because it takes a minute to heat up a microwave meal.

It does make the lowest common denominator easier to reach though. By which I mean your local takeaway shop can have a professional looking website for next to nothing, where before they just wouldn't have had one at all.

I think exceptional work, AI tools or not, still takes exceptional people with experience and skill. But I do feel like a certain level of access to technology has been unlocked for people who are smart enough but don't have the time or tools to dive into the industry's real tools (Figma, code, data tools, etc.).

The local takeaway shop could have had a professional-looking website for years with Wix, Squarespace, etc. There are restaurant-specific solutions as well. Any of these would be better than vibe coding for a non-tech person. No-code has existed for years, and there hasn't been a flood of bespoke software coming from end users. I find it hard to believe that vibe coding is easier or more intuitive than GUI tooling designed for non-experts...

I think the idea that LLMs will usher in some new era where everyone and their mom are building software is a fantasy.

I more or less agree, specifically on the angle that no-code has existed for years, yet non-technical people still aren't executing on technical products. But I don't think vibe coding is where we'll see this happening; it will be in chat interfaces or GUIs, as the "scaffolding" or "harnesses" mature and someone can just type what they want, then get a deployed product within the day after some back and forth.

I am usually a bit of an AI skeptic, but I can already see that this is within the realm of possibility, even if models stopped improving today. I think we underestimate how technical things like Wix or Squarespace are to a non-technical person; many of those people are skilled business people who could probably work with an LLM agent to get a simple product together.

People keep saying code was never the real skill of an engineer, but rather solving business logic issues and codifying them. Well, people running a business can probably do that too, and it would be interesting to see them work with an LLM to produce a product.

The number of non-technical people in my orbit who could successfully pull up Claude Code and one-shot a basic todo app is zero. They couldn't do it before and won't be able to now.

They wouldn’t even know where to begin!

You go to ChatGPT and say "produce a detailed prompt that will create a functioning todo app", then put that output into Claude Code, and you now have a todo app.

Maybe I'm biased from working in insurance software, but I don't get the feeling that much programming happens where the code can be completely stochastically generated and never reviewed, and that will still be okay with users/customers/governments/etc.

Even if all sandboxing is done right, programs will be depended on to store data correctly and to show correct outputs.

You don't need to draw the line between tech experts and the tech-naive. Plenty of people have the capability but not the time or discipline to execute such a thing by hand.

> To actually execute on these things takes a different kind of thinking

Agreed. Honestly, and I hate to use the tired phrase, but some people are literally just built different. Those who'd be entrepreneurs would have been so in any time period with any technology.

This matches what I see with all my non-tech and even tech co-workers. Honestly, the value-generation leverage I have now is 10x or more than it was before, compared to other people.

HN is an echo chamber of a very small subgroup. The majority of people can't utilize it and need to have this further dumbed down and specialized.

That's why marketing and conversion rate optimization work: it's not all about the technical stuff, it's about knowing what people need.

For VC-funded companies the game was often not much different: the software was just part of the expenses, sometimes a large part, sometimes a smaller one. Eventually you could just buy the software you needed, but that didn't guarantee success. There were dramatic failures and outstanding successes, and I wish it weren't so, but most of the time the codebase was not the deciding factor. (Sometimes it was - Airtable, Twitch, etc., bless the engineers - but I don't believe AI would have solved those problems.)

> The majority of people can’t utilize it

Tbh, depending on the field, even this crowd will need further dumbing down. Just look at the blog illustration slop - 99% of it is just terrible, even when the text is actually valuable. That's because people's judgement of value outside their field of expertise is typically really bad. A trained cook can look at some ChatGPT recipe and go "this is stupid and it will taste horrible", whereas the average HN techbro/nerd (like yours truly) will think it's great - until they actually taste it, that is.

Agreed. This place amazes me with how overly confident some people feel stepping outside of their domains. The mistakes I see here when people talk about subject areas like corporate finance, valuation, etc. are hilarious. Truly hilarious.

3 things

1) I don’t disagree with the spirit of your argument

2) 3D printing has higher startup costs than code (you need to buy the damn printer)

3) YOU are making a distinction when it comes to vibe coding by non-tech people. The way these tools are being sold, and the way investments are being made, is based on non-domain people developing domain-specific taste.

This last part - the "reasonable" argument - ends up serving as a bait and switch, shielding these investments. I might be wrong, but your comment doesn't indicate that you believe the hype.