I would encourage my competitors to use AI agents on their codebase as much as possible. Make sure every new feature has it, lots of velocity! Run those suckers day and night. Don't review it, just make sure the feature is there! Then when the music stops, the AI companies hit economic reality and go insolvent, and they are left with no one who understands a sprawling tangled web of code that is 80% AI generated, then we'll see who laughs last.
> they are left with no one who understands a sprawling tangled web of code that is 80% [random people that I can't ask because they don't work here anymore and they didn't care to leave docs or comments] generated, then we'll see who laughs last.
Yes, this matches my experience with codebases before AI was a thing.
Yes, but given a feature that should take say 100 lines of code, the average programmer will write on the order of 100 to 500 lines. If they're a heavy OOP user, maybe they'll write 10 classes that total 2000 lines. Regardless, worst case, it will be within ~2 orders of magnitude of a reasonable solution.
It's not that they're not trying to write the biggest clusterfuck possible and maximize suffering in the world, it's just that there's a human limit on how much garbage they can type out in their allocated time.
This is where AI revolutionizes things. You want 25,000 lines of React? On the backend? And a custom useEffect-backed database? Certainly!
> it's just that there's a human limit on how much garbage they can type out in their allocated time.
Another example where removing friction and constraints is a bad thing.
I think the friction has moved upstream: now it's working on the right thing and specifying what correct looks like. I don't think we are going back to a world where we write code by hand again.
Unless what you want to do isn't well represented in the training set.
Yeah, in the past the limiting factor was the human suffering of the engineer who had to try and fit the sprawling nightmare fuel into their brain.
The machine doesn't suffer. Or if it does nobody cares. People eventually start having panic attacks, the machine can just be reset.
I suspect that the end result is just driving further into the wilderness before reality sets in and you have to call an adult.
Both can be true at the same time: some teams spend a fortune on AI, and the AI investments won't get the expected ROI (bubble collapse). What is sure is that a lot of capacity has been built, and that capacity won't disappear.
What I could see happening in your scenario is the company suffering from diminishing returns as every task becomes more expensive (new features, debugging sessions, library updates, refactoring, security audits, rollouts, infra cost). They could also end up with an incoherent gigantic product that doesn't make sense to their customers.
Both pitfalls are avoidable, but they require focus and attention to detail. Things we still need humans for.
> What is sure is that a lot of capacity has been built, and that capacity won't disappear.
They really are subsidizing what will be an incredibly healthy used server equipment market in a year or two. Can’t wait. My homelab is going to be due for an upgrade.
Qwen3 Coder Next and Qwen3.5-35B-A3B are already very good and can be run on today's higher-end home computers with good speed. Tomorrow's machines will not be slower, and models keep getting more efficient. A good software engineer will still be valuable in tomorrow's world, just not as a software assembler.
Even cutting-edge models are not very good. They are not even at a mediocre level. Don't get me wrong, they are improving, and they are awesome, but they are nowhere near good yet. Vibe-coded projects have more bugs than features, their architecture and design system are terrible, and their tests are completely useless about half the time. If you want a good product you need to rewrite almost everything that's written by LLMs. Probably this won't be the case in a few years, but right now even "very good" LLMs are not very good at all.
With Claude Code now having a /plan mode, you can take your time and deliberate through architecture and design collaboratively, instead of just sending a fire-and-forget prompt. Much less buggy, and it saves time if you keep an eye on the output as you go, guiding it and catching defects, imho.
Not sure why you're being downvoted, this is very much my experience. When it matters (like, customer data is on the line), vibe-coded projects are not just hilariously bad, they put you in legal danger.
We've so far found that Claude code is fine as a kind of better Coverity for uncovering memory leaks and similar. You have to check its work very carefully because about 1 time in 5 it just gets stuff wrong. It's great that it gets stuff right 4 times in 5 and produces natural code that fits into the style of the existing project, but it's nothing earth-shattering. We've had tools to detect memory leaks before.
We had someone attempt to translate one of our existing projects into Rust and the result was just wrong at a fundamental level. It did compile and pass its own tests, so if you had no idea about the problem space you might even have accepted its work.
> Tomorrow's machines will not be slower
The way it's going, the AI hyperscalers are buying such a big portion of the world's hardware that tomorrow's machines may very well get slower per dollar of purchase value.
Not my experience. Current Qwen Coder is noteworthy but still far from good. You can't compare it with current commercial offerings; they are in different leagues.
> Don't review it, just make sure the feature is there!
Bad idea. Use another agent to do automatic review. (And a third agent writing tests.)
Don't forget the architecting and orchestrating agent too!
Multiple agents with different frontier models for best results. Claude Code/Codex shops don't know what they're missing if they never let Gemini roast their designs, code, and formal models.
This.
Claude Code wrote a blog article for me documenting a Gemini interaction that I manually operated. I found it quite interesting: the difference in "personalities", and the gap in quality between Claude's output and Gemini's, is stark.
But still, best to have two sets of eyes.