The naysayers said we’d never even get to this point. It’s far more plausible to me that AI will advance enough to de-slopify our code than that there will be some karmic reckoning in which the graybeards emerge on top again.
What point have we reached? All I see is HN drowning in insufferable, identical-sounding posts about how everything has changed forever. Meanwhile at work, in a high-stakes environment where software not working as intended has actual consequences, there are... a few new tools some people like using and think they may be a bit more productive with. And the jury's still out even on that.
The initial excitement around LLMs has significantly cooled off, model releases show rapidly diminishing returns if not an outright plateau, and the only vibe-coded software project I've seen get any actual public use is Claude Code, which is riddled with embarrassing bugs its own developers have publicly given up on fixing. The only thing I see approaching any kind of singularity is the hype.
I think I'm done with HN at this point. It's turned into something resembling Moltbook. I'll check back in a couple of years, when maybe things will have changed a bit around here.
> I think I'm done with HN at this point.
On the bright side, this forum is gonna be great fun to read in 2 or 3 years, whether the AI dream takes off or crashes to the ground.
I do not look forward to the day when the public commons is trashed by everyone and their claudebot, though perhaps the segmentation of discourse will be better for us in the long run, given how most social media sites operate.
Same as it was for "blockchain" and NFTs. Tech "enthusiasts" can be quite annoying, until whatever they hype is yesterday's fad. Then they jump on the next big thing. Rinse, repeat.
> The initial excitement around LLMs has significantly cooled off, model releases show rapidly diminishing returns if not an outright plateau, and the only vibe-coded software project I've seen get any actual public use is Claude Code, which is riddled with embarrassing bugs its own developers have publicly given up on fixing. The only thing I see approaching any kind of singularity is the hype.
I am absolutely baffled by this take. I work in an objectively high-stakes environment (Big 3 cloud database provider), and we are finally (post Opus 4.5) seeing the models and tools become good enough to drive the vast majority of our coding work. DevOps and livesite are harder problems, but even there we see very promising results.
I was a skeptic too. I was decently vocal about AI working for single devs but never being able to scale to large, critical enterprise codebases and systems. I was very wrong.
> I work in an objectively high-stakes environment (Big 3 cloud database provider), and we are finally (post Opus 4.5) seeing the models and tools become good enough to drive the vast majority of our coding work
Please name it. If it’s that good, you shouldn’t be ashamed to do so, and we can all judge for ourselves how the quality of the service evolves.
I am not in a high-stakes environment and work on one-person-sized projects.
But for months now I have written almost no actual lines of code myself.
The frequency and quality of my releases have improved. I've gotten very good feedback on those releases from my customer base, and the number of bugs reported is no larger than for code I wrote personally.
The only downside is that I no longer know the code inside out. Even if I read it all, it feels like code written by a co-worker.
Feels like code written by a co-worker. No different from working on any decent-sized codebase anywhere.
I've stopped writing code too. Who the fuck wants to learn yet ANOTHER new framework? So much happier with LLM tools.
It's no coincidence HN is hosted by a VC. VC-backed tech is all about boom-bust hype cycles analogous to the lever pull of a giant slot machine.
You have your head in the sand. Anyone making this claim in 2026 hasn’t legitimately tried these tools.
I make mission-critical software for robust multi-robot control, in production, flying real robots every day.
16% of our production codebase is generated by Claude or another LLM.
Just because you can’t do it doesn’t mean other people can’t
Denial is a river
CTO at Gambit AI? How generous of you to talk your book while insulting us. At least we know what to avoid.
My guess: Their UASs run modified PX4 firmware.
Do we make UASs?
Please tell me more
Yikes.
The excitement hasn't cooled off where I'm working.
Honestly, I'm personally happy to see so many naysayers online; it means I'm going to have job security a little longer than you folks.
The AI agents can ALREADY "de-slopify" the code. That's one of the patterns people should be using when coding with LLMs. Keep an agent that only checks for code smells, testability, "slop", scalability problems, etc. alongside whatever agent you have writing the actual code.
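A minimal sketch of that reviewer-agent pattern, assuming the Anthropic Python SDK (the model ID, prompts, and example task below are my own placeholders, not anyone's actual setup):

    # Two-pass sketch: one call writes code, a second call only reviews it,
    # and a third call revises the draft against the review.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    MODEL = "claude-opus-4-5"       # placeholder; substitute whatever model you use

    def ask(system: str, prompt: str) -> str:
        msg = client.messages.create(
            model=MODEL,
            max_tokens=4096,
            system=system,
            messages=[{"role": "user", "content": prompt}],
        )
        return msg.content[0].text

    task = "Write a Python function that parses ISO 8601 dates, with tests."
    draft = ask("You are a coding agent. Output only code.", task)
    review = ask(
        "You are a review agent. List code smells, untestable code, "
        "scalability problems, and slop. Do not rewrite the code.",
        draft,
    )
    final = ask(
        "You are a coding agent. Output only code.",
        f"Revise this code to address the review.\n\nCODE:\n{draft}\n\nREVIEW:\n{review}",
    )

The separation of duties is the point: the review agent never writes code, so it has nothing of its own to rubber-stamp.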
> The naysayers said we’d never even get to this point. It’s far more plausible to me that AI will advance enough to de-slopify our code than that there will be some karmic reckoning in which the graybeards emerge on top again.
"The naysayers"/"the graybeards" have never been on top.
If they had been, many of the things the author here talks about getting rid of never would've been popular in the first place. Giant frameworks? Javascript all the things? Leftpad? Rails? VBA? PHP? Eventually consistent datastores?
History is full of people who successfully made money despite the downsides of all those things because the downsides usually weren't the most important thing in the moment of building.
It's also full of people who made money cleaning it all up when the people who originally built it didn't have time to deal with it anymore. "De-slopify" is going to be a judgment call that someone will need to oversee; there's no one-size-fits-all software pattern, and the person who created the pile of code is unlikely to have the time to drive that process.
Step 1: make money with shortcuts
Step 2: pay people to clean up and smooth out most of those shortcuts
I've already bounced between both roles a lot thanks to the business cycles of startup life. When you're trying to out-scale your competitor you want every edge you can find, and "how does this shit actually work" is going to be one of those edges for making the best decisions about improving cost/reliability/perf/usability/whatever. "It doesn't matter what the code looks like" is still hard to take seriously, just as it was for the last few iterations of people pitching tools with the same claim. The turnaround loop for modifying code is faster now; the risk of a tar pit, of trying to tune a pile of ill-fitting spaghetti on the fly, is not. It's gonna be good enough for a lot of people (Sturgeon's law; e.g. most people aren't great at knowing what usefully testable code looks like). So let's push past today's status quo of software.
If I were working on a boring product at a big tech co I'd be very worried, since many of those companies have been hiring at high salaries for non-global-impact product experiments that don't need extreme scale or shipping velocity. But if you want to push the envelope, the opportunity to write code faster should make you think about what you can do with it that other people aren't doing yet. Things beyond "here's a greenfield MVP of X" or "here's a port of Y."