What point have we reached? All I see is HN drowning in insufferable, identical-sounding posts about how everything has changed forever. Meanwhile at work, in a high stakes environment where software not working as intended has actual consequences, there are... a few new tools some people like using and think they may be a bit more productive with. And the jury's still out even on that.
The initial excitement around LLMs has cooled off significantly, the model releases show rapidly diminishing returns if not an outright plateau, and the only vibe-coded software project I've seen get any actual public use is Claude Code, which is riddled with embarrassing bugs its own developers have publicly given up on fixing. The only thing I see approaching any kind of singularity is the hype.
I think I'm done with HN at this point. It's turned into something resembling moltbook. I'll check back in a couple of years; maybe things will have changed a bit around here by then.
> I think I'm done with HN at this point.
On the bright side, this forum is gonna be great fun to read in 2 or 3 years, whether the AI dream takes off or crashes to the ground.
I do not look forward to the day when the public commons is trashed by everyone and their claudebot, though perhaps the segmentation of discourse will be better for us in the long run, given how most social media sites operate.
Same as it was for "blockchain" and NFTs. Tech "enthusiasts" can be quite annoying, until whatever they hype is yesterday's fad. Then they jump on the next big thing. Rinse, repeat.
> The initial excitement around LLMs has cooled off significantly, the model releases show rapidly diminishing returns if not an outright plateau, and the only vibe-coded software project I've seen get any actual public use is Claude Code, which is riddled with embarrassing bugs its own developers have publicly given up on fixing. The only thing I see approaching any kind of singularity is the hype.
I am absolutely baffled by this take. I work in an objectively high stakes environment (Big 3 cloud database provider) and we are finally (post Opus 4.5) seeing the models and tools become good enough to drive the vast majority of our coding work. DevOps and live-site work are harder problems, but even there we see very promising results.
I was a skeptic too. I was decently vocal in arguing that AI could work for single devs but would never scale to large, critical enterprise codebases and systems. I was very wrong.
> I work in an objectively high stakes environment (Big 3 cloud database provider) and we are finally (post Opus 4.5) seeing the models and tools become good enough to drive the vast majority of our coding work
Please name it. If it’s that good, you shouldn’t be ashamed to do so, and we can all judge for ourselves how the quality of the service evolves.
> you shouldn’t be ashamed to do so, and we can all judge for ourselves how the quality of the service evolves.
That's kinda my bar at this point. On YouTube, there are so many talks and other videos about people using technology X to build Y software or manage Z infrastructure. But here all we get is slop, toys that should have been a shell script, or vague claims like the GP's.
Even ed(1) is more useful than what has been presented so far.
I am not in a high stakes environment, and I work on one-person-sized projects.
But for months I have almost entirely stopped writing actual lines of code myself.
The frequency and quality of my releases have improved. I got very good feedback on those releases from my customer base, and the number of bugs reported is no larger than for code I wrote personally.
The only downside is that I no longer know the code inside out. Even if I read it all, it feels like code written by a co-worker.
Feels like code written by a co-worker. No different from working on any decent-sized codebase anywhere.
I've stopped writing code too. Who the fuck wants to learn yet ANOTHER new framework? So much happier with LLM tools.
It's no coincidence HN is hosted by a VC. VC-backed tech is all about boom-bust hype cycles analogous to the lever pull of a giant slot machine.
You have your head in the sand. Anyone making this claim in 2026 hasn’t legitimately tried these tools.
I make mission-critical software for robust multi-robot control, in production, flying real robots every day.
16% of our production codebase is generated by Claude or another LLM.
Just because you can’t do it doesn’t mean other people can’t.
Denial is a river.
CTO at Gambit AI? How generous of you to talk your book while insulting us. At least we know what to avoid.
My guess: Their UASs run modified PX4 firmware.
Do we make UASs?
Please tell me more
Yikes.
The excitement hasn't cooled off where I'm working.
Honestly, I'm personally happy to see so many naysayers online, it means I'm going to have job security a little longer than you folks.