> The change occurred because LLMs are useful for programming in 2025
But the skeptics and anti-AI commenters are almost as active as ever, even as we enter 2026.
The debate about the usefulness of LLMs has grown into almost another culture war topic. I still see a constant stream of anti-AI comments on HN and every other social platform: people who believe the tools are useless and the output always unusable, people who mock any suggestion that operator skill affects LLM output, and even claims that LLMs are a fad that will go away.
I’m a light LLM user (the $20/month plan type of usage), but even when I try to share comments about how I use LLMs or tips I’ve discovered, I get responses full of vitriol and accusations of being a shill.
It absolutely is a culture war. I can easily imagine a less critical version of myself having ended up in that camp. The perspective seems to me to be informed by core values and principles about what "intelligence" is.
I butted heads with many of them early on, and they did nothing to meaningfully challenge that frame. What did change is my perception of the set of tasks that don't require "intelligence". The intuition pump for that is easy to start: I didn't suppose that Deep Blue heralded the dawn of true "AI" either, but chess (and now Go) programs have only grown embarrassingly stronger, even if researchers and puzzle enthusiasts can still find positions that are easier for a human to grok than for a computer.
> from people who believe the tools are useless, the output is always unusable, people who mock any idea that operator skill has an impact on LLM output
You are attacking a strawman. Almost nobody claims that LLMs are useless or that you can never use their output.
Those claims are all throughout this thread and in replies to my comments.
It’s not a strawman. It’s everywhere on HN.
Such as? Currently, the top comments are
> LLMs have certainly become extremely useful for Software Engineers
> LLMs are useful for programming in 2025
> Do LLMs make bad code: yes all the time (at the moment zero clue about good architecture). Are they still useful: yes, extremely so.
If your comment is not a strawman, show me where people actually claim what you say they do.
It's simple. Given the trajectory of these things, people feel under threat and defend themselves accordingly. They say what they hope for, shaped by a number of factors (bad workplaces generating slop they have to deal with, job losses, identity redefinition, etc.): the things that happen when a profession is disrupted in a capitalist system where 'what you do' is often tied up with identity, status, and livelihood.
People will go from skepticism to dread and anxiety, and then to either acceptance or despair. We are witnessing the disruption of a profession in real time, and it will create a number of negative effects.
"Useful for programming" is a massive and dishonest bait and switch.
Lots of things are "useful for programming". Switching to a comfier chair is more useful for programming than any LLM.
We were sold vibe coding, and that's what managers want.