HN has a base of strong anti-AI bias, which I assume is partially motivated by insecurity over being replaced, losing their jobs, or having missed the boat on AI.
I use AI every day. Without oversight, it does not work well.
If it doesn't work well, I will do it myself, because I care that things are done well.
None of this is me being scared of being replaced; quite the opposite. I'm one of the last generations of programmers who learned how to program and can debug and fix the mess your LLM leaves behind when you forget to add "make sure it's a clean design and works" to the prompt.
Okay, that's maybe hyperbole, but sadly only a little bit. LLMs make me better at my job, they don't replace me.
Based on the comments here, it's surprising anything in society works at all. I didn't realize the bar was "everything perfect every time, perfectly flexible and adaptable". What a joy some of these folks must be to work with, answering every new technology with endless reasons why it's worthless and will never work.
I think perhaps you underestimate how antithetical the current batch of LLM AIs is to what most programmers strive for every day, and what we want from our tools. It's not about losing our jobs, it's about correctness (or, as said below, determinism).
In a lot of jobs, particularly in creative industries, or in marketing, media, and writing, the definition of a job well done is a fairly grey area. I think AI will be mostly disruptive in these areas.
But in programming there is a hard minimum of quality. Given a set of inputs, does the program return the correct answer or not? When you ask it what 2 + 2 is, do you get 4?
When you ask AI anything, it might be right 50% of the time, or 70% of the time, but you can't blindly trust the answer. A lot of us just find that not very useful.
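To make the "hard minimum" concrete, here's a minimal sketch (Python; ask_llm is a hypothetical stand-in for any model call, not a real API):

    # Deterministic code: correctness is binary and mechanically checkable.
    def add(a: int, b: int) -> int:
        return a + b

    assert add(2, 2) == 4  # passes every run, forever

    # A model answer carries no such guarantee. ask_llm() is hypothetical;
    # even an oracle that's right 70% of the time forces you to verify
    # every answer before trusting it.
    # answer = ask_llm("What is 2 + 2?")
    # assert answer.strip() == "4"  # might fail on any given run

The verification overhead is the whole point: a deterministic function clears the bar once, while a probabilistic one makes you re-check it every time.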
I am a SWE myself and use LLMs to write ~100% of my code. That does not mean I fire and forget multiplexed codex instances; many times I step through and approve every edit. Even if it were nothing but a glorified stenographer, there are substantial time savings in being able to prototype and validate ideas quickly.
> But in programming there is a hard minimum of quality. Given a set of inputs, does the program return the correct answer or not? When you ask it what 2 + 2 is, do you get 4?
Whether something works or not matters less than whether someone will pay for it.
Most of the time when using AI, I have a lot more than one shot to ensure everything is correct.
> HN has a base of strong anti-AI bias
HN constantly points out the flaws, gaps, and failings of AI. But the same is true of any technology discussed on HN. You could describe HN as having an anti-technology bias, because HN complains about the failings of tech all day every day.
> HN has a base of strong anti-AI bias
Quite the opposite, actually. You can always find five stories on the front page about some AI product or feature. Meanwhile, you have people like yourself who convince themselves that any pushback comes from people who just don't see the true value of it yet and are about to miss out! Some kind of attempt at spreading FOMO, I guess.
> HN has a base of strong anti-AI bias
If anything, HN has a pro-AI bias. I don't know of any other medium where discussions about AI consistently get this much front-page time, this much discussion, and this many people reporting positive experiences with it. It's definitely true that HN isn't the raging pro-AI hype train it was two years ago, but that shouldn't be mistaken for "strong anti-AI bias".
Outside of HN I am seeing, at best, an ambivalent reaction: plenty of people are interested, almost everyone tried it, very few people genuinely like it. They are happy to use it when it is convenient, but couldn't care less if it disappeared tomorrow.
There's also a small but vocal group which absolutely hates AI and will actively boycott any creative-related company stupid enough to admit to using it, but that crowd doesn't really seem to hang out on HN.
> but couldn't care less if it disappeared tomorrow.
I wonder how true that is. Some things become incorporated into your life so subtly that you only become aware of them when they're switched off entirely.
> There's also a small but vocal group which absolutely hates AI and will actively boycott any creative-related company stupid enough to admit to using it, but that crowd doesn't really seem to hang out on HN.
I do, but I certainly feel like I'm in the minority here.
I really don’t think this is accurate. I think the median opinion here is to be suspicious of claims made about AI, and I don’t think that’s necessarily a bad thing. But I also regularly see posts talking about AI positively (e.g. simonw), or talking about it negatively. I think this is healthy; it's nice to have a diversity of opinions on a technology. It's a feature, not a bug.
HN has an obsession with quality too, which has merit, but is often economically irrelevant.
When US-East-1 failed, lots of people talked about how the lesson was cloud agnosticism and multi-cloud architecture. The practical economic lesson for most is that if US-East-1 fails, nobody will get mad at you. Cloud failure is viewed as an act of god.
Anti-AI bias is motivated by the waste of natural resources due to a handful of non-technical douchebag tech bros.
Not everything is about money; I know status and power are all you AI narcissists dream about. But you'll never be Bill Gates, nor will you be Elon Musk.
Once AI has gone the way of "Web3", "NFTs", "blockchain", "3D TVs", etc., you'll find a new grift to latch your life savings onto.