Is it just me or has there been a wave of delusional people on Hacker News completely neglecting new advancements in technology? The two most common technologies I see having this type of discourse are AI coding and containers.
Either everyone here is a low-level quantum database 5D graphics pipeline developer with a language from the future that AI hasn't yet learned, or some people are in denial.
I'm primarily an embedded firmware developer. Gas/electric power products. Ada codebase, so it's off the beaten path but nothing academic by any stretch of the imagination. I have a comprehensive reference manual that describes exactly how the language should work, and don't need an LLM to regurgitate it to me. I have comprehensive hardware and programming manuals for the MCUs I program that describe exactly how the hardware should work, and don't need an LLM to regurgitate those to me either. I actually really specifically don't want the information transformed: it is engineered to be the way it is, and changing its presentation strips it of a lot of its power.
I deal with way too much torque and way too much electrical energy to trust an LLM. Saving a few minutes here and there isn't worth blowing up expensive prototypes or getting hurt over.
Software development is a spectrum and you're basically on the polar opposite end of the one AI is being used for: sloppy web dev.
I would be willing to live and let live for the sake of being practical, if the tolerance for (and even active drive towards) low-quality slop didn't keep pushing further and further into places it shouldn't. People that accept it in sloppy web dev will accept it in fairly important line-of-business software. People that accept it in fairly important line-of-business software will accept it in IT infrastructure. People that accept it in IT infrastructure will accept it in non-trivial security software. People that accept it in non-trivial security software will accept it in what should be a high-integrity system, at which point real engineers or regulatory bodies hopefully step in to stop the bullshit. When asked, everybody will say they draw the line at security, but the drive towards Worse knows no bounds. It's why we see constant rookie mistakes in every IoT device imaginable.
My actual idealistic position, discounting the practicality, is that it shouldn't be tolerated anywhere. We should be trying to minimize the amount of cheap, born-to-die, plasticky shit in society, not maximize it. Most people going on about "muh feature velocity" are reinventing software that has existed for decades. The next shitty UI refresh for Android or Windows, or the next bad firmware update for whatever device is being screwed up for me, will leave me just as unhappy as the last. The sprint was indeed completed on time, but the product still sucks.
A guided missile should obviously not miss its target. An airliner should obviously never crash. An ERP system should obviously never screw up accounting, inventory, etc., although many people will tolerate that to an unreasonable degree. But my contention is that a phone or desktop's UI should never fail to function as described. A "smart" speaker should never fail to turn on or be controlled. A child's toy should never fail to work in the circumstances a child would play with it.
If it's going to constantly fuck up and leave me unhappy and frustrated, why was it made? Why did I buy it? AI could have brought it to market faster, but for what? Once I noticed this, I did just quit buying/dealing with this junk. I'm an ideologue and maybe even a luddite, but I just don't need that bad juju on my soul. I use and write software that's worth caring about.
The consequences of incorrect code can be severe outside of front-end web development. For front-end web development, if the code is wrong, you see from your browser that your local web app is broken and try to fix it, or ship it anyway if it's a minor UI bug. For critical backend systems, subtle bugs are often discovered in downstream systems by other teams, and can result in financial loss, legal risk, reputational damage, or even loss of life.
It’s totally valid to see a new piece of tech, try it, say it’s not for you, and move on. With LLMs it feels force-fed, and simply saying “eh, I’m good, no thanks” isn’t enough. There’s endless hype and headlines about how it’s going to take our jobs and replace us, plus pressure from management to adopt it.
Some new trends make perfect sense to me and I’ll adopt them. I’ve let some pass me by and rarely regretted it. That doesn’t make me a luddite.
I think it’s just backlash against all the AI hype - I get it, I’m tired of hearing about it too - but it’s already here to stay, and has been for years now. It’s a normal part of development for most people, the same as any new tool that becomes the industry darling. Learn to like it, or at least learn it, because the reality is here whether you like it or not.
The gatekeepers are witnessing the gate opening up more and letting more people in and they don't like that at all.