Some people actually don't really like to learn new things. If the machine spits out plausible working code, they'd be perfectly happy with that. Personally I think AI is doing a lot more harm than good and I can't wait for the bubble to burst.
I don’t think it’s going to burst the way other people expect. The technology is already out there; when it loses steam, people aren’t suddenly going to stop using it. I predict it’ll be more like the dot-com crash, where the companies that can survive the downturn come out dominant.
It ends like this: every codebase becomes unmaintainable spaghetti after agentic AI spends years on it. Then, once every agent in existence needs a minimum of 24 hours of reading the codebase just to add a simple feature, the software is abandoned.
I believe most codebases were "unmaintainable spaghetti" even before LLMs, though it depends on how you define it.
To me, it means expensive to evolve.
Let those who want to learn go learn. And let those who just want something that works well enough without having to learn get it.
To use an analogy, LLMs are like the Ring of Power in Lord of the Rings. The Ring of Power does not corrupt one nor does it magically turn one evil. Rather, the Ring just serves as a catalyst for what is already inside the bearer.
Many who wore the Ring had pure and righteous intentions. The thought "If I were in power, I would..." is exactly the arrogance and corruption that the Ring amplifies.
So, I cannot agree that it is AI doing the harm. Rather, AI just gives us the power to do the harm, to take the shortcuts, the cheats, etc. that we have always desired. And just like the Ring, I believe much of the harm from LLMs comes from people who started with good intentions, but for whom the power it grants is simply too tempting.