> ML models will hurt innocent people.

Lots of blaming LLMs here, but I think the root cause lies elsewhere. I’m not even sure whether dismissing it as “capitalism” or “profit motives” would do it justice; in general, it feels more like the world we live in lacks humanity.

Even in a capitalist world, a company could take a stance and decide not to purposefully screw people over, but in the world we actually live in, they instead look for ways to screw people over more effectively and extract more money from them. It doesn’t matter whether your customer support is handled by someone in India, a crappy telephone tree, or some voice model, when the incentive is the same - to do the bare minimum for customer “support” (in practice, just getting you to fuck off). Same for handling insurance claims and “dynamic pricing” - it doesn’t matter whether it’s some proprietary algorithm or just an LLM making crap up when the goal is to screw you over.

Blaming “AI” for all of this would be barking up the wrong tree (without that tech they’d just find other ways), though one can definitely acknowledge that the technology provides another convenient scapegoat, the same way you can lay employees off and just say it’s because of AI when in actuality it’s greed and wanting to make your books look better.

A lot of this has been going on for a long time, and I've been sensitive to it. LLMs may not be solely responsible, but they're a massive escalation.

In a capitalist world, the company that does decide to screw people over gets rich and the one that doesn't goes out of business.

It would be great if people chose not to do business with the former, but many simply do not care. They may think only other people get screwed. They may not take the time to think about it, especially if the company spends a ton of money obfuscating its misbehavior. Quite a few actively defend the right of companies to screw them.

Technology acts as a lever, multiplying that effect. We weren't prepared for capitalism before LLMs, and we're massively under-prepared now.

The articles didn’t blame LLMs; they talked about how LLMs would get used, precisely through the lens of systems, incentives, and culture.