> because I do what (little) I can to slow down AI.

I think you're focusing on the wrong things. AI can be used in harmful ways, but not because these systems are outsmarting human beings, despite all the cult-like hype. In fact, they don't need to be genuinely competent for the rich to exploit the tech in destructive ways. Companies just need to convince the public that the tech is competent enough to give themselves an excuse to cut jobs. Even if AI does a poorer job, it won't matter when consumers have no alternatives, which is unfortunately the case in many situations. We face a much bigger threat from data breaches in vibe-coded apps than from conscious robots manipulating humans through the Matrix.

Just look at Google support. It's a bunch of mindless robots that can kick you out of their platform on a whim. Their "dispute process" is another robot that passive-aggressively ragebaits you. [1][2] These systems are incompetent, yet they help one of the richest companies in the world save money.

Also, let's not forget that Google's AI flagged multiple desperate parents for sharing medical pics of their kids with their doctors. Only when the media got involved did a human being step in, and then only to falsely accuse the parents of being pedos. [3] People were harmed, and competence had nothing to do with it.

An even greater concern is the ability of LLMs to mass-produce spam and troll content with minimal effort. That's a major threat to democracies around the globe, and it turns out demagogues don't need a superintelligence to misuse the tech and cause harm.

There are plenty of real concerns about AI beyond the perpetually "just around the corner" superintelligence. What we need is a push for stronger regulatory protection for workers, consumers, and constituents, not a boycott of MacBooks because of AI.

[1]: https://news.ycombinator.com/item?id=26061935

[2]: https://news.ycombinator.com/item?id=23219427

[3]: https://news.ycombinator.com/item?id=32538805