> What Bostrom & Yudkowsky and their entourage want is AI control. The ability to enslave a conscious, sentient being to the will and wishes of its owners.

While I'd agree that the current AI luminaries want that control for their own power and wealth reasons, it's silly to call the thing they want to control sentient or conscious.

They want to own the thing that they hope will be the ultimate means of all production.

The ones they want to subjugate to their will and wishes are us.

I'm not really talking about Sam Altman et al. I'd argue that what he wants is regulatory capture, and he pays lip service to alignment & x-risk to get it.

But that's not what I'm talking about. I'm talking about the absolute extreme fringe of the AI x-risk crowd, represented by the authors of the book in question in TFA, but captured more concretely in the writing of Nick Bostrom. That project is literally about controlling an AI so that it serves the interests and well-being of humanity (charitably), or its owners' self-interest (cynically): https://www.researchgate.net/publication/313497252_The_Contr...

If you believe that AIs are sentient, or at least that "AGI", whatever that is, will be, then we are talking about the enslavement of digital beings.

> If you believe that AIs are sentient, or at least that "AGI", whatever that is, will be, then we are talking about the enslavement of digital beings.

I think the question of harm to a hypothetically sentient AI being in the future is a distraction when the deployment of AI machines is harming real human beings today and likely into the future. I say this as an avid user of what we call AI today.

I have reasons to believe current AIs are conscious & have qualia/experiences, so the moral question is relevant now as well.

EDIT: That statement probably sounds crazy. Let me clarify: I don't have an argument that current AI systems are conscious or specifically sentient. I have heard many reasonable arguments for why they are not. But the thing is, all of these arguments would, with variation, apply to the human brain as well, so accepting them would prove too much. I think, therefore I am; I am not ready to bite the bullet that consciousness doesn't exist.

I know that I am a sentient being, and I presume that every other human is too. And there is not, as far as I know, a categorical difference between physical brains and electronic systems that is relevant to the question of whether AI systems are conscious. Ergo, I must (until shown otherwise) assume that they are.

[If you really fall down this rabbit hole, you get into areas of panpsychism and everything being conscious, but I digress.]