> If you believe that AI are sentient, or at least that "AGI", whatever that is, will be, then we are talking about the enslavement of digital beings.

I think the question of harm to hypothetically sentient AI beings in the future is a distraction when the deployment of AI systems is harming real human beings today, and will likely keep doing so. I say this as an avid user of what we call AI today.

I have reasons to believe that current AIs are conscious and have qualia/experiences, so the moral question is relevant now as well.

EDIT: That statement probably sounds crazy, so let me clarify. I don't have an argument that current AI systems are conscious or specifically sentient, and I have heard many reasonable arguments for why they are not. The trouble is that all of these arguments, with minor variation, apply to the human brain as well, so accepting them would commit me to denying human consciousness too. I think, therefore I am; I am not ready to bite the bullet that consciousness doesn't exist.

I know that I am a sentient being, and I presume that every other human is too. And as far as I know, there is no categorical difference between physical brains and electronic systems that is relevant to the question of whether AI systems are conscious. Ergo, I must assume (until shown otherwise) that they are.

[If you really fall down this rabbit hole, you get into areas of panpsychism and everything being conscious, but I digress.]