In the context of AI research, there is no question that "existential" means "powerful AI literally kills every human being". It's a mainstream although not universal view among experts in the space that this is a serious possibility.
That's not my point. My point is the moralizing and worshipping around it.
For example - by powerful, do you mean a mass government surveillance system? That can be implemented by AI of today right now, even if AI stagnated.
It's the attitude I'm talking about: the argument that goes "AI is just a superset of all humans, humans are dumb and don't even know themselves, so we should just submit."
The easiest way to solve a problem is to dissolve it, and say it doesn't actually matter. If you start from the position that humans are useless and don't matter, then sure, you can get absurdities like Roko's basilisk.
If humanity fails, the reason will almost certainly be that first and foremost, people stopped caring about human problems and deemed them too stupid to understand themselves, not because AI is, in some objective sense, a superset of all human capability and thus morally deserves to come out on top.
At least in the case of the researchers I mentioned, they have a deeply held, genuine belief that AI will, in the very near term, exceed humans in all intellectual capabilities, and that poses a bigger risk to human existence than humans simply fucking things up (beyond the fuck up of competently building a superior being). I would bet that most of them believe that us being paperclipped is a more likely bad outcome than a dystopia arising from human control. Simply because a human dystopia takes time to implement, even when aided by AI, which is time we don't have.
By "powerful", I mean a system whose operations humans cannot control or prevent or even reason about, in the same way that the members of an anthill can't do anything about a construction crew dumping concrete on them to lay a sidewalk. It's got nothing to do with "should submit" or "morally deserves". If the AI system in question is capable enough, it simply won't matter any longer what any human being thinks should happen. (In principle, it also has to be autonomous; in practice, I think OpenClaw has clearly illustrated that any AI system is going to be granted autonomy by someone.)