I'm more on the periphery than an insider, but I personally know researchers at all three major labs who were there long before GPT-3. They all care about existential safety, a lot, in the sense that they believe there's a meaningful chance all humans will be dead a decade from now (and that that's a bad thing; unfortunately, there are also people deeply involved who don't think human extinction is a bad thing).
The issue is that they're embedded in capitalism, which drives the labs to push further and faster than is responsible. They (and unfortunately the rest of us) end up in a race where no individual feels they can back off or halt, because if they do, they'll be destroyed.
> unfortunately, there are also people deeply involved who don't think human extinction is a bad thing
You mean at the top labs? Since when isn't that level of misanthropy categorized as having mental health issues?
See e.g. Richard Sutton, who, although not at a top lab, is certainly a very important figure in the field.
Or, if you want someone with concrete influence at a top lab, Larry Page.
/rant
Existential in what sense?
There's one sense in which people are almost moralistic about it: "yup, AI is just superior to humans, nothing we can do about it."
And then there's the sense where the elite class implements mass surveillance and warfare and obsoletes billions of humans of its own volition. Today's AI is already capable enough to execute on that plan (given the proper evil engineering, of course).
There are two ways to "win". One is in an absolute or platonic sense, one that cares about things like values, even in the presence of extreme pushback. The other is in a darwinian sense. No, not in the meme way that, again, feeds back into the narrative that "the things that survive are smarter". The things that survive, survive. It doesn't matter how they get there.
I can agree with the second way. But it gets smuggled in as the first, almost as an attempt to crush any and all resistance preemptively.
AI doesn't need to be capable of, say, pushing the frontier of quantum mechanics to be lethal.
/endrant
Sorry, not really related to your comment, just had to get it out there.
In the context of AI research, there is no question that "existential" means "powerful AI literally kills every human being". It's a mainstream although not universal view among experts in the space that this is a serious possibility.
That's not my point. My point is the moralizing and worshipping around it.
For example: by "powerful", do you mean a mass government surveillance system? That could be implemented with today's AI right now, even if progress stagnated.
It's the "AI is just a superset of all humans, humans are dumb and don't even know themselves, we should just submit" attitude that I'm talking about.
The easiest way to solve a problem is to dissolve it, and say it doesn't actually matter. If you start from the position that humans are useless and don't matter, then sure, you can get absurdities like Roko's basilisk.
If humanity fails, the reason will almost certainly be that, first and foremost, people stopped caring about human problems and deemed humans too stupid to understand themselves, not that AI is, in some objective sense, a superset of all human capability and thus morally deserves to come out on top.
At least in the case of the researchers I mentioned, they have a deeply held, genuine belief that AI will, in the very near term, exceed humans in all intellectual capabilities, and that this poses a bigger risk to human existence than humans simply fucking things up (beyond the fuck-up of competently building a superior being). I would bet that most of them believe us being paperclipped is a more likely bad outcome than a dystopia arising from human control, simply because a human dystopia takes time to implement, even when aided by AI, and time is what we don't have.
By "powerful", I mean a system whose operations humans cannot control or prevent or even reason about, in the same way that the ants in an anthill can't do anything about a construction crew dumping concrete on them to lay a sidewalk. It's got nothing to do with "should submit" or "morally deserves". If the AI system in question is capable enough, it simply won't matter any longer what any human being thinks should happen. (In principle, it also has to be autonomous; in practice, I think OpenClaw has clearly illustrated that any AI system is going to be granted autonomy by someone.)