From the recent New Yorker piece on Sam:
“My vibes don’t match a lot of the traditional A.I.-safety stuff,” Altman said. He insisted that he continued to prioritize these matters, but when pressed for specifics he was vague: “We still will run safety projects, or at least safety-adjacent projects.” When we asked to interview researchers at the company who were working on existential safety—the kinds of issues that could mean, as Altman once put it, “lights-out for all of us”—an OpenAI representative seemed confused. “What do you mean by ‘existential safety’?” he replied. “That’s not, like, a thing.”
No chance an OpenAI spokesperson doesn't know what existential safety is.
I did not read the response as...
>Please provide the definition of Existential Safety.
I read:
>Are you mentally stable? Our product would never hurt humanity--how could any language model?
The absolute gall of this guy to laugh off a question about x-risks. Meanwhile, also Sam Altman, in 2015: "Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity. There are other threats that I think are more certain to happen (for example, an engineered virus with a long incubation period and a high mortality rate) but are unlikely to destroy every human in the universe in the way that SMI could. Also, most of these other big threats are already widely feared." [1]
[1] https://blog.samaltman.com/machine-intelligence-part-1
Why are these people always like this?
Amusing! Even if they believe that, they should know the company communicated the opposite of it earlier.