Not sure I understand the last sentence:
> The fundamental challenge in AI for the next 20 years is avoiding extinction.
I think he's referring to AI safety.
https://lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-lis...
For a perhaps easier-to-read intro to the topic, see https://ai-2027.com/
Or read your favorite sci-fi novel, or watch Terminator. This is pure BS by a charlatan.
It's a tell that he's been influenced by rationalist AI doomer gurus. And a good sign that the rest of his AI opinions should be dismissed.
He's referring to humanity, I believe
It's ambiguous. It could go the other way. He could be referring to that oldest of science fiction tropes: the Butlerian Jihad, the human revolt against thinking machines.
Meh. I think the more likely scenario is the financial extinction of the AI companies.