At least in the case of the researchers I mentioned, they have a deeply held, genuine belief that AI will, in the very near term, exceed humans in all intellectual capabilities, and that this poses a bigger risk to human existence than humans simply fucking things up (beyond the fuck-up of competently building a superior being). I would bet that most of them believe us being paperclipped is a more likely bad outcome than a dystopia arising from human control, simply because a human dystopia takes time to implement, even when aided by AI, and time is exactly what we don't have.