This was weak.
The author's main counter-argument: We have control in the development and progress of AI; we shouldn't rule out positive outcomes.
The author's closing argument: We're going to build it anyway, so some of us should try to build it to be good.
The argument in this post was a) not very clear, b) not well supported, and c) a little unfocused.
Would it persuade someone whose mind is made up that AGI will destroy our world? I think not.
> a) not very clear, b) not well supported, and c) a little unfocused.
Incidentally, this is why I could never get into LessWrong.
The longer the argument, the more time and energy it takes to poke holes in it.