When someone says something that I think is poorly framed, I often reframe it and speak to that instead. (Lots of people do this, even if they don’t realize it. I’m aware that I do, for better and worse, and I still prefer it; I think it is more authentic. I think one of the best ways we can enrich other people’s lives is by sharing different ways of processing the world. Lots of people get locked into pretty uninteresting narratives.)
So reframe I did. (I don’t think those articles you cited are worth any more attention than I’ve already given them.)
My bluntest editorializing would be this: most people would be better grounded if they read the AI alignment and safety books by Stuart Russell, Nick Bostrom, Brian Christian, Eliezer Yudkowsky, and Nate Soares. If you’ve read others that you recommend, please let me know. I’ve read many that I don’t usually recommend.
As for long-form articles, I recommend Paul Christiano and Zvi Mowshowitz, as well as anyone with the fortitude to make predictions while sharing their models (like the AI 2027 crew).
I recommend browsing the “Best of Year Y” (or whatever they are called) articles on the AI Alignment Forum and LessWrong. They are my go-tos for smart, informed writing on AI. For posts with more than, say, 100 votes, the quality bar is tremendously higher than almost anywhere else I’ve seen, including mainstream sources with great reputations.
In conclusion, I would rather point to interesting people to read and places to engage with.