I agree that may be the charitable interpretation. But so often these points are phrased in a way that directly dismisses someone else's concerns ("the real danger with AI isn't the thing you're worried about, it's the thing I'm worried about!"). They shouldn't do that unless they're going to present reasoning that supports both pillars of that claim.
There's nothing stopping them from simply saying "an under-recognized problem with AI is Y, let me explain why it should be a concern".
Framing the article as an attack on X, when the author hasn't put even five minutes of thought into evaluating X, is just a clickbait strategy. People go in expecting something that undermines X, but they don't get it.