Typically, such articles can be charitably interpreted as saying "I worry more about Y than X", rather than as literally making two separate claims (that X isn't a problem, and that Y is). So as a reader trying to get value out of the article, you can focus on evaluating Y and set aside X, which the article doesn't actually address.

In this particular case, the article is even explicit about this:

> While we have no idea how AI might make working people obsolete at some imaginary date, we can already see how technology is affecting our capacity to think deeply right now. And I am much more concerned about the decline of thinking people than I am about the rise of thinking machines.

So the author is already explicitly saying that he doesn't know about X (whether AI will take jobs), but prefers to focus the article on Y (“the many ways that we can deskill ourselves”).

I agree that may be the charitable interpretation. But these points are so often phrased in a way that directly dismisses someone else's concerns ("the real danger with AI isn't the thing you're worried about, it's the thing I'm worried about!"). I feel like they shouldn't do that unless they're going to present some kind of reasoning that supports both pillars of that claim.

There's nothing stopping them from simply saying "an under-recognized problem with AI is Y, let me explain why it should be a concern".

Framing the article as an attack on X, when in fact the author hasn't put even five minutes of thought into evaluating X, is just a clickbait strategy. People go in expecting something that undermines X, but they don't get it.