While I agree with some of these observations, the research cited in the article really doesn't match the claims at all from what I can tell.

> An NBER study of support agents [2] found generative AI boosted novice productivity by about a third while barely helping experts. Harvard Business School researchers found the same pattern in consulting work [3].

The first work cited was a research study on GPT-3(!) from 2020, a barely coherent model relative to today's SOTA.

The second HBS research study literally finds the opposite of what's claimed:

> we observed performance enhancements in the experimental task for both groups when leveraging GPT-4. Note that the top-half-skill performers also received a significant boost, although not as much as the bottom-half-skill performers.

In other words, bottom-half-skill participants with AI outperformed top-half-skill participants without it, and top-half-skill participants gained a further 11% improvement when paired with AI. Again, GPT-4's intelligence (from three years ago) is a far cry from frontier models today.

These issues align with what I found, and they help explain this other comment:

> Having trouble understanding the final line:

> > Also, those that claimed this article is ironically a casualty of it’s own complaint are 100% right, Kudos.

> Why would the article be a casualty of its own complaint?

The author probably sourced the article using AI: the sources fail to support the claims in exactly the way AI-gathered citations often do.