> Right, the reason why I pointed out "recent" is that it's new evidence that people might not be aware of, given that there were also earlier studies showing AI had the opposite effect on inequality.

Okay, well, the majority of this "recent" evidence agrees with the pre-existing evidence that inequality is reduced.

> "Debating points" is uncommon?

Yes. That is nobody's job. Every now and then you might need to come up with some arguments to support a position, but that's not what you get paid to do day to day.

> You're also conveniently omitting "investment decisions" and "profits and revenue", which basically everyone is trying to optimize.

Very few people are making investment decisions as part of their day-to-day job. Hedge funds may experience increasing inequality, but that kinda seems on brand.

On the other hand, "profits and revenue" is not a task.

> You might be tempted to think "Coding efficiency" represents a high complexity task, but the abstract says the test involved "Recruited software developers were asked to implement an HTTP server in JavaScript as quickly as possible". The same is true of the task used in the "legal analysis" study, which involved drafting contracts or complaints.

These sound like real tasks that a decent number of people have to do on a regular basis.

> Meanwhile the studies with negative results were far more realistic and measured actual results. Otis et al 2023 measured profits and revenue of actual Kenyan SMBs. Roldan-Mones measured debate performance as judged by humans.

These sound like niche activities that are not widely applicable.