>All of the studies were done 2023-2024 and are not listed in order that they were conducted
Right, the reason I pointed out "recent" is that it's new evidence people might not be aware of, given that earlier studies showed AI had the opposite effect on inequality. The "recent" studies also used more varied methodology than the earlier ones.
>The studies showing reduced equality all apply to uncommon tasks like material discovery and debate points
"Debating points" is uncommon? Maybe not everyone was in the high school debate club, but "debating points" is something that anyone in a leadership position does on a daily basis. You're also conveniently omitting "investment decisions" and "profits and revenue", which basically everyone is trying to optimize. You might be tempted to think "Coding efficiency" represents a high complexity task, but the abstract says the test involved "Recruited software developers were asked to implement an HTTP server in JavaScript as quickly as possible". The same is true of the task used in the "legal analysis" study, which involved drafting contracts or complaints. This seems exactly like the type of cookie cutter tasks that the article describes would become like cashiers and have their wages stagnate. Meanwhile the studies with negative results were far more realistic and measured actual results. Otis et al 2023 measured profits and revenue of actual Kenyan SMBs. Roldan-Mones measured debate performance as judged by humans.
> Right, the reason why I pointed out "recent" is that it's new evidence that people might not be aware of, given that there were also earlier studies showing AI had the opposite effect on inequality.
Okay, well, the majority of this "recent" evidence agrees with the pre-existing evidence that inequality is reduced.
> "Debating points" is uncommon?
Yes. That is nobody's job. Maybe every now and then you might need to come up with some arguments to support a position, but that's not what you get paid to do day to day.
> You're also conveniently omitting "investment decisions" and "profits and revenue", which basically everyone is trying to optimize.
Very few people are making investment decisions as part of their day to day job. Hedge funds may experience increasing inequality, but that kinda seems on brand.
On the other hand, "profits and revenue" is not a task.
> You might be tempted to think "Coding efficiency" represents a high complexity task, but the abstract says the test involved "Recruited software developers were asked to implement an HTTP server in JavaScript as quickly as possible". The same is true of the task used in the "legal analysis" study, which involved drafting contracts or complaints.
These sound like real tasks that a decent number of people have to do on a regular basis.
> Meanwhile the studies with negative results were far more realistic and measured actual results. Otis et al 2023 measured profits and revenue of actual Kenyan SMBs. Roldan-Mones measured debate performance as judged by humans.
These sound like niche activities that are not widely applicable.