The only part of this article I believe is the part about legal and bureaucratic burdens.

"Human radiologists spend a minority of their time on diagnostics and the majority on other activities, like talking to patients and fellow clinicians"

I've had the misfortune of dealing with a radiologist or two this year. They spent 10-20 minutes with me discussing the imaging and the results. What they said was very superficial, and they didn't have answers to several of the questions I asked.

I went over the images and pathology reports with ChatGPT, and it was much better informed: it did have answers to my questions, and it raised additional questions I should have been asking. I've used ChatGPT's information on the rare occasions when doctors deign to speak with me, and it's always been right. Repeating ChatGPT's conclusions and observations to my doctors has twice changed the course of my treatment this year, and the doctors have never said anything I learned from ChatGPT was wrong. By contrast, my doctors are often wrong, forgetful, or mistaken. I trust ChatGPT far more than them.

Good image-recognition models are probably already much better than human radiologists, and they could certainly be vastly better. One obstacle this post mentions, that AI models "struggle to replicate this performance in hospital conditions," is purely a choice: if HMOs trained models on real clinical data, that would no longer be the case, assuming it even is now, which I doubt.

I think this is pretty clearly doctors, along with their various bureaucratic and legal allies, defending their legal monopoly so they can keep providing worse, slower healthcare at higher prices and keep making money, at the small cost of the sick getting worse and dying.