Risks in traditional medicine are mitigated by standardized training and credentialing. We haven't established ways to evaluate the risks of transferring diagnostic responsibility to AIs.
> All that is needed is an external actor to take the risk and show a step change improvement
Who's going to benefit? Doctors might prioritize the security of their livelihood over access to care. Capital will certainly prioritize the bottom line over life and death[0].
The cynical take is that for the time being, doctors will hold back progress, until capital finds a way to pay them off. Then capital will control AI and control diagnosis, letting them decide who is sick and what kind of care they need.
The optimistic take is that doctors maintain control but embrace AI, using it to raise the standard of care. But, as you point out, the pace of that might be generational rather than keeping up with technological progress.
[0] https://www.nbcnews.com/news/us-news/death-rates-rose-hospit...
Having paid $300 for a 10-minute doctor visit in which I was confidently misdiagnosed, it will not take much for me to minimize my doctor visits and take care into my own hands whenever possible.
I will benefit from medical AI. There will soon come a point where I will pay a premium for my medical care to be reviewed by an AI, not the other way around.
If you'd trust generative AI over a physician, go in wide-eyed knowing that you're still placing your trust in some group of people. You just don't have an individual to blame if something goes wrong, but rather the entire supply chain that produces the model and its inference. Every link in that chain can shrug its shoulders and point to someone else.
This may be acceptable to you as an individual, but it’s not to me.
You might pay for a great AI diagnosis, but what matters is the diagnosis recognized by whoever pays for care. If you depend on insurance to pay for care, you're at the mercy of whatever AI they recognize. If you depend on a socialized medical care plan, you're at the mercy of whatever AI is approved by them.
Paying for AI diagnosis on your own will only be helpful if you can shoulder the costs of treatment on your own.
At least you can dodge a false diagnosis, which is important, especially when acting on it could cause irreversible damage to your body.
That's under the assumption that AI has perfect accuracy. Perhaps you "dodged" the correct diagnosis and get to die six months later for lack of treatment. Might as well flip a coin.
Doesn't have to be "perfect accuracy". It just has to beat the accuracy of the doctor you would have gone to otherwise.
Which is often a very, very low bar.
What do you call a doctor who was last in his class in medical school? A doctor.
> Doesn't have to be "perfect accuracy". It just has to beat the accuracy of the doctor you would have gone to otherwise.
They made an absolute statement claiming that AI will "at least" let them dodge false diagnoses, which implies a diagnostic false-positive rate of ~0%. Otherwise, how can you possibly be so confident that you "dodged" anything? You still need a second opinion (or a third).
If a doctor diagnosed you with cancer and AI said that you're healthy, would you conclude that the diagnosis was false and skip treatment? It's easy to make frivolous statements like these when your life isn't on the line.
> What do you call a doctor who was last in his class in medical school? A doctor.
How original. They must've passed medical school, certification, and years of specialization by pure luck.
Do you ask to see every doctor's report card before deciding to go with the AI or do you just assume they're all idiots?
And what's the bar for people making machine learning algos? What do you call a random person off the street? A programmer.