But this indicates a lack of incentives to reduce healthcare costs through optimisation. If AI can do something well enough, and AI + humans surpass humans alone, the resulting cost reductions and increased throughput should be reflected in the workflows.

I feel that human processes have inertia and that, for lack of a better word, gatekeepers feel new, novel approaches should be adopted slowly, which is why we are not seeing the impact yet. Once a country with the right incentive structure (e.g. China) can show that it can outperform and improve the overall experience, I am sure things will change.

While 10 years of progress is a lot in ML/AI, in more traditional fields it is probably a blip against this institutional inertia, which changes generation by generation. All that is needed is an external actor to take the risk and show a step change improvement. Having experienced how healthcare works in the US, I feel people are simply scared to take on bold challenges.

> Three things explain this. First, while models beat humans on benchmarks, the standardized tests designed to measure AI performance, they struggle to replicate this performance in hospital conditions. Most tools can only diagnose abnormalities that are common in training data, and models often don’t work as well outside of their test conditions. Second, attempts to give models more tasks have run into legal hurdles: regulators and medical insurers so far are reluctant to approve or cover fully autonomous radiology models. Third, even when they do diagnose accurately, models replace only a small share of a radiologist’s job. Human radiologists spend a minority of their time on diagnostics and the majority on other activities, like talking to patients and fellow clinicians.

From the article

Another key extract from the article

> The performance of a tool can drop as much as 20 percentage points when it is tested out of sample, on data from other hospitals. In one study, a pneumonia detection model trained on chest X-rays from a single hospital performed substantially worse when tested at a different hospital.

That screams of overfitting to the training data.

Because that is literally what is happening. I did a bit of work developing radiological models, and the sample ratio of healthy to malignant is usually 4 to 1. Then you modify the error function so that it weights malignant cases more heavily (you are quite often working with datasets as small as 500 images, so after an 80/20 training/validation split you are left with maybe 80 malignant examples). Which means that as soon as you take a realistic sample, where a specific condition may appear in 1 in 100 or 1 in 1000 cases, the false positives make your model practically useless.
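To put rough numbers on that (purely illustrative figures, not from any real model): even a classifier with respectable-looking sensitivity and specificity on a curated dataset produces mostly false alarms once you apply Bayes' rule at realistic prevalence.

```python
# Hypothetical numbers for illustration: a model with 90% sensitivity and
# 90% specificity looks fine on a curated 4:1 dataset, but its positive
# predictive value collapses at realistic disease prevalence.

def positive_predictive_value(sensitivity, specificity, prevalence):
    """P(actually malignant | model flags malignant), by Bayes' rule."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

for prevalence in (0.20, 0.01, 0.001):  # curated set, 1 in 100, 1 in 1000
    ppv = positive_predictive_value(0.90, 0.90, prevalence)
    print(f"prevalence {prevalence:>5.3f} -> PPV {ppv:.1%}")

# prevalence 0.200 -> PPV 69.2%
# prevalence 0.010 -> PPV 8.3%
# prevalence 0.001 -> PPV 0.9%
```

At 1-in-1000 prevalence, fewer than 1% of the cases such a model flags would actually be malignant, which is roughly what "practically useless" looks like on a real population.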

Of course SOTA models are much better, but getting medical data is quite difficult and expensive, so there is not a lot of it.

Remember, the AI doesn’t create anything, so you potentially add risk to the patient outcome and perhaps make advancement more difficult.

My late wife had to have a stent placed in a vein in her brain to relieve cranial pressure. We had to travel to New York for an interventional radiologist and team to fish a 7-inch stent and balloon from her thigh up.

At the time, we had to travel to NYC, and the doctor was one of a half dozen who could do the procedure in the US. Who’s going to train the future physician the skills needed to develop the procedure?

For stuff like this, I feel like AI is potentially going to erase certain human knowledge.

> Who’s going to train the future physician the skills needed to develop the procedure?

I would presume that AI taking over won't erase the physical work, which would mean existing training regimes will continue to exist.

Until one day, an AI robot is capable of performing such a procedure, which would then mean the human job becomes obsolete. Like a horse-drawn coach driver - that "job" is gone today, but nobody misses it.

Performing the procedure requires a high level of skill in interpreting scans (angiograms) in real time.

Yeah there’s no more drivers out there, bro. Lol.

The assumption is that more productive AI + humans leads to cost reductions.

But if everyone involved has a profit motive, you end up eating into those cost reductions. "We'll save you 100 bucks, so give us 50", done at the AI model level, the AI model repackager, the software suite that the hospital is using, the system integrators that manage the software suite installation for the hospital, the reseller of the integrator's services through some consultancy firm, etc etc.

There are so many layers involved, and each layer is so used to taking a slice (and we're talking about a good level of individualization in systems that aren't fully public a la the NHS), that the "vultures" (so to speak) are all there, ready to take their cut.

Maybe anathema to say on this site, but de-agglomeration really seems to have killed just trying to make things better for the love of the game.

Nobody has a profit motive since doctors get their bills paid per procedure and health insurers have a profit cap.

Consider that the profit cap is a percentage, so increased costs in fact increase the amount of profit to be scooped up. Health insurers that would like to see more cash are therefore incentivized to have costs increase!
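As a toy illustration with made-up numbers (assuming a simplified ACA-style rule that at least 80% of premium revenue must be spent on care, so the margin is capped at roughly 20% of premiums): the only way to grow the absolute margin is to let the underlying costs grow.

```python
# Made-up numbers, assuming a simplified 80/20 medical-loss-ratio rule:
# at least 80% of premium revenue must be spent on care, so the margin
# (admin + profit) is capped at 20% of premiums.

def max_margin(care_costs, min_loss_ratio=0.80):
    """Largest margin allowed given what was actually spent on care."""
    max_premium = care_costs / min_loss_ratio
    return max_premium - care_costs

for care in (800, 1600):
    print(f"care costs ${care:>4} -> margin capped at ${max_margin(care):.0f}")

# care costs $ 800 -> margin capped at $200
# care costs $1600 -> margin capped at $400
```

Same percentage cap, double the care costs, double the dollars of allowable margin; the cap constrains the ratio, not the incentive.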

I also think that the profit cap percentage is not something that applies across the board to every single player in the healthcare space.

Wait, explain. The insurer thing, I get: they're capped. The doctors definitely seem to have a profit motive!

I think the idea there is that, generally speaking, doctors get reimbursed on some sort of defined fee schedule. They'll get $40 from the insurance company for some basic kind of visit, and that amount is "fixed"...

I don't live in the US, and when I did I wasn't paying doctors very often... but my impression was that even if the reimbursement schedule is fixed, they could "just" ask for more/less, and the schedule is defined by the insurance company (so the insurance company can increase its costs through this schedule, leading to ways to make profit elsewhere!)

I could be totally off base; I've always thought the Obamacare profit-percentage cap to be a fake constraint.

The doctor doesn’t have a motive to replace himself with an AI; that’s what I meant.

Speaking from real-world experience as a patient who has had a lot go wrong over the last decade: the problem isn’t lack of automation, it’s structural issues affecting cost.

Just as one example, a chest CT would’ve cost $450 if paid in cash. It cost an insurer over $1,200 when done via insurance. And that was after multiple appeals and reviews involving time from people at the insurance company and the provider’s office, including the doctor himself. The low-hanging fruit in American healthcare costs is stuff like that.

Calling that "low hanging fruit" isn't accurate, because entrenched and powerful interests benefit from it being kept that way. That extra $750 is valuable to the capitalist that gets it. The jobs to process those appeals and reviews are valuable to the employees who do them. Deleting all of this overnight will fuck these people over to varying degrees, and it could even have macroeconomic implications.

With that said, although it will not be easy, this shit needs to change. Health care in the United States is unacceptably expensive and of poorer quality than it needs to be.

Risks in traditional medicine are managed through standardized training and credentialing. We haven't established ways to evaluate the risks of transferring diagnostic responsibility to AIs.

> All that is needed is an external actor to take the risk and show a step change improvement

Who's going to benefit? Doctors might prioritize the security of their livelihood over access to care. Capital will certainly prioritize the bottom line over life and death[0].

The cynical take is that for the time being, doctors will hold back progress, until capital finds a way to pay them off. Then capital will control AI and control diagnosis, letting them decide who is sick and what kind of care they need.

The optimistic take is that doctors maintain control but embrace AI and use it to raise the standard of care, though as you point out, the pace of that might be generational rather than keeping up with technological progress.

[0] https://www.nbcnews.com/news/us-news/death-rates-rose-hospit...

Having paid $300 for a 10-minute doctor visit, in which I was confidently diagnosed incorrectly, it will not take much for me to minimize my doctor visits and take my care into my own hands whenever possible.

I will benefit from medical AI. There will soon come a point where I will pay a premium for my medical care to be reviewed by an AI, not the other way around.

If you’d trust generative AI over a physician, go in with your eyes open, knowing that you’re still placing your trust in some group of people. You just don’t have an individual to blame if something goes wrong, but rather the entire supply chain that brings you the model and its inference. Every link in that chain can shrug their shoulders and point to someone else.

This may be acceptable to you as an individual, but it’s not to me.

You might pay for a great AI diagnosis, but what matters is the diagnosis recognized by whoever pays for care. If you depend on insurance to pay for care, you're at the mercy of whatever AI they recognize. If you depend on a socialized medical care plan, you're at the mercy of whatever AI is approved by them.

Paying for AI diagnosis on your own will only be helpful if you can shoulder the costs of treatment on your own.

At least you can dodge a false diagnosis, which is especially important when it can cause irreversible damage to your body.

Under the assumption that AI has perfect accuracy. Perhaps you dodged the correct diagnosis and get to die 6 months later due to the lack of treatment. Might as well flip a coin.

Doesn't have to be "perfect accuracy". It just has to beat the accuracy of the doctor you would have gone to otherwise.

Which is often a very, very low bar.

What do you call a doctor who was last in his class in medical school? A doctor.

> Doesn't have to be "perfect accuracy". It just has to beat the accuracy of the doctor you would have gone to otherwise.

They made an absolute statement claiming that AI will "at least" let them dodge false diagnoses, which implies a diagnostic false positive rate of ~0%. Otherwise, how can you possibly be so confident that you "dodged" anything? You still need a second opinion (or a third).

If a doctor diagnosed you with cancer and AI said that you're healthy, would you conclude that the diagnosis was false and skip treatment? It's easy to make frivolous statements like these when your life isn't on the line.

> What do you call a doctor who was last in his class in medical school? A doctor.

How original, they must've passed medical school, certification, and years of specialization by pure luck.

Do you ask to see every doctor's report card before deciding to go with the AI or do you just assume they're all idiots?

And what's the bar for people making machine learning algos? What do you call a random person off the street? A programmer.


Part of the challenge is that imaging machines differ significantly from one another. The radiologist’s statement that an object measured on two different machines is the same and has not changed in size is in large part judgement. Building a model that can replicate this judgement likely means building a model that can solve all common computer vision tasks, has the full medical knowledge of an expert radiologist, and has been painstakingly calibrated against thousands of real radiologists in hospital conditions.

> If AI can do something well enough, and AI + humans surpass humans alone, the resulting cost reductions and increased throughput should be reflected in the workflows.

But it doesn't lead to increased throughput because there needs to be human validation when people's lives are on the line.

Planes fly themselves these days; it doesn't increase the "throughput" or eliminate the need for a qualified pilot (and even a copilot!)

The article points out that the AI + humans approach gives poorer results. Humans end up deferring to or just accepting the AI output without double checking. So corner cases, and situations where the AI doesn't do well just end up going through the system.

This is what I worry about - when someone gets a little lazy and leans too heavily on the tool. Perhaps their skills diminish over time. It seems AI could be used to review results after an analysis. That would be ok to me, but not before.

If we were serious about reducing healthcare cost by optimization then we would be banning private equity from acquiring hospitals.

What is there to indicate "we" or anyone is serious about reducing healthcare costs? The only thing that will reduce costs is competitive pressure. The last major healthcare reform in the US was incredibly anti-competitive and designed with a goal of significantly raising costs but transferring those costs to the government. How could healthcare costs ever go down when the ONLY way for insurers to increase profits is for costs to go up, since their profit is capped at a percentage of expenses?

>...The only thing that will reduce costs is competitive pressure.

Unfortunately, just yesterday a surprising number of people seemed to argue that increased competition would at best have no effect, and at worst would actually increase prices:

https://news.ycombinator.com/item?id=45372442

> What is there to indicate "we" or anyone is serious about reducing healthcare costs?

I agree, we clearly aren’t. That’s my point.

Or maybe artifacts justify prices less than the number of souls bothered does. Robotic medical diagnosis could save costs, but it could suppress customers' appetite too, in which case, like you said, commercial healthcare providers would not be incentivized to offer it.

"AI" literally could not care if you live or die.

That's more than a problem of inertia.

I think the one thing we will find out with the AI/Chatbot/LLM boom is: Most economic activity is already reasonably close to a local optimum. Either you find a way to change the whole process (and thereby eliminate steps completely) or you won't gain much.

That's true for AI-slop-in-the-media (most of the internet was already lowest-effort garbage, which just got that tiny bit cheaper) and probably also in medicine (a slight increase in false negatives will be much, much more expensive than the savings from speeding up doctors by 50% on image interpretation). Once you get to the point where some other doctor is willing (and able) to take on the responsibility of that radiologist, then you can eliminate that kind of doctor (but still not her work, just the additional human-to-human communication).

I mean, the company providing the AI is free to carry the malpractice insurance itself. If that happens, then there is definitely a chance.

If, statistically, their error rate is better than or around what a human achieves, then their insurance cost is a function of how many radiologists they intend to replace.