I'm curious how many people would want a second opinion (from a human) if they're presented with a bad finding from a radiological exam and are then told the read was fully automated.

I have to admit if my life were on the line I might be that Karen.

A bad finding probably means your exam will be read by someone qualified, like the surgeon or doctor tasked with correcting it.

False negatives are far more problematic.

Ah, you're right. Something else I'm curious about with these systems is how they'll affect difficulty level. If AI handles the majority of easy cases, and radiologists are already at capacity, do they crack when the only cases they evaluate are moderately to extraordinarily difficult?

Let's look at mammography, since that is one of the easier imaging exams to evaluate. Studies have shown that AI can successfully identify more than 50% of cases as "normal" ones that do not require a human to view the case. If a group started using that, the number of interpreted cases would drop by half, and the proportion of abnormal cases among the remainder would roughly double. Generalizing to CT of the abdomen and pelvis and other studies, assuming AI can identify a subpopulation of normal scans that do not have to be seen by a radiologist, the volume of work will decline, but the percentage of complicated cases will go up. Easy, normal cases will not be supplementing radiologist income the way they have in the past.

Of course, all this depends on who owns the AI identifying normal studies. Certainly hospitals, or even PACS companies, would love to own that and generate the income from interpreting the normal studies. AI software has been slow to be adopted, largely because cases still have to be seen by a radiologist and the malpractice issue has not been resolved. Expect rapid changes in the field once malpractice solutions exist.
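
To make the arithmetic concrete, here's a rough back-of-the-envelope sketch of that case-mix shift. Every number in it (daily volume, abnormal rate, filter rate) is hypothetical, not taken from any study:

```python
# Hypothetical illustration of how AI triage of "normal" studies shifts case mix.
# All numbers are made up for the sake of the arithmetic, not taken from any study.

daily_cases = 100          # studies read per day before AI triage
abnormal_rate = 0.10       # fraction of studies with findings
ai_normal_filter = 0.50    # fraction of all studies AI confidently flags as normal

abnormal = daily_cases * abnormal_rate              # 10 studies with findings
remaining = daily_cases * (1 - ai_normal_filter)    # 50 studies still reach a radiologist

# Assuming every abnormal study stays in the human queue, the abnormal
# fraction of the radiologist's worklist doubles even as total volume halves.
new_abnormal_fraction = abnormal / remaining        # 0.20 vs. the original 0.10

print(f"Human workload: {daily_cases} -> {remaining:.0f} studies/day")
print(f"Abnormal fraction: {abnormal_rate:.0%} -> {new_abnormal_fraction:.0%}")
```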

The problem is, you don't know beforehand if it's a hard case or not.

A hard-to-spot tumor is exactly the kind of case where an AI can return a confident negative result.

In my experience, the best person to read these images is the medical imaging expert. The doctor who treats the underlying issue is qualified, but it's not their core competence. They'll check, of course, but I don't think they generally have a strong basis to override the imaging expert.

If it's something serious enough a patient getting bad news will probably want a second opinion no matter who gave them the first one.

But since we don't know where those false negatives are, we want radiologists.

I remember a funny question my non-technical colleagues asked me during a presentation of some ML predictions: "How wrong is this prediction?" I replied that if I knew, I would have made the prediction correct. Errors are estimated on a test data set, either overall or broken down by groups.
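
For what it's worth, here's a minimal sketch of what "estimated on a test set, overall or broken down by groups" looks like in practice; the data, the group labels, and the error metric are all made up for illustration:

```python
import numpy as np
import pandas as pd

# Hypothetical test-set results: true labels, model predictions, and a grouping
# variable (e.g. scanner type or patient cohort). All values are synthetic.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "y_true": rng.integers(0, 2, size=1000),
    "y_pred": rng.integers(0, 2, size=1000),
    "group":  rng.choice(["site_a", "site_b", "site_c"], size=1000),
})

df["error"] = (df["y_true"] != df["y_pred"]).astype(int)

overall_error = df["error"].mean()                      # one number for the whole test set
per_group_error = df.groupby("group")["error"].mean()   # error rate broken down by group

print(f"Overall error rate: {overall_error:.2%}")
print(per_group_error)
```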

Technological advances have supported medical professionals so far, but not replaced them: they have allowed medical professionals to do more and better.

I'm willing to bet everyone here has a relative or friend who at some point got a false negative from a doctor, just like everyone knows drivers who have caused accidents. The core problem is how to go about centralizing liability, or not.

I'd be more concerned about the false negative. My report says nothing found? Sounds great; do I bother getting a second opinion?

You pay extra for a doctor's opinion. Probably not covered by insurance.

That's horrific. You pay insurance to have ChatGPT make the diagnosis. But you still need to pay out of pocket anyway. Because of that, I am 100% confident this will become reality. It is too good to pass up.

Early intervention is generally significantly cheaper, so insurers have an interest in doing sufficiently good diagnosis to avoid unnecessary late and costly interventions.

People will flock to "AI medical" insurance that costs $50/mo and lets you see whatever AI specialist you want whenever you want.

I think a problem here is the sycophantic nature. If I'm a hypochondriac and I have some new-onset symptoms, and I prompt an LLM about what I'm feeling and what I suspect, I worry it'll positively reinforce the diagnosis I'm seeking.

I mean, we already have deductibles and out-of-pocket maximums. If anything, this kind of policy could align with those because it's prophylactic: this way we can make sure we extract the maximum amount from you before care kicks in. Yeah, it tracks.

It sounds fairly reasonable to me to have to pay to get a second opinion for a negative finding on a screening. (That's off-axis from whether an AI should be able to provide the initial negative finding.)

If we don't allow this, I think we're more likely to find that the initial screening will be denied as not medically indicated than we are to find insurance companies covering two screenings when the first is negative. And I think we're better off with the increased routine screenings for a lot of conditions.

Since when is self-care being a Karen?

It's not. I was trying to evoke a world where it's become so commonplace that you're a nuisance if you're one of those people who questions it.

You need to work on the comedic delivery in written form, because you just came off as leaning on a stereotype.

[deleted]

"Cancer? Me? I'd like to speak to your manager!"

In reality, it's always a good decision to seek a second, independent assessment in the case of a diagnosis of severe illness.

People make mistakes all the time; you don't want to be the one affected by their mistake.