I'll repeat my idea on how this MUST be done:

1. AI gets data about the patient and makes a diagnosis. This is NOT shown to the doctor yet.

2. Doctor does their stuff, writes down their diagnosis. This diagnosis is locked down and versioned.

3. Doctor sees the AI's diagnosis.

4. Doctor can adjust their diagnosis, BUT the original stays in the system.

This way the AI stays as the assistant and won't affect the doctor's decision, but they can change their mind after getting the extra data.
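Steps 1–4 can be sketched as an append-only record where the AI answer is hidden until the doctor's own diagnosis is committed. This is a minimal illustration, not a real EHR design; all the names (`DiagnosisRecord`, `commit_initial`, `reveal_ai`, `revise`) are invented for this sketch.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DiagnosisRecord:
    """Hypothetical sketch of the locked/versioned workflow above."""
    patient_id: str
    _ai_diagnosis: str                             # step 1: computed up front, hidden
    versions: list = field(default_factory=list)   # append-only history
    _initial_committed: bool = False

    def commit_initial(self, doctor_diagnosis: str) -> None:
        """Step 2: lock the doctor's independent diagnosis."""
        if self._initial_committed:
            raise ValueError("initial diagnosis is already locked")
        self.versions.append(("initial", doctor_diagnosis, datetime.now(timezone.utc)))
        self._initial_committed = True

    def reveal_ai(self) -> str:
        """Step 3: the AI answer is only visible after the initial is locked."""
        if not self._initial_committed:
            raise PermissionError("commit your own diagnosis first")
        return self._ai_diagnosis

    def revise(self, new_diagnosis: str) -> None:
        """Step 4: revisions append; the original entry is never overwritten."""
        if not self._initial_committed:
            raise PermissionError("commit your own diagnosis first")
        self.versions.append(("revision", new_diagnosis, datetime.now(timezone.utc)))
```

The key design choice is that there is no code path that mutates or deletes `versions[0]` — the pre-AI diagnosis survives any later change of mind.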

5. Private Equity uses this valuable data to stack rank doctors based on how correct / AI-aligned their diagnoses are over time

6. Rankings are used to periodically "trim the fat," thus delivering more optimized cash flows to clinics that have been saddled with toxic debt

7. Sensing an opportunity, AI providers start selling a $200/month Data Leakage as a Service subscription to overworked physicians so that they can avoid the PE guillotine

A more realistic step 7 is that physicians gradually align their diagnoses with the LLM as they sacrifice to Moloch in order to (temporarily) game the metric. Eventually the humans become little more than an imperfect proxy for the LLMs and are eliminated.

I agree with GP's solution but we'd need regulation to prohibit what you describe.


Why would private equity want more competent doctors?

Incompetent ones order unnecessary tests and exhaust treatment possibilities, which drives up the costs billed to insurance.

Only the insurance industry and perhaps licensing bodies can pressure to keep the quality floor high, at least in terms of accurate diagnosis and prevention of overtreatment.

This still promotes metacognitive laziness later down the road as the doctor can hand in something quickly and rely on AI to close that gap.

The magic is in the initial diagnosis being written down, saved and locked.

It's trivial to manually compare a doctor's pre- and post-AI diagnoses and see what's going on.

If a doctor is just putting "asdljasdaskjd" on the initial to unlock the AI answer, they should be promptly fired.
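That audit can be sketched in a few lines: compare each locked initial diagnosis to the post-AI revision, and separately flag keyboard-mash initials. The gibberish heuristic here (length, spaces, vowel ratio) is a deliberately crude assumption for illustration, not a real classifier, and `audit` is an invented name.

```python
def looks_like_gibberish(text: str, min_len: int = 4) -> bool:
    # Crude heuristic (an assumption, not a real classifier): very short
    # entries, or single "words" with almost no vowels, look like keyboard mash.
    t = text.strip().lower()
    if len(t) < min_len:
        return True
    vowels = sum(c in "aeiou" for c in t)
    return " " not in t and vowels / len(t) < 0.35

def audit(records):
    """records: iterable of (initial_diagnosis, final_diagnosis) pairs.

    Returns (indices of suspected low-effort initials,
             count of genuine diagnoses changed after seeing the AI).
    """
    flagged, changed = [], 0
    for i, (initial, final) in enumerate(records):
        if looks_like_gibberish(initial):
            flagged.append(i)
        elif initial != final:
            changed += 1
    return flagged, changed
```

A high pre/post change rate, or any flagged initials, is exactly the signal that would surface a doctor typing "asdljasdaskjd" to unlock the AI answer.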

5. Doctors delegate everything to AI assistants because humans are lazy, especially if those AI assistants are correct some significant portion of the time

Then the claim may be that you don't need that many doctors anymore, and that one doctor can do the job of X doctors in less time. The economic effect is less demand for (and supply of) doctors, which then results in a home-grown shortage of doctors, since fewer people are incentivized to become doctors...

Step 2 prevents that. It's not there by accident.

They need to write down their (initial) diagnosis before the AI answer is shown.

Step 2 doesn't prevent it, because of step 4. AI becomes "upon further testing/examination/review we conclude that..."

And then if the patient isn't cured or has an adverse reaction, the answer given by the doctor in step 2 is examined and compared to the post-AI resolution.

If #2 is correct and #4 wrong, the doctor has to answer for stuff.