I don't think the legal framework even allows the patient to make that trade off. Can a patient choose 99.9% accuracy instead of 99.95% accuracy and also waive the right to a malpractice lawsuit?
You know the crazy thing about this? For this application I think it’s similar to spam filtering: AI can easily be trained to be better than a human.
And it’s definitely not a 0.05 percent difference. AI will perform better by a long shot.
Two reasons for this.
1. The AI is trained on better data. If a radiologist makes a mistake, that mistake is identified later and the training data can be flagged.
2. No human indeterminism. AI doesn’t get stressed or tired. This alone, even without 1. above, will let AI beat humans.
Let’s say 1. is applied; that only covers consistent mistakes that humans make. Consistent mistakes are eventually flagged and show up as a pattern in the training data, and the AI can learn that pattern even though humans themselves never notice it. Humans just know that the radiologist’s opinion was wrong because a different outcome happened; we don’t even have to know why it was wrong, and many times we can’t know. Just flagging the data is enough for the AI to ingest the pattern.
Inconsistent mistakes come from number 2. If humans make mistakes due to stress, the training data reflecting those mistakes will be minuscule in size and random, with no pattern. The majority of the training data will smooth these issues out and the model will remain consistent. Right? A marker that follows a certain pattern shows up 60 times in the data, but one time it’s marked incorrectly because of human error… this gets smoothed out.
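A rough sketch of that smoothing effect, with made-up numbers (60 correct annotations vs. one stress-induced error for the same marker) and a deliberately naive model that just learns the majority label for a pattern:

```python
from collections import Counter

# Hypothetical training data: the same imaging marker was annotated
# 61 times. 60 annotators labeled it "malignant"; one, under stress,
# mislabeled it "benign".
labels = ["malignant"] * 60 + ["benign"] * 1

# A model that learns the majority label for this pattern outvotes
# the single random error: the mistake is smoothed out.
learned_label, count = Counter(labels).most_common(1)[0]
print(learned_label, count)  # malignant 60
```

Real models don't take a literal majority vote, but gradient-based training on 61 conflicting examples behaves similarly: the lone random label contributes a tiny, inconsistent gradient that the 60 consistent examples dominate.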
Overall it will be a counterintuitive statistical result, similar to how flying in planes is safer than driving. ML models in radiology and spam filtering will beat humans.
I think we are under the delusion that all humans are better than ML, but this is simply not true. You can thank LLMs for spreading this wrong intuition.
I think it’s the other way around: AI would certainly have better accuracy than a human. AI can see things pixel by pixel.
You can take a 4K photo of anything, change one pixel to pure white, and a human wouldn't be able to find that pixel by looking at the picture with their eyes. A machine, on the other hand, would find it immediately and effortlessly.
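As a minimal sketch of that claim (synthetic image, arbitrary pixel coordinates), an exact array comparison locates the single altered pixel instantly:

```python
import numpy as np

# Synthetic 4K-resolution (3840x2160) grayscale "photo" with values
# below 200, so pure white (255) is guaranteed not to occur naturally.
rng = np.random.default_rng(0)
original = rng.integers(0, 200, size=(2160, 3840), dtype=np.uint8)

# Flip exactly one pixel (row 1234, column 567) to pure white.
altered = original.copy()
altered[1234, 567] = 255

# A human would scan ~8 million pixels; the machine diffs them at once.
changed = np.argwhere(original != altered)
print(changed)  # [[1234  567]]
```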
Machine vision is literally superhuman. For example, military camo can easily fool human eyes, but a machine can see through it clear as day, because it can tell the difference between
Black Hex #000000 RGB 0, 0, 0 CMYK 0, 0, 0, 100
and
Jet Black Hex #343434 RGB 52, 52, 52 CMYK 0, 0, 0, 80
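To illustrate: pure black (#000000) and a near-black like jet black (#343434) are hard for most human eyes to separate on screen, but as numbers they are trivially distinct:

```python
def hex_to_rgb(h):
    """Convert a '#RRGGBB' hex color to an (R, G, B) tuple."""
    h = h.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

black = hex_to_rgb("#000000")      # (0, 0, 0)
jet_black = hex_to_rgb("#343434")  # (52, 52, 52)

# To a machine these are unambiguously different values;
# it never confuses them, regardless of lighting or fatigue.
print(black != jet_black)  # True
```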
This is exactly the tradeoff that works in healthcare in poor countries, mostly because the alternative is no healthcare at all.