The union rep gets it - people improvise when you cut their tools and then threaten discipline for improvising.

That memo is how you make staff hide things instead of asking for help.

The scarier part, though, is that LLM-written clinical notes probably look fine. That's the whole problem. I built a system where one AI was scoring another AI's work, and it kept giving high marks because the output read well. I had to make the scorer blind to the original coaching text before it started catching real issues. Now imagine that "reads well, isn't right" failure mode in clinical documentation.
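For what it's worth, the fix was structural, not prompt tuning. A minimal sketch of the idea (names like `call_judge` are placeholders, not any real API):

```python
# Hypothetical sketch of the "blind scorer" fix: the judge model only ever
# sees the output being graded plus a rubric, never the source text the
# output was generated from. `call_judge` stands in for whatever LLM call
# you use; it takes a prompt string and returns the judge's reply.

def build_judge_prompt(output_text: str, rubric: str) -> str:
    """Build a grading prompt that deliberately omits the source/coaching
    text, so fluent-but-wrong output can't coast on reading well."""
    return (
        "You are grading a piece of generated text against the rubric below.\n"
        f"Rubric:\n{rubric}\n\n"
        f"Text to grade:\n{output_text}\n\n"
        "Reply with a score from 1-5 and one sentence of justification."
    )

def score_blind(output_text: str, rubric: str, call_judge) -> str:
    # Note what is *not* passed in here: the original source material.
    return call_judge(build_judge_prompt(output_text, rubric))
```

The point is that the blindness is enforced by what the function signature can see, not by asking the judge nicely to ignore the source.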

Nobody's re-reading the phrasing until a patient outcome goes wrong.

Physicians need to have it pounded into them that every hallucination is downstream harm. AI has no place in medicine. If they insist on it, then every transcript must be stored with the raw audio, accessible side by side, with each line of the transcript time-coded against the recording. It's the only way to use these tools safely while guarding against hallucinations.
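Concretely, "stored with the raw audio, time-coded" could be as simple as this kind of record. A rough sketch, with made-up names, not any vendor's actual schema:

```python
# Illustrative data model: every transcript line carries an offset into the
# untouched source recording, so a reviewer can jump from any sentence in
# the note straight to the audio that supposedly backs it.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TranscriptLine:
    start_s: float  # offset into the raw audio, in seconds
    end_s: float
    speaker: str
    text: str

@dataclass
class EncounterRecord:
    audio_uri: str               # pointer to the raw, unedited recording
    lines: list[TranscriptLine]  # each line auditable against that audio

    def line_at(self, t: float) -> Optional[TranscriptLine]:
        """Map a playback timestamp to its transcript line, for
        side-by-side review of audio and text."""
        for line in self.lines:
            if line.start_s <= t < line.end_s:
                return line
        return None
```

The key property is that the transcript never exists detached from its source: any line the model hallucinated has no audio span behind it, and that absence is checkable.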

> Physicians need to have it pounded into them that every hallucination is downstream harm.

I think anyone using 'AI' knows it makes mistakes. Medical notes already contain errors today. A consumer of a medical note has to decide what makes sense and what to ignore, and AI isn't meaningfully changing that. If something matters, it gets asked again at follow-up.

Raw audio is a cool idea! I've seen a similar approach in other domains: "keep the source of truth accessible so you can verify the AI output against it".

I wouldn't go as far as "no place in medicine" though. The Heidi scribe tool mentioned in the article is a good example, because in the end it's the doctor who reviews and signs off.

IMO the problem is AI doing the work with no human verification step, but I can 100% agree I don't want a vibe-doctor for my next surgery/consult :D