Imho, the first thing doctors need to learn (at least in my country) is to touch type. I've had it with 5 min exams followed by 15 minutes of pecking to fill in the necessary forms. Multiply by the number of patients in a day and it adds up. And it's prevalent: family doctors, dentists, specialists, nobody bothers to learn it. It gets tiresome when you know you're in the waiting room for a couple of hours because they're slow at typing.

I used to do support for a service that did transcription for doctors. The doctors would call in and tell the medical transcriptionist what to type and they would do the input.

It always seemed incredibly inefficient and expensive but hospital management told me this was the most dependable way to get accurate records and even a single lost lawsuit would cost more than the service.

It's stupid, but that's the world we live in.

No. Just no. Teaching doctors touch typing treats a secondary symptom; the real problem is that doctors shouldn't be wasting time inputting routine data at all.

What doctors need are secretarial services trained in medical procedures.

And by the way, when I was a child, even before computers came along, here is how it worked in Russia.

The doctor would listen to my breathing, look at my throat, ask me and my mother questions, and say various medical phrases to her assistant, who would write them into my patient record (a thick paper notebook).

This is how all the dentists I've seen work: doctor plus nurse. Apparently dentists have more agency over their work environment than doctors do.

I think this is one of the use cases where speech-to-text and (AI) transcription tools would be useful. Of course, ideally there'd be two people, one doing the medical work and the other the documentation, but health care is expensive enough as it is.

Medical scribes are a thing. Some provider organizations employ people who attend patient encounters and do all the EHR data entry in order to free up clinicians for higher-value work. This generally works well, but it is expensive and payers don't directly reimburse for that service.

All the dentists I've ever visited have worked in doctor/nurse pairings. The nurse assists in operations AND is the data entry expert.

I think it's just bureaucratic faux-economical thinking encroaching on doctors' workspace and cutting overall effectiveness.

It turns out that speech to text is slower than dictating and having a typist type.

The speed at which reports are dictated is incredible; even when you're familiar with the field, it's hard to understand how the typists get it right.

> Of course ideally there'd be two people, one doing the medical stuff and the other then documentation, but health care is expensive enough as it is.

In the 1980s USSR, every doctor actually had a nurse who did the paperwork. And somehow, healthcare was still free.

What we need is a universal, standard way to store all of our personal data on our phone and share whatever is relevant with whatever company/government at the touch of a button.

Neither a secretary nor a doctor nor anybody else should have to hand-type data that already exists digitally.

I'm so mind-blown that this doesn't exist yet that I feel maybe I should try and build it. I have tried building the next-best thing, OCR-based form filling, but it's hard to get far as a solo FOSS'er.

" this doesn't exist yet"

We have a national health database in Finland called "OmaKanta" (which translates to MyDatabase or something like that). It's not perfect but at least I can trust it with most of my health records, and it's accessible to all doctors working in both public and private sector.

Many healthcare provider organizations have standard HL7 FHIR APIs that patients can use to download their own chart records. There are a variety of apps that you can use to call those APIs.
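
For a sense of what that looks like, here's a rough Python sketch of pulling your own Patient record from a FHIR R4 endpoint (the base URL and token are made up; real values come from registering with a provider's patient-access API, typically via SMART on FHIR):

    import requests

    # Hypothetical endpoint and token -- real ones come from the provider's
    # patient-access API registration (usually SMART on FHIR OAuth).
    BASE_URL = "https://fhir.example-hospital.org/R4"
    TOKEN = "patient-access-token"

    resp = requests.get(
        f"{BASE_URL}/Patient/example-id",
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Accept": "application/fhir+json",
        },
    )
    resp.raise_for_status()
    patient = resp.json()
    print(patient["name"][0]["family"])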

I'm talking about a standard GLOBAL way of sharing that exact same data AND all other personal data.

FHIR is a global standard.

I wonder if the Solid protocol might be helpful here? [0] I must confess I haven't toyed with it so far, but I am looking for an excuse to try it out.

[0]: https://solidproject.org/

Looks cool, but is more abstract/low-level than what I mean. Could maybe be used as a foundation for it though.

Problem: there are 19 competing standards

New problem: there are 20 competing standards

There are 0 standards for global sharing of all possible personal data. That I know of.

Touch typing for doctors seems like a waste of time now that Dragon / Whisper / your phone can do speech to text quickly and relatively reliably.

Sure, let’s send private medical data to a cloud server somewhere for processing, because a medical professional in 2025 can’t be expected to know how to use a keyboard. That’s absurd.

I can type quite well. I can also troubleshoot minor IT issues. Neither is a better use of my time than seeing patients.

I’m in an unusual situation as an anesthesiologist; I don’t have a clinic to worry about, so my rate-limiting factor isn’t me, it’s the surgeon. EMR is extremely helpful for me because 90% of my preop workup is based on documentation, and EMR not only makes that easy but lets me do it while I still have the previous patient under anesthesia. I actually need to talk to 95% of patients for about 30 seconds, no more.

But my wife is primarily a thinking rather than doing doctor, and while she can type well, why in the hell do we want doctors being typists for dictation of their exams? Yes, back in the old days, doctors did it by hand, but they also wrote things like “exam normal” for a well-baby visit. You can’t get paid for that today; you have to generate a couple of paragraphs that say “exam normal”.

Incidentally, as for cloud service, if your hospital uses Epic, your patients’ info is already shared, so security is already out of your hands.

This has been happening for years, long pre-dating LLMs or the current AI hype. There are a huge number of companies in the medical transcription space.

Some are software companies that ingest data to the cloud as you say. Some are remote/phone transcription services, which pass voice data to humans to transcribe it. Those humans then store it in the cloud when it is returned to a doctor's office. Some are EMR-integrated transcription services which are either cloud-based with the rest of the EMR or, for on-premise EMRs, ship data to/from the cloud for transcription.

Macs have pretty decent on-device transcription these days. That’s what I set up for my wife and her practice’s owner for dictation because a whole lot of privacy issues disappear with that setup.

The absurdity is that doctors have to enter a metric shit ton of information after every single visit even when there’s no clearly compelling need for it for simple office visits beyond “insurance and/or Medicare” requires it. If you’re being seen for the first time because of chest pain, sure. If you’re returning for a follow up for a laceration you had sewn closed, “patient is in similar condition as last time, but the wound has healed and left a small scar” would be medically sufficient. Alas, no, the doctor still has to dictate “Crime and Punishment” to get paid.

Most EHRs are sending that text input to the cloud for storage anyway. Voice transcription is already a feature of some EHRs.

Medical companies could self-host their speech-to-text transcription. In the end, the medical data is stored on some servers anyway. So doing the speech -> text step in-house seems efficient and not too worrying if done properly.
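
A minimal sketch of what that self-hosting could look like with the open-source Whisper model in Python (model size and file name are illustrative; everything runs on local hardware):

    import whisper  # pip install openai-whisper

    # Load a local model once at startup; "base" is small enough for CPU.
    model = whisper.load_model("base")

    # Transcribe a dictated note; no audio or text leaves the machine.
    result = model.transcribe("dictation.wav")
    print(result["text"])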

So you think the better solution to doctors not being able to type is for them to self-host a speech-to-text system, rather than teaching doctors to type faster?

Their healthcare IT provider, like Epic, would do it. And in fact some already have, from what I can see.

Furthermore, preparing/capturing docs is just one type of task specialization and isn't that crazy: stenographers in courtrooms or, historically, secretaries taking dictation come to mind. Should we throw away an otherwise perfectly good doctor just for typing skills?

Who is responsible when the speech-to-text model (which often works well, but isn't trained on the thousands of similar-sounding drug names) transcribes Klonopin instead of Clonidine and the patient ends up in a coma?

These models definitely aren’t foolproof, and in fact have been known to write down random stuff in the absence of recognisable speech: https://koenecke.infosci.cornell.edu/files/CarelessWhisper_E...
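
To illustrate how close these names are, here's a small Python sketch flagging ambiguous transcriptions against a toy formulary (the drug list and cutoff are illustrative; a real system would check a full database like RxNorm, ideally with phonetic rather than purely spelling-based matching):

    import difflib

    # Toy formulary for illustration only.
    FORMULARY = ["Klonopin", "Clonidine", "Hydroxyzine", "Hydralazine"]

    def risky_matches(transcribed: str, cutoff: float = 0.6) -> list[str]:
        # More than one plausible match means a human must confirm
        # before the text goes anywhere near a prescription.
        return difflib.get_close_matches(transcribed, FORMULARY, n=3, cutoff=cutoff)

    print(risky_matches("Clonopin"))  # ['Klonopin', 'Clonidine'] -> ambiguous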

This isn't a speech recognition problem per se. The attending physician is legally accountable regardless of who does the transcription. Human transcriptionists also make mistakes. That's why physicians are required to sign the report before it becomes a final part of the patient chart.

In a lot of provider organizations, certain doctors are chronically late about reviewing and signing their reports. This slows down the revenue cycle because bills can't be sent out without final documentation so the administrative staff have to nag the doctors to clear their backlogs.

I imagine a setup where the speech-to-text listens to the final diagnosis (or even the whole consultation) and summarizes everything into a PDF. Privacy-aware, of course (maybe some locally hosted form).

And then the doctor double-checks and signs everything. Often when you go to the doctor, 80% of the time they're staring at the screen and typing something. If that could be automated and more time spent on the patient, great!
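
A rough sketch of that pipeline in Python, assuming Whisper for local transcription and fpdf2 for the PDF; the summarize() step here is a placeholder for whatever locally hosted model you'd actually trust:

    import whisper
    from fpdf import FPDF  # pip install fpdf2

    # Transcribe the consultation entirely on-device.
    model = whisper.load_model("base")
    transcript = model.transcribe("consultation.wav")["text"]

    def summarize(text: str) -> str:
        # Placeholder: swap in a self-hosted summarization model here.
        return "DRAFT NOTE (requires physician review and signature):\n" + text

    # Write the draft note to a PDF for the doctor to check and sign.
    pdf = FPDF()
    pdf.add_page()
    pdf.set_font("Helvetica", size=11)
    pdf.multi_cell(0, 6, summarize(transcript))
    pdf.output("draft_note.pdf")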

None of those options are off-device.

> now that Dragon / Whisper / your phone can do Speech to text quickly and relatively reliably.

It’s less accurate and much slower than a human typist (or 3) typing dictated reports.

Tested over years in an MSK radiology clinic.