I don't think this is a good idea. For medical applications, I can understand that LLMs are not the best solution, since they are so bad with numbers and probabilities. But for legal advice, I think they should be pretty good.
So the only reason I can think of to forbid such use cases is that people in those professions fear being replaced by machines.
Preventable medical errors kill 250,000 Americans every year. I can imagine LLMs affecting that number both for better and for worse, and on net it's hard to say without just guessing. But if you ban the application of LLMs to medical care, you close that door before even seeing the potential on the other side. I think that is absurd.
I don't think that conclusion really follows because I don't think the ban works that way.
There's a big difference between ChatGPT writing a prescription and a doctor double-checking his diagnosis using some kind of Claude Code for medicine. ChatGPT writing prescriptions and giving medical advice directly to people should absolutely be prohibited for now, but the second approach should be encouraged.