My (extensive) experience with LLM code generation is that it has the same issues you describe in your field: hallucinations, over-engineering, and missing important requirements/patterns.
But engineers have these same problems. The key is that the content creator (engineers for codegen, doctors for medicine) is still responsible for the output of the AI, as if they wrote it themselves. If they make a mistake with an AI (e.g., including hallucinated data), they should be held accountable in the same way they would if they had made the mistake without it.
Okay, but we know how humans actually behave: they will fully trust the nondeterministic machine and offload their thinking to it. Sadly, a large swath of people will act like this, maybe 20-30%.
Are you willing to put your life in the hands of people who rely fully on these machines to do everything?
Acting like smart people aren't getting one-shotted by these machines is very dangerous. Even worse is how quickly your skills actually degrade. If I knew my doctor was using anything LLM-related, I would switch doctors.