This problem is unrelated to CI and dev practices etc.; this is about trusting the output of generative AI without reading it, then using it to handle patient data.

Vibe coding is just a bad idea, unless you’re willing and able to vet the output, which most people doing it are not.

> Vibe coding is just a bad idea, unless you’re willing and able to vet the output, which most people doing it are not.

It says quite a lot about where we are with AI tooling that none of the big players have a “no need to review, certified for market X” offering yet.

Fully agentic development is neat for scripts and utilities that you wouldn’t have the time to write otherwise, where you can treat the whole thing as input/output and check both.

In these cases you don’t necessarily care too much about the code itself, as long as it looks reasonable at a glance.

It is related to CI and dev practices etc. An experienced developer using AI would add security/data protection, even when vibe coding.