Don’t read the code, just test for the desired behavior, and miss out on all the hidden undesired behavior injected by malicious prompts or AI providers. Brave new world!

You made me imagine AI companies maliciously injecting backdoors into generated code that no one reads, and now I'm scared.

My understanding is that it's quite easy to poison models with inaccurate data, so I wouldn't be surprised if this exact thing has happened already. Maybe not by an AI company itself, but it's certainly within a hostile actor's reach to plant bad code for this purpose. I suppose it has kind of already happened via supply chain attacks that register AI-generated package names that didn't exist before the LLM hallucinated them.
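
A minimal sketch of how you might catch the hallucinated-package case before installing anything, assuming Python dependencies and PyPI's public JSON endpoint; the package names below are made up for illustration, and note this only flags names that were never registered, not squats an attacker has already claimed:

    # Before installing dependencies an LLM suggested, confirm each name is
    # actually registered on PyPI. A name that does not exist was hallucinated
    # outright; a name that does exist could still be a freshly registered
    # squat, so this is a first filter, not a full defense.
    import urllib.error
    import urllib.request

    def package_exists(name: str) -> bool:
        """True if `name` is a registered project on PyPI (404 means it is not)."""
        url = f"https://pypi.org/pypi/{name}/json"
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.status == 200
        except urllib.error.HTTPError as err:
            if err.code == 404:
                return False
            raise

    suggested = ["requests", "definitely-not-a-real-package-xyz"]  # names the LLM proposed
    for name in suggested:
        if package_exists(name):
            print(f"{name}: exists on PyPI (still worth checking age and ownership)")
        else:
            print(f"{name}: NOT on PyPI -- do not install, review the suggestion")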

One mitigation might be to use one company's model to review code generated by another company's model, and rely on market competition to provide the checks and balances.
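
Roughly what that could look like, as a hedged sketch: the two callables (`generate_with_vendor_a`, `review_with_vendor_b`) are hypothetical stand-ins for whichever SDKs you actually use; the point is the shape of the pipeline, not any particular vendor's API:

    # One model writes the code, a different company's model audits it, and the
    # output is rejected unless the reviewer explicitly passes it.
    from typing import Callable

    REVIEW_PROMPT = (
        "You are a hostile security reviewer. Look for backdoors, data exfiltration, "
        "obfuscated logic, and dependencies that look hallucinated or typosquatted. "
        "Answer PASS if clean, otherwise list the findings.\n\nCODE:\n{code}"
    )

    def cross_check(
        task: str,
        generate_with_vendor_a: Callable[[str], str],  # hypothetical wrapper around vendor A
        review_with_vendor_b: Callable[[str], str],    # hypothetical wrapper around vendor B
    ) -> tuple[str, bool, str]:
        """Generate code with one provider, then have another provider audit it."""
        code = generate_with_vendor_a(task)
        verdict = review_with_vendor_b(REVIEW_PROMPT.format(code=code))
        accepted = verdict.strip().upper().startswith("PASS")
        return code, accepted, verdict

This only helps against vendor-specific tampering, though; anything both models share, like poisoned training data or common blind spots, sails right through.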

Then how many models deep do you go before it's more cost-effective to just hire a junior dev, supply them with a list of common backdoors, and have them scan the code?

What about writing the actual code yourself?

Nah, more fun to burn money.

Already happening in the wild