Back in 2021 I said in a Wired article that a malicious attacker could add exploits to projects on GitHub to poison LLM-generated code. I knew it could happen, but I didn't know it would require so few samples.

https://www.wired.com/story/ai-write-code-like-humans-bugs/