There is also the fact that it's very easy to plant backdoors in LLMs with plausible deniability :

- You can just use the same tools you use to train them to make them behave in some specific ways if some specific preconditions are met.

- You can also poison the training data, so that the LLMs are writing flawed code they are convinced is right because they saw it on some obscure blog but in fact it had some subtle flaw you planted.

- You can poison the prompts as they are automatically injected from "skills" found online.

You couple that with long running agents which may drift very var from the conditions where they were tested during the safety tests.

You add the fact that in this AI race war, there is some premium to run agents capable of advanced offensive security with full permission, pushed using yolo dark-pattern.

The training process is obscure and expensive so only really doable by big actors non replicable and non verifiable.

And of course, now safe developers (aka those not taking the insane risk of running what really is and should be called malware), can't get jobs, get no visibility for any of their work, drown into a sea of AI slop made using a prompt and a credit card, and therefore they must sell their soul.md and hype for the madness.