> Agree with everything else you said except this.
Yeah, I'm not sure I agree with what I said there myself!
> Incorrect, but not visibly incorrect, code will sit in place for years.
If you let incorrect code sit in place for years I think that suggests a gap in your wider process somewhere.
I'm still trying to figure out what closing those gaps looks like.
The StrongDM pattern is interesting - having an ongoing swarm of testing agents which hammer away at a staging cluster trying different things and noting stuff that breaks. Effectively an agent-driven QA team.
I'm not going to add that to the guide until I've heard it's working for other teams and experienced it myself though!
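For what it's worth, the swarm idea is easy to sketch. Here's a minimal toy version, all names and probe scenarios invented for illustration - a real version would drive LLM agents against a live staging cluster rather than the placeholder check below:

```python
# Hypothetical sketch of an agent-driven QA swarm (names invented here).
import random
from concurrent.futures import ThreadPoolExecutor

PROBES = ["create_user", "concurrent_login", "oversized_payload", "expired_token"]

def run_probe(probe: str) -> dict:
    """Stand-in for one agent exercising a staging cluster with one scenario.
    A real implementation would have an agent hit staging APIs and judge the result."""
    ok = random.random() > 0.2  # placeholder outcome; real check inspects staging
    return {"probe": probe, "ok": ok}

def swarm(rounds: int = 20) -> list:
    """Fan probes out in parallel and collect anything that broke."""
    failures = []
    with ThreadPoolExecutor(max_workers=4) as pool:
        for result in pool.map(run_probe, random.choices(PROBES, k=rounds)):
            if not result["ok"]:
                failures.append(result)
    return failures  # a human (or a triage agent) reviews this list

if __name__ == "__main__":
    for failure in swarm():
        print("broke:", failure["probe"])
```

The interesting design question is the last line: who reads the failure list, and how noisy it's allowed to be.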
This kinda gets into the idea of AIs as droids, right?
So, you have a code-writing droid that is aligned towards writing good, clean code that humans can read. Then you have an implementation droid that handles actually launching and running the code and is aligned with business needs and expenses. And you have a QA droid that stress-tests the code and is aligned with the hacker mindset - just slightly evil, so to speak.
Each droid works toward good code, but they're also independent and adversarial in the day-to-day.
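To make the split concrete, here's a toy sketch - all names, roles, and checks invented for illustration, with placeholder logic where real agents would go:

```python
# Toy sketch of the three-droid split (all names and checks are hypothetical).
from dataclasses import dataclass, field

@dataclass
class Artifact:
    code: str
    issues: list = field(default_factory=list)

def writer_droid(spec: str) -> Artifact:
    # Aligned with readability: produce clean code humans can review.
    # Placeholder; a real writer droid would be a code-generating agent.
    return Artifact(code=f"# implements: {spec}")

def qa_droid(artifact: Artifact) -> list:
    # Aligned with the "slightly evil" hacker mindset: try to break it.
    # Placeholder check; a real QA droid would fuzz and stress the system.
    return ["no tests found"] if "test" not in artifact.code else []

def implementation_droid(artifact: Artifact) -> str:
    # Aligned with business needs and cost: only ship if QA found nothing.
    return "deployed" if not artifact.issues else "blocked"

artifact = writer_droid("rate limiter")
artifact.issues = qa_droid(artifact)
print(implementation_droid(artifact))  # QA acts as an adversarial gate before shipping
```

The point isn't the toy logic, it's the shape: each droid optimizes for its own objective, and the tension between them is where the quality comes from.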
These are just agents with a different name? People are already working like that today.
Theoretically I'd want a totally different model cross-checking the work at some point, since much like an individual may have blind spots, so will a model.