Heh, didn't know there was a name for it...
What I've taken to lately is getting the robots to write "scientific papers" on what I want them to get up to, so instead of iterating over broken code I can just ask them, "does this change follow the specification?" Seems to stop them from doing overly stupid things... mostly.
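In practice the check is just a prompt, something like this (quick sketch using the OpenAI Python client; the model name and file names are made up, swap in whatever you use):

```python
# Sketch of the "does this change follow the spec?" check.
# spec.md is the "scientific paper" the model wrote earlier;
# change.diff is the proposed change to vet. Both hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

spec = open("spec.md").read()
diff = open("change.diff").read()

resp = client.chat.completions.create(
    model="gpt-4o",  # any capable model works here
    messages=[
        {"role": "system",
         "content": "You are reviewing a change against a specification."},
        {"role": "user",
         "content": (
             f"Specification:\n{spec}\n\n"
             f"Proposed change:\n{diff}\n\n"
             "Does this change follow the specification? "
             "Answer yes or no, then list any deviations."
         )},
    ],
)
print(resp.choices[0].message.content)
```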
Plus, since what I've been working on is just a mash-up of other people's ideas, it provides a good theoretical foundation for how all the different bits fit together. Just give them the paper you've been working on and some other paper and ask how the two can be used together. A lot of the time the two ideas aren't compatible, so it saves a lot of time trying to force two things to work when they really shouldn't. Very good way to explore different ideas without the robots going all crazy and producing a full code project (complete with test and build suites) instead of just giving a simple answer.
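The compatibility question is the same shape, just with two documents instead of a spec and a diff (again only a sketch, file names invented):

```python
# Hand the model both write-ups and ask whether the ideas compose
# *before* writing any code. my_paper.md / other_paper.md are hypothetical.
from openai import OpenAI

client = OpenAI()

mine = open("my_paper.md").read()
theirs = open("other_paper.md").read()

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": (
            f"Paper A:\n{mine}\n\nPaper B:\n{theirs}\n\n"
            "How could these two approaches be used together? "
            "If they are fundamentally incompatible, say so and explain why. "
            "Do not write any code."  # keeps it from spinning up a whole project
        ),
    }],
)
print(resp.choices[0].message.content)
```

The explicit "do not write any code" at the end is doing the real work: without it you tend to get the full scaffolded repo instead of the simple answer.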
there is now I suppose! ;)
Yeah, it isn't a panacea, but it has afforded me less frustration than the alternative of jumping straight in.
> Since what I've been working on is just a mash-up of other people's ideas
Totally, I find most of the work I do, if I'm honest, is in this bucket. LLMs are pretty good at "filling in the gaps" between two ideas like this.