I'm beginning to suspect robust automated tests may be one of the strongest indicators of whether you're going to have a good time with LLM coding agents or not.
If there's a test suite the agent can run, it's SO much less likely to break other features while it's working. Plus it can read the tests to get a good idea of how everything is supposed to work.
Telling Claude to write the test first, then run it and watch it fail, then write the implementation has been giving me really great results.
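Here's a rough sketch of what that loop looks like, using a hypothetical `slugify` function purely for illustration: the test gets written and run first (and fails), then the implementation follows to make it pass.

```python
def test_slugify():
    # Step 1: the agent writes this test before slugify() exists,
    # runs it, and watches it fail with a NameError.
    assert slugify("Hello World") == "hello-world"
    assert slugify("  multiple   spaces  ") == "multiple-spaces"

def slugify(text: str) -> str:
    # Step 2: only after seeing the failure does the agent write
    # the implementation, then re-run the test to confirm it passes.
    return "-".join(text.lower().split())

test_slugify()
print("test passed")
```

The failing run matters: it proves the test actually exercises the code, so a later green run means something.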