I've found this a tractable approach, but sometimes it's not enough. My escalation pattern with Cursor looks like this:

1. Document everything that you're doing and update a core architecture or technical doc that the LLM can read.

2. Update your .cursorrules with instructions that grow more specific as you nail down parts of your stack and the patterns you're following. The file can be updated (by Cursor itself) if you find the same problems recurring.

3. I have a pre-commit script which runs some internal scripts. If I find the IDE is STILL making the same mistake after I've documented it and added Cursor rules, the nuclear option is to add a script here that verifies the integrity of whatever construct is being violated (e.g. tests go into this folder structure, env variables are consistent between these files, import of this model that the LLMs like is forbidden).
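
As a rough sketch of what one of those verification scripts might look like (the folder layout, env file names, and the forbidden `legacy_models` import are all placeholders for whatever conventions your own project enforces):

```python
#!/usr/bin/env python3
"""Pre-commit sanity checks. Paths and rules below are placeholders --
swap in whatever conventions your project actually enforces."""
import pathlib
import re
import sys

errors = []

# 1. Test files must live under tests/ -- nowhere else.
for path in pathlib.Path(".").rglob("test_*.py"):
    if "tests" not in path.parts:
        errors.append(f"{path}: test file outside tests/")

# 2. Every variable declared in .env.example must also appear in .env.
def env_keys(filename):
    keys = set()
    p = pathlib.Path(filename)
    if p.exists():
        for line in p.read_text().splitlines():
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                keys.add(line.split("=", 1)[0])
    return keys

for key in sorted(env_keys(".env.example") - env_keys(".env")):
    errors.append(f".env: missing variable {key} (declared in .env.example)")

# 3. Forbid an import the LLM keeps reaching for (hypothetical module name).
forbidden = re.compile(r"^\s*(from|import)\s+legacy_models\b")
for path in pathlib.Path("src").rglob("*.py"):
    for lineno, line in enumerate(path.read_text().splitlines(), 1):
        if forbidden.match(line):
            errors.append(f"{path}:{lineno}: import of legacy_models is forbidden")

if errors:
    print("\n".join(errors))
    sys.exit(1)
```

Wire it into your pre-commit hook however you normally would; the point is that the check is mechanical, so the model can't quietly drift past it.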

I would add: any time you expect to be working on a particular feature / enhancement / refactor, have the LLM create a temporary document with a description and an implementation plan and work from that.

In addition, I have a specific workflow for resolving test or pre-commit failures that follows the same pattern: document each failure, then work through them one at a time, running the test script and updating the document between runs. A sketch of that loop is below.
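
As a minimal sketch of capturing those failures into a working document (assuming pytest and a throwaway FAILURES.md; use whatever test runner and scratch file you prefer):

```python
#!/usr/bin/env python3
"""Run the test suite and append the failures to a working doc that the
LLM reads between fixes. pytest and FAILURES.md are assumptions."""
import subprocess
from datetime import datetime

# --tb=no plus -rfE prints only the short "FAILED ..." / "ERROR ..." summary lines.
result = subprocess.run(
    ["pytest", "-q", "--tb=no", "-rfE"],
    capture_output=True,
    text=True,
)

failures = [
    line for line in result.stdout.splitlines()
    if line.startswith("FAILED") or line.startswith("ERROR")
]

with open("FAILURES.md", "a") as doc:
    doc.write(f"\n## Run at {datetime.now():%Y-%m-%d %H:%M}\n")
    if failures:
        doc.writelines(f"- [ ] {line}\n" for line in failures)
    else:
        doc.write("- All tests passing.\n")
```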

I've established these patterns slowly through usage, but they've improved my experience a lot.