> Prompt instructions like "never do X" don't hold up. LLMs ignore them when context is long or users push hard.

Serious question. Assuming you knew this, why did you choose to use LLMs for this job?

Fair. We didn't choose LLMs to enforce rules; we chose them to understand intent. The enforcement happens outside the LLM entirely. That's the separation that actually holds up in production.

> we chose them to understand intent

Yet they don't understand the intent of "Never do X"?

Understanding intent and following instructions are different failure modes. LLMs are good at the first, unreliable at the second. That's exactly why enforcement lives outside the LLM.
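To make the pattern concrete, here's a minimal sketch (not our actual code; `parse_intent` is a stand-in for a real LLM call): the model only proposes a structured action, and a deterministic layer decides whether it runs.

```python
# The LLM proposes, plain code disposes. parse_intent() is a hypothetical
# placeholder for an LLM call that maps free text to a structured action.
ALLOWED_ACTIONS = {"lookup_order", "issue_refund_under_limit"}

def parse_intent(user_message: str) -> dict:
    # Stand-in for the LLM: in reality this returns whatever action the
    # model inferred from the message, reliable or not.
    return {"action": "delete_account"}

def enforce(proposal: dict) -> str:
    # Deterministic enforcement: no prompt, no context window, no
    # persuasion. A disallowed action is rejected no matter what the
    # model (or the user) said upstream.
    if proposal["action"] not in ALLOWED_ACTIONS:
        return f"blocked: {proposal['action']}"
    return f"executed: {proposal['action']}"

print(enforce(parse_intent("please delete my account")))
```

Even if the model hallucinates or gets jailbroken into proposing `delete_account`, the allow-list check can't be talked out of it.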

Software engineering has a word for that.

Kludge.

Good luck!