Fair. We didn't choose LLMs to enforce rules — we chose them to understand intent. The enforcement happens outside the LLM entirely. That's the separation that actually holds up in production
> we chose them to understand intent
Yet they don't understand the intent of "Never do X"?
Understanding intent and reliably following instructions are different capabilities, and they fail differently: LLMs are good at the first and unreliable at the second. That's exactly why enforcement lives outside the LLM.
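To make the separation concrete, here's a minimal sketch of the pattern being described. Everything here is hypothetical (the function names, the blocked-action list, the stubbed model call are all illustrative): the model only maps free text to a structured action, and a deterministic policy layer, which never consults the model, decides whether that action runs. "Never do X" is then a property of plain code, not of prompt compliance.

```python
# Illustrative sketch: LLM interprets intent, deterministic code enforces policy.
# All names and rules here are made up for the example.

BLOCKED_ACTIONS = {"delete_account", "wire_transfer"}  # hypothetical "Never do X" list

def interpret_intent(user_message: str) -> dict:
    """Stand-in for an LLM call that maps free text to a structured action."""
    # A real system would call a model here; this stub keys on a phrase.
    if "close my account" in user_message.lower():
        return {"action": "delete_account"}
    return {"action": "answer_question"}

def enforce(proposed: dict) -> dict:
    """Deterministic policy check -- no model involved, so the rule always holds."""
    if proposed["action"] in BLOCKED_ACTIONS:
        return {"allowed": False, "reason": f"{proposed['action']} is blocked by policy"}
    return {"allowed": True, "reason": "permitted"}

decision = enforce(interpret_intent("Please close my account"))
```

The point of the split: even if the model misreads or ignores an instruction, `enforce` is ordinary code and cannot be talked out of its rule.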
Software engineering has a word for that.
Kludge.
Good luck!