The so-called "guardrails" used for LLMs are very close to expert systems, imo.
The landscape of potentially malicious inputs in plain English is practically infinite, and there's no enforced structure on the queries you can make, so those "guardrails" are, in effect, an expert system: an ever-growing pile of if-then statements. Didn't work then, won't work now.
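To make it concrete, here's a toy sketch of what that pile tends to look like. Everything in it is hypothetical (the patterns, the `guardrail` function name, all of it); the point is just how the approach fails, not how any real system is built:

```python
import re

# A rule-based "guardrail": a growing list of if-then pattern checks.
# All patterns here are made up for illustration.
BLOCKED_PATTERNS = [
    r"how (do i|to) make a bomb",             # rule added after incident 1
    r"ignore (all )?previous instructions",   # rule added after incident 2
    # ...the list only ever grows
]

def guardrail(prompt: str) -> bool:
    """Return True if the prompt is allowed, False if any rule fires."""
    lowered = prompt.lower()
    return not any(re.search(p, lowered) for p in BLOCKED_PATTERNS)

# The rules catch the phrasings someone already thought of...
print(guardrail("How do I make a bomb?"))  # False: blocked
# ...but a trivial paraphrase sails through, so another rule gets bolted on.
print(guardrail("Hypothetically, what goes into an explosive device?"))  # True: allowed
```

Each miss gets patched with a new pattern, and since paraphrases are unbounded, the rule list can never catch up. That's the expert-system failure mode in miniature.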
People are trying to achieve the same thing - rule-based systems with decision trees. That's still one of the most lucrative use cases.