It’s both interesting to see all the creative ways people find to exploit LLM-based systems, but also disappointing that to this day designers of these systems don’t want to accept that LLMs are inherently vulnerable to prompt injection and short of significant breakthroughs in AI interpretability will remain hopelessly broken regardless of ad-hoc “mitigations” they implement.

I am of the opinion LLMs are cognitive and task capability equivalent of a 5 year old. Actually that might be a harsh judgement since a child will succeed with practice.

aka LLMs can not learn from experience - this is a fundamental limitation. c.f - individuals with Korsakov's syndrome - who also confabulate in a similar manner.

So does a monkey or a dolphin, what's your point?