The “secure” system with unknown bugs can fix them once they become known. The system that’s insecure by design and tries to mitigate it can’t be fixed, by design.
There might be a zero-day bug in my browser which allows an attacker to steal my banking info and steal my money. I’m not very worried about this because I know that if such a thing is discovered, Apple is going to fix it quickly. And it’s going to be such a big deal that it’s going to make the news, so I’ll know about it and I can make an informed decision about what to do while I wait for that fix.
Computer security is fundamentally about separating code from data. Security vulnerabilities are almost always bugs that break through that separation. It may be direct, like with a buffer overflow into executable memory or a SQL injection, or it may be indirect with ROP and such. But one way or another, it comes down to getting the target to run code it’s not supposed to.
LLMs are fundamentally designed such that there is no barrier between the two. There’s no code over here and data over there. The instructions are inherently part of the data.