Information security is, fundamentally, about the misalignment between the capabilities people expect and what a new technology can actually do.

There is literally no way a new technology can be "secure" until it has existed in the public zeitgeist for long enough that the general public has an intuitive feel for its capabilities and limitations.

Yes, when you release a new product, you can ensure that its functionality aligns with expectations set by other products in the industry, or by analogous products people are already using. You can make design choices where users gradually expose themselves to more functionality as they come to understand the technology more deeply, but each step of the way exposes them to additional threats they might not fully understand.

Security is that journey. You can't just release a product built on a brand-new technology that's "secure" right out of the gate.

I'm sorry, but that's a pathetic excuse for what's going on here. These aren't some unpredictable novel threats that nobody could've reasonably seen coming.

Anyone who has their head screwed on right could tell you that this is an awful idea, for precisely these reasons, and we've known it for years. Maybe not the users, if they haven't been exposed to LLMs to that degree, but certainly anyone who worked on this product should've known better, and if they didn't, my opinion of this entire industry just fell through the floor.

This is tantamount to using SQL escaping instead of prepared statements in 2025. Except there's no equivalent of prepared statements for LLMs, so we know that mixing sensitive data with untrusted input shouldn't be done at all until we have the technical means to do it safely.
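
For anyone who hasn't run into the SQL side of that analogy, here's a minimal sketch of the difference, using Python's sqlite3 module (the table and values are made up for illustration):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

    untrusted = "alice' OR '1'='1"  # attacker-controlled input

    # Escaping / string building: query structure and data share one channel,
    # so a crafted value can rewrite the query. Escaping can patch over this,
    # but it only takes one missed call site or encoding edge case to fail.
    unsafe = "SELECT email FROM users WHERE name = '" + untrusted + "'"
    print(conn.execute(unsafe).fetchall())   # returns every row in the table

    # Prepared statement / parameterized query: the SQL and the data travel
    # in separate channels, so the value can never be reinterpreted as query
    # structure, no matter what it contains.
    safe = conn.execute("SELECT email FROM users WHERE name = ?", (untrusted,))
    print(safe.fetchall())                   # [] -- just an odd-looking name

The point of the analogy: prepared statements give you a hard, protocol-level boundary between instructions and data, and no such boundary currently exists inside an LLM prompt, where instructions and untrusted content end up in the same token stream.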

Doing it anyway when we've known about these risks for years is just negligence, and trotting out the novelty of the technology as an excuse in 2025 points to total incompetence and indifference towards user safety.

+1

And if you tried, it wouldn't be usable, and you'd probably get the threat model wrong anyway.