I think you're right that you have to account for the security "attributes" of these LLMs if you're going to use them, like you said, "taking that insecurity into account".
If we sat down, examined the statistics of bugs and the costs of their occurrence in production, and weighed everything against some reasonable criteria, I think we could arrive at a level of confidence that lets us ship a system to production. Some organizations do better at this than others, of course. During a project's development cycle we can watch out for the common patterns: buffer overflows and use-after-free for the C folks, SQL injection or unescaped output in web programming. But we know these are mistakes, and we want to fix them (a quick sketch of that kind of known-pattern bug below).
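To make the "known pattern" point concrete, here's a minimal Python sketch of the injection bug and its standard fix, using stdlib sqlite3 and made-up table names:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice')")

    user_input = "alice' OR '1'='1"  # attacker-controlled string

    # Buggy: string concatenation lets the input rewrite the query
    rows = conn.execute(
        "SELECT * FROM users WHERE name = '" + user_input + "'"
    ).fetchall()
    print(rows)  # matches every row -- the classic injection

    # Fix: a parameterized query treats the input as data, not SQL
    rows = conn.execute(
        "SELECT * FROM users WHERE name = ?", (user_input,)
    ).fetchall()
    print(rows)  # []

The point being: we can name the mistake, detect it, and repair it deterministically.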
With LLMs, the mitigation I keep seeing is "we reduce the errors by 90 percent", but that's not a mitigation unless we also detect and prevent the other 10 percent. It's just much more straightforward to treat LLMs as untrusted, because they are: you're effectively getting input from randos by virtue of the training data. Producing mistaken output is not actually a bug, it's expected behavior, unless you also believe in the tooth fairy lol
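In code terms, "treat the LLM as untrusted" just means the same validate-then-act gate we'd put on any external input. A minimal sketch (the action names and function are hypothetical; the allowlist shape is the point):

    import json

    ALLOWED_ACTIONS = {"summarize", "translate", "reject"}

    def handle_llm_reply(raw_reply: str) -> str:
        """Treat the model's output like any untrusted input:
        parse defensively and check it against an allowlist
        before acting on it."""
        try:
            reply = json.loads(raw_reply)
        except json.JSONDecodeError:
            return "reject"  # malformed output is expected, not exceptional
        action = reply.get("action")
        if action not in ALLOWED_ACTIONS:
            return "reject"  # this gate is what catches the other 10 percent
        return action

    print(handle_llm_reply('{"action": "summarize"}'))  # summarize
    print(handle_llm_reply('{"action": "rm -rf /"}'))   # reject
    print(handle_llm_reply("sure! here is the json:"))  # reject

The model being wrong some of the time is fine precisely because nothing downstream trusts it.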
>To me, "we fix bugs" sounds the same as "we ship systems with unknown vulnerabilities".
to me, they sound different ;)