Maybe it's physically impossible to build a theoretically secure system, just as it's (presumably) impossible to have a cell that isn't susceptible to any virus. Maybe this whole time we've been getting away with a type of security by obscurity, where the obscurity is just no one having the time and focus to actually analyze the code.
Suppose the following:
1. Any given system has a finite number of findable vulnerabilities.
2. All findable vulnerabilities are fixable (if not in software then with a new hardware revision).
3. Fixing a vulnerability while keeping the same intended functionality introduces, on average, fewer than one new findable vulnerability.
4. It is possible to cease adding new features to a system and from that point forward only focus on fixing vulnerabilities.
If all 4 are true, then perfect security seems possible, in some sense. I think some vulnerabilities might not be fixable if you count things like users being tricked into revealing their passwords. But if you restrict the definition to some narrower meaning that still captures most of what people mean by "computer vulnerability", then I think those 4 statements are probably true.
Perfect security might be near impossible in practice because vulnerabilities will get more difficult to find and fix over time, but I think we should expect the discovery of vulnerabilities to eventually become arbitrarily slow in a hypothetical system that prioritized security above all else.
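To make premise 3 concrete: if each fix introduces on average r < 1 new findable vulnerabilities, then clearing N initial vulnerabilities takes roughly N / (1 - r) fixes in expectation, which is finite. Below is a minimal Python sketch of that intuition, with made-up numbers (100 initial vulnerabilities, r = 0.6); the numbers are illustrative assumptions, not estimates of any real system.

```python
import random

def average_fixes_needed(initial_vulns=100, reintroduction_rate=0.6, trials=1000):
    """Simulate premise 3: each fix removes one vulnerability and, with
    probability `reintroduction_rate` (< 1), introduces one new findable
    vulnerability. Return the average number of fixes until none remain."""
    totals = []
    for _ in range(trials):
        remaining = initial_vulns
        fixes = 0
        while remaining > 0:
            remaining -= 1                              # fix one vulnerability
            if random.random() < reintroduction_rate:
                remaining += 1                          # the fix introduced a new one
            fixes += 1
        totals.append(fixes)
    return sum(totals) / len(totals)

# With rate r < 1 the expected total work converges to roughly initial / (1 - r);
# here that is about 100 / (1 - 0.6) = 250 fixes. At r >= 1 the process
# never terminates in expectation.
print(average_fixes_needed())
```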
In practice, though, systems generally evolve in ways that add vulnerabilities.
It's probably impossible to achieve security through correctness, but security through compartmentalization can work. See: https://qubes-os.org.
I would rather claim that building a theoretically secure system is prohibitively expensive. At the end of the day, Mythos et al. are just better vulnerability-finding tools that will eventually be available to both offensive and defensive actors.
If you imagine you had a vulnerability scanner as fast and convenient as a linter, it would be much cheaper to write secure code right away. Probably not perfectly secure, but still secure enough to make sure finding exploits stays expensive.
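As a rough illustration of the "scanner as convenient as a linter" idea, here is a minimal sketch of a git pre-commit hook that runs Bandit (a real Python static analyzer, standing in for a hypothetical lint-speed vulnerability scanner) on the files staged for commit. The hook structure and scanner choice are assumptions for illustration, not anything proposed in the thread.

```python
#!/usr/bin/env python3
"""Sketch of a pre-commit hook: run a security scanner on staged Python
files and block the commit if it reports findings. Bandit is used here as
a stand-in; any lint-speed scanner would slot in the same way."""
import subprocess
import sys

def staged_python_files():
    # List files that are added, copied, or modified in the index.
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    )
    return [f for f in out.stdout.splitlines() if f.endswith(".py")]

def main():
    files = staged_python_files()
    if not files:
        return 0
    # Bandit exits non-zero when it finds issues, which blocks the commit.
    result = subprocess.run(["bandit", "-q", *files])
    return result.returncode

if __name__ == "__main__":
    sys.exit(main())
```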
I would find it funny if one day we considered it irresponsible to write hand-generated production code, just as it would be irresponsible to build a significant building without running numerical simulations.
it's probably less about how you write the code to begin with and more about letting a tool hammer on it
if you want to be a one-man show handcrafting an artisan iOS app, that's fine, but you should probably let Claude bang against it for a while to shake out whatever bugs it can find
This day is probably not long off. My prediction is before the end of 2028.
another "obscurity": I'm not valuable enough to be attacked, compared with the cost. But what if cost has been reduced a lot?