>What if finding every vulnerability in a piece of software were just as fast and easy as finding a few of them, thanks to automation?

This presumes there is such a thing as "every" vulnerability. It is possible that ever more sophisticated, complicated, and abstract attacks become possible/discoverable as one applies more intelligence to the problem.

IF it is indeed possible to make a piece of software completely secure, then yes, more intelligent systems make the situation better, because it will always be possible to audit a system before it is ever released and make it completely safe.

That is a very big if, and as far as I am aware it remains to be seen whether it's the case.

-edit- They mention this possibility themselves further down, so the authors know the point is completely speculative. They don't even try to argue why one possibility might be more likely than the other. This article is useless.

We know about physical-layer attacks that break some of the abstractions that software relies on, allowing an attacker to use physical access or physical proximity to violate security guarantees that are enforced by software alone. (I worked on some of these a while ago!)

For a purely remote attacker (though maybe we have to pin down what distance counts as "physical proximity", since we'd need to clarify what phenomena spy satellites, for example, can observe), it seems pretty straightforward to me that there is such a thing as actually secure software.

You can make a very strong model of what the software computes and then prove that it never does some undesired thing. Doing this is not at all common, and even formal verification work may not use very strong models, or models that capture some important parts of the behavior, but it is possible to mathematically reason about what software does and doesn't, or can and can't, do.
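As a toy sketch of what "prove it never does the undesired thing" looks like in practice (the function and property here are made up for illustration, and this assumes Lean 4 with its built-in `omega` tactic): you state a specification as a theorem quantified over every possible input, and the proof covers all of them at once, which no amount of testing can do.

```lean
-- Made-up example: a clamping function and a machine-checked proof
-- that, for *every* input, its output never exceeds the upper bound.
def clamp (lo hi x : Nat) : Nat :=
  min hi (max lo x)

-- The "specification": clamp never returns a value above hi.
theorem clamp_never_exceeds (lo hi x : Nat) : clamp lo hi x ≤ hi := by
  unfold clamp
  omega  -- linear-arithmetic decision procedure; handles min/max on Nat
```

Real verification efforts (seL4, CompCert, verified TLS stacks) do this for far richer specifications, but the shape is the same: model, specification, proof.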

To summarize some of the problems I just partly mentioned (in no particular order):

(1) We may not have the will, the skill, or the economic demand to make software secure in a very strong sense.

(2) Attackers may subvert our infrastructure or organizations so that we don't actually apply the processes or controls, or run the software, that we expect.

(3) Physical proximity (for active or passive attacks) might sometimes include distances that are actually attainable for attackers. Maybe there are passive or active attacks involving lasers that can be mounted from multiple kilometers away, as an example. In that case most software users might not be able to be sufficiently isolated from the attackers to be protected against those attacks.

(4) Software or hardware other than the specific software whose security we're talking about might be compromised in its supply chain in a way that people don't have a plan, or resources, to detect or mitigate.

(5) Some systems might be compositionally insecure (their pieces might be secure in some relevant model, but the pieces might interact in a way that isn't secure overall, for example related to timing and concurrency problems).

(6) Our proofs of security for cryptosystems rely on unproven hardness assumptions for various primitives, some of which might turn out to be wrong.

(7) Some security properties, especially related to communications security, might be inherently unattainable even with correct software. For example, there's an argument Roger Dingledine (Tor lead developer) once described to me which implies that no anonymity system is perfectly secure in the long run against a very powerful active adversary, unless the system is willing to make extreme trade-offs like shutting down completely in response to any attack. So it might be that we can't actually build any useful communications system that can absolutely guarantee perfect traffic analysis resistance, essentially because of inherent architectural trade-offs.

But I don't want to lose sight of the idea that you can actually meaningfully reason about what software does, so there is such a thing as software being correct or incorrect relative to some specification or goal for its behavior, and correct software (software that computes correct outputs for every input) actually does exist.

The attack being physical doesn't mean you need physical access.

You can affect memory cells for example by repeatedly writing to cells next to them.

So if you have write access to some part of memory, you can affect memory you might not have access to because of hardware effects. This breaks assumptions in software but can be done completely remotely.

Thanks, that's an important point.

I guess that branch prediction attacks are an analogous phenomenon, although slightly less physical.

It is not possible to make a piece of software completely secure, because software sits atop hardware, and hardware introduces its own security vulnerabilities that leak into software with no recourse at the software level alone.

[deleted]

To say that defense doesn't win in the limit is the same as saying there is an attack that cannot be defended against.

So, to re-phrase the question so that it more clearly has an answer: does there exist an attack which no one will ever (for all time) be able to come up with a defense against? (The very existence of such an attack would end the (open) internet, wholly and completely, if the only winning move is not to play...)

There will be an exhaustion of possibilities in the end: new attacks eventually run out as each attack surface is hardened against them.

In the limit, defense wins.

There is only one case (that I see) where this may fail: if there is a 'predicament' in the state of security, i.e., if securing against attack A requires you to be insecure against attack B and vice versa (this could be a 'whack-a-mole with many different kinds of attacks' situation). But that would be 'provable', so if such a case exists, we will know about it. And even if predicaments like this can exist and could be exercised, we might still be able to avoid or mitigate them.

So I'd place large bets on defense winning in the end.