We know about physical-layer attacks that break some of the abstractions that software relies on, allowing an attacker to use physical access or physical proximity to violate security guarantees that are enforced by software alone. (I worked on some of these a while ago!)
For a purely remote attacker (although we may need to pin down what distance counts as "physical proximity": what phenomena can a spy satellite observe, for example?), it seems pretty straightforward to me that there is such a thing as actually secure software.
You can make a very strong model of what the software computes and then prove that it never does some undesired thing. This is rarely done in practice, and even formal verification work may use models that aren't very strong or that fail to capture some important part of the behavior, but it is possible to reason mathematically about what software does and doesn't, or can and can't, do.
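As a toy illustration of the idea (not real formal verification, which uses tools like Coq or Isabelle to prove properties symbolically over unbounded domains): for a function over a small finite input domain, you can literally check a specification on every possible input, which amounts to a degenerate proof of correctness. The function and spec here are invented for the example.

```python
# Toy illustration: over a finite input domain we can check a specification
# on every input, a (degenerate) proof that the code meets the spec.
# Real formal verification proves this symbolically for unbounded domains.

def saturating_add_u8(a: int, b: int) -> int:
    """8-bit addition that clamps at 255 instead of wrapping."""
    s = a + b
    return 255 if s > 255 else s

def meets_spec(a: int, b: int) -> bool:
    # The specification: the result is min(a + b, 255) and fits in 8 bits.
    r = saturating_add_u8(a, b)
    return r == min(a + b, 255) and 0 <= r <= 255

# Exhaustively verify the spec over the entire input space (2^16 cases).
assert all(meets_spec(a, b) for a in range(256) for b in range(256))
print("verified for all 65536 inputs")
```

Exhaustive checking obviously doesn't scale past tiny domains; the point is just that "this software never does X" can be a mathematical statement with a definite truth value.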
To summarize some of the problems, several of which I just alluded to (in no particular order):
(1) We may not have the will, the skill, or the economic demand to make software secure in a very strong sense.
(2) Attackers may subvert our infrastructure or organizations so that we don't actually apply the processes or controls, or run the software, that we expect.
(3) Physical proximity (for active or passive attacks) might sometimes include distances that are actually attainable for attackers; maybe there are passive or active attacks involving lasers that can be mounted from several kilometers away, for example. In that case, most software users might be unable to stay sufficiently isolated from attackers to be protected against those attacks.
(4) Software or hardware other than the specific software whose security we're talking about might be compromised in its supply chain, in ways that people lack a plan, or the resources, to detect or mitigate.
(5) Some systems might be compositionally insecure (their pieces might be secure in some relevant model, but the pieces might interact in a way that isn't secure overall, for example related to timing and concurrency problems).
(6) Our proofs of security for cryptosystems rely on unproven hardness assumptions for various primitives, some of which might turn out to be wrong.
(7) Some security properties, especially related to communications security, might be inherently unattainable even with correct software. For example, there's an argument that Roger Dingledine (Tor lead developer) once told me about that implies that no anonymity system is perfectly secure in the long run against a very powerful active adversary, unless the system is willing to make extreme trade-offs like shutting down completely in response to any attack. So it might be that we can't actually build any useful communications system that can absolutely guarantee perfect traffic analysis resistance, essentially because of inherent architectural trade-offs.
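Point (5) can be made concrete with a minimal check-then-use sketch: two components that are each correct in isolation, composed with a gap between the check and the use. The "attacker" step is interleaved by hand here (a hypothetical illustration) to make the bad schedule deterministic; in a real system it would be a concurrent writer winning a race.

```python
# A minimal sketch of compositional insecurity: each component is correct
# in isolation, but the composition has a time-of-check/time-of-use gap.
# The table, names, and attacker step are invented for the illustration.

# Shared state: a symlink-like table mapping names to targets.
table = {"report.txt": "report.txt"}

def is_safe(name: str) -> bool:
    # Component 1 (correct in isolation): the name resolves to itself.
    return table[name] == name

def read_target(name: str) -> str:
    # Component 2 (correct in isolation): follow the table and "read".
    return f"contents of {table[name]}"

# Insecure composition: the check and the use are not atomic.
name = "report.txt"
assert is_safe(name)                # check passes...
table[name] = "/etc/shadow"         # ...attacker swaps the target...
leaked = read_target(name)          # ...use reads the wrong file.
print(leaked)                       # prints: contents of /etc/shadow
```

Proving each component correct against its own model doesn't rule this out; the insecurity lives in the composition (the non-atomicity), which neither component's model mentions.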
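And a toy illustration of point (6): textbook RSA's security rests on the assumption that factoring the modulus is hard. With a deliberately tiny modulus (these parameters are made up for the demo) the assumption is simply false, and the private key falls straight out of the factorization:

```python
# Toy RSA with a tiny modulus: the "hardness assumption" (factoring n)
# fails trivially at this size, so the private key can be recovered.
# Parameters are invented for the demo; real moduli are 2048+ bits.

p, q = 61, 53
n = p * q                  # public modulus (3233)
e = 17                     # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)        # private exponent (modular inverse; Python 3.8+)

m = 42                     # a message
c = pow(m, e, n)           # encrypt with the public key

# "Attack": trial-division factoring. Trivial here, infeasible at real sizes.
f = next(i for i in range(2, n) if n % i == 0)
phi_recovered = (f - 1) * (n // f - 1)
d_recovered = pow(e, -1, phi_recovered)

assert pow(c, d_recovered, n) == m          # plaintext recovered
print("recovered plaintext:", pow(c, d_recovered, n))
```

The security proof isn't wrong; it's conditional. If the hardness assumption fails (here by brute force, in principle by a mathematical breakthrough), the guarantee evaporates even though the software is perfectly correct.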
But I don't want to lose sight of the idea that you can meaningfully reason about what software does, so there is such a thing as software being correct or incorrect relative to some specification or goal for its behavior, and correct software (software that computes a correct output for every input) actually does exist.
The attack being physical doesn't mean you need physical access.
You can affect memory cells, for example, by repeatedly accessing the cells physically adjacent to them (the Rowhammer effect).
So if you have write access to some part of memory, hardware effects can let you affect memory you don't have access to. That breaks assumptions the software relies on, but can be done completely remotely.
Thanks, that's an important point.
I guess branch prediction attacks are an analogous phenomenon, although slightly less physical.