Formal verification is ultimately an epistemological problem: how do we know that our model of reality corresponds to reality?

People have commented that validation is different from verification, but I think validation can be done by formally specifying the environment the program runs in.

For example, the question of whether a user can do X in your software corresponds to the question: is there a sequence of user inputs such that the software plus the operating system leads to the output X?

By including the environment (such as the operating system) in the formal spec, we could, in theory, answer such questions.
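To make that concrete, here is a minimal Lean 4 sketch (all names here, `run`, `step`, `UserCanDoX`, are hypothetical, not from any existing library): the OS is folded into a single transition function, and "the user can do X" becomes an existential over finite input sequences.

```lean
-- Minimal sketch: the environment (OS) is folded into one transition
-- function together with the program under verification.

-- Run the combined program + OS from a state through a list of inputs.
def run {State Input : Type} (step : State → Input → State) :
    State → List Input → State
  | s, []      => s
  | s, i :: is => run step (step s i) is

-- "Can the user do X?": is there an input sequence whose final state
-- satisfies the property X?
def UserCanDoX {State Input : Type} (step : State → Input → State)
    (init : State) (X : State → Prop) : Prop :=
  ∃ inputs : List Input, X (run step init inputs)
```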

Even if the question is fuzzy, today we can formalize it with AI models. For example, suppose we ask: does a paint program allow the user to draw a rose? We can take a definition of a rose from a neural network that recognizes what a rose is, and formally verify the combined system of the paint program and the rose verifier given by the NN.
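Continuing the hypothetical sketch above (`run` as before; `render`, `looksLikeRose`, and `CanDrawRose` are assumed names, not real APIs): the fuzzy question becomes formal once the NN's verdict is taken as the definition of "rose".

```lean
variable {State Input Image : Type}

-- Assumed components: the paint program's framebuffer, and the trained
-- NN treated as an opaque boolean function on images.
variable (step : State → Input → State) (init : State)
variable (render : State → Image)
variable (looksLikeRose : Image → Bool)

-- The fuzzy question, made formal relative to the NN's verdict.
def CanDrawRose : Prop :=
  ∃ inputs : List Input, looksLikeRose (render (run step init inputs)) = true
```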

> We can take a definition of a rose from a neural network that recognizes what a rose is

Can we? That just shifts the problem to building such an NN.

I think it would already be pretty tricky to get a decent-sized panel of humans to agree on what a "rose" is. Do rose-shaped but yellow flowers count? How about those genetically altered thornless ones? How about perfect imitations made of plastic? Images of roses? The flowers from the list at https://www.epicgardening.com/rose-lookalikes/?

And of course there's the problem of getting an NN to recognise just roses. The mythical neural network with a 0% error rate is exceptionally rare, and usually not very useful for real-world applications anyway. I very much doubt humanity could build a neural network that perfectly recognises roses and only roses.

Now you’ve just shifted the problem to a much harder statement: proving that what the NN recognizes as a rose actually is a rose.

You cannot get rid of the environment, and the best thing you can do is be explicit about what you rely on. For example, formally verified operating systems often rely on hardware correctness, even though the errata sheets for modern CPUs are hundreds of pages long.

Yeah. That's why I am saying it's an epistemological problem.

In the above example, that the NN recognizes a rose as a rose is an assumption that it is a correct model of (part of) the world. On that assumption, we get a formal definition of "an image of a rose" in the pixelated image, and we use that to formally prove our system allows roses to be painted.
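Stated in the same hypothetical Lean style as before, the assumption is exactly the one axiom that can never be discharged:

```lean
-- All names hypothetical. The bridge between model and reality is an
-- explicit axiom, not something provable inside the system.
axiom Image : Type
axiom looksLikeRose : Image → Bool     -- the trained NN, as a function
axiom IsActuallyRose : Image → Prop    -- "reality": not formalizable

-- The epistemological assumption, made explicit:
axiom nn_faithful : ∀ img, looksLikeRose img = true ↔ IsActuallyRose img
```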

But getting rid of that assumption is, I believe, epistemologically impossible; you would just have to assume some other correspondence with reality.

I agree with you on the hardware. I think the biggest obstacle to formal software correctness is the lack of formal models that we can be confident describe our environment. (One obstacle is IP law: companies do not like to share models of the things they produce.)

Right, the problem is that one can't formally describe the environment, and anything you build has to interact with it. So the formal guarantees go right out the window.

> how do we know that our model of reality corresponds to reality?

This is the main issue I have with formal methods in commercial settings. Most of the time, the model-to-reality mapping is far from accurate. If you had to rewrite your verification logic every time you updated your tests, development would go very slowly.

Formal verification is a pure logic problem. You are making a proof, and a proof is just a demonstration that you can derive a given expression from certain axioms using the rules of formal logic.
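Even the smallest theorem illustrates what that means; a Lean one-liner, purely for illustration:

```lean
-- "Deriving an expression from axioms": from hypotheses p and p → q,
-- the rules of logic let us derive q (modus ponens).
theorem modus_ponens (p q : Prop) (hp : p) (hpq : p → q) : q := hpq hp
```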

You can't just chuck a neural network into the mix, because neural networks aren't formally verified to do basically anything beyond matrix multiplication, apart from the fact that they are universal approximators.