So the answer is that you are proving two things:
1. That the model/specification makes sense, i.e. that the properties you care about actually hold within the model and that it behaves as you expect.
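As a toy illustration of that first kind of proof, you might property-test a function from the idealised model. Everything here is hypothetical (the `ideal_average` function stands in for whatever your specification actually defines); a minimal sketch using Hypothesis:

```python
from hypothesis import given, strategies as st

def ideal_average(a: float, b: float) -> float:
    """The idealised model: exact real arithmetic (approximated by floats here)."""
    return (a + b) / 2

@given(st.floats(min_value=-1e6, max_value=1e6),
       st.floats(min_value=-1e6, max_value=1e6))
def test_average_lies_between_inputs(a, b):
    # A sanity property of the model itself: the average never leaves
    # the interval spanned by its inputs.
    assert min(a, b) <= ideal_average(a, b) <= max(a, b)
```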
2. That the SUV/SUT (system under verification/test) corresponds to the model. This encompasses a lot, but what you are really doing here is establishing how your system interacts with the world and with what accuracy it does so. From there you walk along the internal logic of your system, mapping your data representations and algorithms onto a projection of the model with a specified error bound.
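For the second kind, the mapping from model to system starts with how a real-valued model quantity lands in a discrete representation. Here is a minimal sketch; `Bounded` and `quantize` are hypothetical helpers for illustration, not any particular library:

```python
from dataclasses import dataclass

@dataclass
class Bounded:
    """A concrete (discrete) value plus a guarantee: |value - ideal| <= err."""
    value: float
    err: float

def quantize(ideal: float, step: float) -> Bounded:
    # Rounding a real-valued model quantity to a grid of the given step
    # size introduces at most step/2 of error -- the first link in the
    # chain from model to implementation.
    q = round(ideal / step) * step
    return Bounded(q, step / 2)

reading = quantize(3.14159, step=0.01)  # value=3.14, err=0.005
```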
So you are inherently dealing with the discrete nature of the system the entire time, but you can reason about each discrete value as a distribution of possible values that you carry through the system, with each step (sketched in code after this list) either
- introducing some additional error/variability, or
- tightening the error/variability bound while trapping out-of-range values in predictable edge cases.
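Carrying that bound forward step by step might look like this, building on the hypothetical `Bounded` sketch above. `add` and `scale` illustrate the first bullet; `clamp` illustrates the second:

```python
def add(a: Bounded, b: Bounded) -> Bounded:
    # A step that introduces error: worst-case bounds accumulate on addition.
    return Bounded(a.value + b.value, a.err + b.err)

def scale(a: Bounded, k: float) -> Bounded:
    # Multiplying by an exact constant scales the error bound with it.
    return Bounded(a.value * k, abs(k) * a.err)

def clamp(a: Bounded, lo: float, hi: float) -> Bounded:
    # A step that tightens: ASSUMING the ideal value lies in [lo, hi],
    # saturation never increases the error (projection onto the range is
    # non-expansive), and the bound can never exceed the range width.
    # Concrete values that were out of range get trapped at the nearest
    # limit -- a predictable edge case rather than silent garbage.
    v = min(max(a.value, lo), hi)
    return Bounded(v, min(a.err, hi - lo))
```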
Then it's a matter of reasoning about those edge cases and deciding whether they break the usefulness of the system relative to the idealised model.
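Tying the sketches together, the end-to-end question becomes: does the bound you can prove for the discrete pipeline stay within the tolerance at which the idealised model's answer remains useful? (All names below are the hypothetical helpers from the earlier sketches.)

```python
# End-to-end: the discrete pipeline averages two quantized readings.
ideal = ideal_average(3.14159, 2.71828)            # the model's answer
pipe = scale(add(quantize(3.14159, 0.01),
                 quantize(2.71828, 0.01)), 0.5)    # the system's answer

tolerance = 0.02                                   # what the spec calls "useful"
assert abs(pipe.value - ideal) <= pipe.err         # the carried bound is honest
assert pipe.err <= tolerance                       # and tight enough to matter
```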