I know quite a few people in the safety/aviation domain who rather dislike subranges, because they insert run-time checks that are not easily traceable to source code, thus escaping the trifecta of requirements/tests/source code (which must all be traceable to and covered by each other).
Weirdly, when going through the higher assurance levels in aviation, defensive programming becomes more costly, because it complicates satisfying the assurance objectives. SQLite (whose test suite reaches MC/DC coverage, the most rigorous coverage criterion required in aviation) has a nice paragraph on the friction between MC/DC and defensive programming:
https://www.sqlite.org/testing.html#tension_between_fuzz_tes...
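A minimal sketch of that friction, with made-up names: the defensive guard below can never be taken by an in-contract caller, so requirements-based tests cannot drive that decision both ways, and it shows up as uncovered under a structural criterion like MC/DC unless it is justified or removed.

    procedure Store (Buffer : in out String; Index : Positive; Ch : Character) is
    begin
       if Index > Buffer'Last then
          return;  --  defensive guard: "can't happen" for callers that respect the contract
       end if;
       Buffer (Index) := Ch;
    end Store;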
Ideally, a compiler can statically prove that values stay within the range; it's no different than proving that values of an enumeration type are valid. The only places where a check is needed are conversions from other types, which are explicit and traceable.
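A small Ada-flavoured sketch of that point (names are made up): the subtype itself needs no hidden machinery, and the only check sits at the explicit conversion, which is easy to point a requirement or a test at.

    procedure Demo is
       subtype Percent is Integer range 0 .. 100;

       P   : Percent := 42;                     --  in range by construction, provable statically
       Raw : Integer := Integer'Value ("150");  --  stands in for a value of another type
       Q   : Percent;
    begin
       Q := Percent (Raw);  --  explicit, traceable conversion: the one place a range check fires
    end Demo;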
> If you have a: u8 in 0..100 and b: u8 in 0..100, what is the type of c = a + b?
The type of c could be u8 in 0..200. If there are holes in the middle, the same applies. Which means that if you want c to be u8 in 0..100, you'd have to explicitly clamp/convert/request that, and that would have to be a runtime check.

In your example we have enough information to know that the addition is safe. In SPARK, if that were a function with a and b as arguments, for instance, and you don't know what's being passed in, you make it a precondition. That moves the burden of proof to the caller, which has to ensure that the call is safe.
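A sketch of that in SPARK terms, assuming the 0..100 ranges from the example (Percent and Add are made-up names): the precondition turns "the sum stays in range" into an obligation the prover discharges at every call site.

    package Ranged is
       subtype Percent is Integer range 0 .. 100;

       function Add (A, B : Percent) return Percent
         with Pre => A + B <= Percent'Last;  --  burden of proof moves to the caller
    end Ranged;

    package body Ranged is
       function Add (A, B : Percent) return Percent is
       begin
          return A + B;  --  provably within 0 .. 100 under the precondition
       end Add;
    end Ranged;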
But obviously the result of a + b is [0..200], so an explicit cast, or an assertion, or a call to clamp() is needed if we want to put it back into a [0..100].
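For illustration, the three options as they might look in Ada (made-up names), assuming the [0..100] operands from the example:

    procedure Narrow is
       subtype Percent is Integer range 0 .. 100;
       A, B : Percent := 80;
       Sum  : Integer range 0 .. 200 := A + B;  --  the honest type of the result
       C    : Percent;
    begin
       C := Percent (Integer'Min (Sum, Percent'Last));  --  clamp, then convert
       --  or: C := Percent (Sum);   --  explicit checked conversion, may raise Constraint_Error
       --  or: pragma Assert (Sum <= Percent'Last); C := Sum;  --  assert, then assign
    end Narrow;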
Comptime constant expression evaluation, as in your example, may suffice for the compiler to prove that the result lies within the bounds of the type.
That's prohibitively expensive in the general case, when external input is used and/or when arithmetic is done on the values (the main difference from sum types).
But since the number type's value can change at runtime, as long as it stays within the range, it may not always be possible to check at compile time.
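A small sketch of the difference, assuming Ada-style subranges (names are made up): the first initializer is a static expression the compiler can evaluate and check; the second depends on external input, so only a run-time check or a proof can cover it.

    with Ada.Command_Line;

    procedure Bounds is
       subtype Percent is Integer range 0 .. 100;

       Static_OK  : constant Percent := 40 + 50;                  --  checkable at compile time
       From_Input : Percent := Ada.Command_Line.Argument_Count;   --  needs a run-time check or a proof
    begin
       null;
    end Bounds;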
The branch of mathematics you need to compute the bounds of the result of an operation is called Interval Arithmetic [1]. I'm not sure of where its limits are (hah), but at the very least it provides a way to know that [0,2] / [2,4] must be within [0,1].
I see there are some hits for it on libs.rs, but I don't know how ergonomic they are.
[1] https://en.wikipedia.org/wiki/Interval_arithmetic
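A minimal sketch of the idea, with made-up names; dividing a non-negative interval by a strictly positive one takes the smallest and largest of the endpoint quotients:

    procedure Intervals is
       type Interval is record
          Lo, Hi : Float;
       end record;

       --  For non-negative X and Y.Lo > 0:  [a, b] / [c, d] = [a/d, b/c]
       function "/" (X, Y : Interval) return Interval is
       begin
          return (Lo => X.Lo / Y.Hi, Hi => X.Hi / Y.Lo);
       end "/";

       R : constant Interval := (0.0, 2.0) / (2.0, 4.0);
    begin
       pragma Assert (R.Lo >= 0.0 and R.Hi <= 1.0);  --  [0, 2] / [2, 4] lies within [0, 1]
       null;
    end Intervals;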
This is basically excuses being made by C people for the use of a language that wasn't designed for, and isn't suitable for, safety-critical software. "We didn't even need that feature!"
Ada's compile time verification is very good. With SPARK it's even better.
Runtime constraints are removable via a pragma, so there's no tradeoff at all in having them in the language. A single pragma turns them into static-analysis annotations that have no runtime consequences.
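A minimal sketch of that, using the standard Suppress pragma (the unit and names are made up): the subtype still documents the range for readers and static analysis, but no checks are emitted.

    pragma Suppress (Range_Check);  --  configuration pragma: drop run-time range checks

    procedure No_Checks is
       subtype Percent is Integer range 0 .. 100;
       X : Percent := 100;
    begin
       X := X + 1;  --  no check emitted; an out-of-range value is no longer detected at run time
    end No_Checks;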
I like how better, more reliable code is more expensive to certify, and the problem is somehow the code rather than the certification criteria/process being flawed.
> as it inserts run-time checks that are not easily traceable to source code
Modifying a compiler to emit a message at every point where a runtime check is auto-inserted should be pretty simple. If this were really that much of an issue, it would have been addressed by now.