Ideally, a compiler can statically prove that values stay within the range; it's no different than proving that values of an enumeration type are valid. The only places where a check is needed are conversions from other types, which are explicit and traceable.
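For illustration, a minimal Rust sketch of that idea (Rust has no built-in range types, so the invariant lives in a hypothetical Percent newtype, and the only runtime check sits at the conversion boundary):

```rust
use std::convert::TryFrom;

// Hypothetical bounded type simulated with a newtype; the 0..=100
// invariant is enforced only where values enter from a wider type.
#[derive(Clone, Copy, Debug)]
struct Percent(u8); // invariant: 0..=100

impl TryFrom<u8> for Percent {
    type Error = ();
    fn try_from(v: u8) -> Result<Self, Self::Error> {
        // The one place a runtime check is needed: the explicit conversion.
        if v <= 100 { Ok(Percent(v)) } else { Err(()) }
    }
}
```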
If you have a and b as u8 in 0..100 and write c = a + b,
the type of c could be u8 in 0..200. If you have holes in the middle, the same applies. Which means that if you want to make c a u8 in 0..100, you'd have to explicitly clamp/convert/request that, and that would have to be a runtime check. In your example we have enough information to know that the addition is safe. In SPARK, if that were a function with a and b as arguments, for instance, and you don't know what's being passed in, you'd make it a precondition. That moves the burden of proof to the caller, who has to ensure the call is safe.
But obviously the result of a + b is [0..200], so an explicit cast, or an assertion, or a call to clamp() is needed if we want to put it back into a [0..100].
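Continuing the hypothetical Percent newtype from the sketch above, the two options look roughly like this: either let the result keep its wider 0..=200 bound, or narrow it back explicitly.

```rust
// Widen: the tightest bound on the result is 0..=200, which no longer
// satisfies the Percent invariant, so the result is just a u8.
fn add_widened(a: Percent, b: Percent) -> u8 {
    a.0 + b.0
}

// Narrow: an explicit decision by the programmer to clamp the
// 0..=200 result back into 0..=100.
fn add_clamped(a: Percent, b: Percent) -> Percent {
    Percent((a.0 + b.0).min(100))
}
```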
Comptime constant expression evaluation, as in your example, may suffice for the compiler to be able to prove that the result lies in the bounds of the type.
That's prohibitively expensive in the general case, when external input is used and/or arithmetic is done on the values (the main difference from sum types).
But if the number type's value can change at runtime (as long as it stays within the range), it may not always be possible to check at compile time.
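A rough Rust illustration of that split: with compile-time constants the check can be a const assertion that fails the build, while anything coming from external input has to be checked at runtime (read_percent here is made up for the example).

```rust
// Compile-time constants: the compiler evaluates the expression, so a
// violated range fails the build instead of the running program.
const A: u8 = 70;
const B: u8 = 25;
const _: () = assert!(A + B <= 100); // 95 is provably within 0..=100

// Runtime input: nothing to evaluate at compile time, so the range
// check has to execute at runtime.
fn read_percent(input: &str) -> Option<u8> {
    let v: u8 = input.trim().parse().ok()?;
    if v <= 100 { Some(v) } else { None }
}
```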
The branch of mathematics you need to compute the bounds of the result of an operation is called Interval Arithmetic [1]. I'm not sure of where its limits are (hah), but at the very least it provides a way to know that [0,2] / [2,4] must be within [0,1].
I see there are some hits for it on libs.rs, but I don't know how ergonomic they are.
[1] https://en.wikipedia.org/wiki/Interval_arithmetic
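As a hand-rolled sketch (not one of the existing crates), interval division in Rust could look like the following, reproducing the [0,2] / [2,4] within [0,1] example: take all four endpoint quotients and keep the min and max.

```rust
// Minimal interval type for illustration only; real libraries also handle
// division by intervals containing 0, rounding modes, etc.
#[derive(Clone, Copy, Debug)]
struct Interval { lo: f64, hi: f64 }

impl Interval {
    fn div(self, rhs: Interval) -> Interval {
        // Only valid when the divisor interval does not contain 0.
        assert!(rhs.lo > 0.0 || rhs.hi < 0.0);
        let q = [
            self.lo / rhs.lo, self.lo / rhs.hi,
            self.hi / rhs.lo, self.hi / rhs.hi,
        ];
        Interval {
            lo: q.iter().copied().fold(f64::INFINITY, f64::min),
            hi: q.iter().copied().fold(f64::NEG_INFINITY, f64::max),
        }
    }
}

fn main() {
    let a = Interval { lo: 0.0, hi: 2.0 };
    let b = Interval { lo: 2.0, hi: 4.0 };
    // Prints an interval with bounds 0 and 1, matching the example above.
    println!("{:?}", a.div(b));
}
```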