Can you help me understand the context in which this would be far more beneficial than having a validation function, like this one in Java:
int validate(int age) {
    if (age <= 200) return age;
    else throw new IllegalArgumentException("age out of range: " + age);
}
int works = validate(200);
int fails = validate(201);
int hmmm = works + 1;
To elaborate on the sibling's compile-time vs. run-time answer: if it fails at compile time you'll know it's a problem, and you then have the choice of whether or not to enforce that check there.
If it fails at run time, it could be the reason you get paged at 1am because everything's broken.
It’s not just about safety, it’s also about speed. For many applications, constantly checking values at runtime is a bottleneck they don't want.
Like other sibling replies said, subranges (or more generally "refinement types") are more about compile-time guarantees. Your snippet is a good example of a potential footgun: a post-validation operation might unknowingly violate the invariant.
It's a good example for the "Parse, don't validate" article (https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-va...). Instead of creating a function that accepts an `int` and returns an `int` or throws an exception, create a new type that enforces "an `int` less than or equal to 200".
Something like this is possible to simulate with Java's classes, but it's certainly not ergonomic and very much unconventional. It's beneficial if you're trying to create a lot of compile-time guarantees, reducing the risk of doing something like `hmmm = works + 1;`. This kind of compile-time type voodoo requires a different mindset compared to cargo-cult Java OOP. Whether something like this ends up ergonomic or performance-friendly depends on the language's own support for it.
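A minimal sketch of what that could look like in Java (the `Age` class, its 0..200 bounds, and its method names are illustrative assumptions, not from any library):

    // "Parse, don't validate": the only way to get an Age is through parse(),
    // so every Age that exists is already known to be in range.
    public final class Age {
        private final int value;

        private Age(int value) {
            this.value = value;
        }

        public static Age parse(int value) {
            if (value < 0 || value > 200) {
                throw new IllegalArgumentException("age out of range: " + value);
            }
            return new Age(value);
        }

        public int toInt() {
            return value;
        }

        // Arithmetic has to go back through parse(), so a "works + 1" style
        // slip cannot silently produce an out-of-range Age.
        public Age plus(int delta) {
            return parse(value + delta);
        }

        public static void main(String[] args) {
            Age works = Age.parse(200);   // fine
            Age hmmm  = works.plus(1);    // throws here, at the boundary
        }
    }

The difference from a bare `validate` is that once you hold an `Age` you never re-check it, and the only way to break the invariant is to deliberately go back through the parser. A language with real subrange/refinement types gives you the same guarantee without the wrapper-class boilerplate.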
It’s a question of compile time versus runtime.
Yeah it’s something that code would compile down to. You can skip Java and write assembly directly, too.