Ada has some really good ideas which it's a shame never took off or got used outside of the safety-critical community that mostly used it. The ability to define number types that are limited in their range is really useful for avoiding certain classes of bugs. SPARK Ada was a relatively easy subset to learn and apply in order to start developing software that was SIL 4 compliant.

I can't help but feel that we just went through a huge period of growth at all costs, and now, after 30 years of anything goes, there is a desire to return to making software that is safer. It would be nice to build better languages based on all the safety lessons learned over the decades; the good ideas keep getting lost in obscure languages and forgotten about.

Yes, we re-invent the wheel. The more time you spend writing software for a living, the more you will see the wheel re-invented. But Ada and Rust are safe under different definitions of safety. I view Rust as having a narrower but very important definition of safety, executed with brutal focus, while Ada's definition of safety is broader but better suited to a small subset of use cases.

I write Rust at work. I learned Ada in the early 1990s as the language of software engineering. Back then, a lot of the argument against Ada was that it was too big, too complex, and slowed down development too much. (Not to mention the validating Ada 83 compiler I used cost about $20,000 a seat in today's money.) I think the world finally caught up with Ada, and we're recognizing that we need languages every bit as big and complex, like Rust, to handle issues like safe, concurrent programming.

I don't know Ada; care to explain why its definition of safety is broader than Rust's?

I agree Rust's safety is very clearly (and maybe narrowly) defined, but it doesn't mean there isn't focus on general correctness - there is. The need to define safety precisely arises because it's part of the language (`unsafe`).
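
The boundary is also explicit in the source: anything the compiler can't check goes inside an `unsafe` block, ideally hidden behind a safe API. A trivial sketch:

  // Safe wrapper around an unsafe operation: callers never see the unsafe block.
  fn first_byte(bytes: &[u8]) -> Option<u8> {
      if bytes.is_empty() {
          None
      } else {
          // SAFETY: we just checked that index 0 is in bounds.
          Some(unsafe { *bytes.get_unchecked(0) })
      }
  }

  fn main() {
      println!("{:?}", first_byte(b"hi")); // Some(104)
      println!("{:?}", first_byte(b""));   // None
  }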

Rust's built-in notion of safety is intentionally focused on memory and data-race properties at compile time; logic, timing, and determinism are left to libraries and design. Ada (with SPARK and Ravenscar) treats contracts, concurrency discipline, and timing analysis as first-class language/profile concerns, hence a broader safety envelope.

You might think of it as a hierarchy of safety guarantees (bottom = foundation, top = highest assurance), listed here from the top down:

Layer 6: FORMAL PROOFS (functional correctness, no runtime errors). Ada/SPARK: built-in (GNATprove). Rust: external tools (Kani, Prusti, Verus).

Layer 5: TIMING / REAL-TIME ANALYSIS (WCET, priority bounds). Ada: Ravenscar profile + scheduling analysis. Rust: frameworks (RTIC, Embassy).

Layer 4: CONCURRENCY DETERMINISM (predictable schedules). Ada: protected objects + task priorities. Rust: data-race freedom; determinism via design.

Layer 3: LOGICAL CONTRACTS & INVARIANTS (pre/post, ranges). Ada: Pre/Post aspects, type predicates (built-in). Rust: type states, assertions, external DbC tools.

Layer 2: TYPE SAFETY (prevent invalid states). Ada: range subtypes, discriminants. Rust: newtypes, enums, const generics.

Layer 1: MEMORY SAFETY & DATA-RACE FREEDOM. Ada: runtime checks; SPARK proves statically. Rust: compile-time via ownership + Send/Sync.

As the OP mentioned, restricted number ranges:

    with Ada.Text_IO; use Ada.Text_IO;

    procedure Restricted_Number_Demo is

        -- Define a restricted subtype of Integer
        subtype Small_Positive is Integer range 1 .. 100;

        -- Define a restricted subtype of Float
        subtype Probability is Float range 0.0 .. 1.0;

        -- Variables of these restricted types
        X : Small_Positive := 42;
        P : Probability    := 0.75;

    begin
        Put_Line("X = " & Integer'Image(X));
        Put_Line("P = " & Float'Image(P));

        -- Uncommenting the following line would raise a Constraint_Error at runtime
        -- X := 200;

    end Restricted_Number_Demo;
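
For comparison, the closest you can get in stable Rust today is a newtype with a checked constructor; a rough sketch (the `Probability` wrapper is purely illustrative, and the check is a runtime one rather than a built-in range type):

  #[derive(Debug, Clone, Copy)]
  struct Probability(f64);

  impl Probability {
      // The only way to obtain a Probability is through this range check.
      fn new(value: f64) -> Result<Self, String> {
          if (0.0..=1.0).contains(&value) {
              Ok(Probability(value))
          } else {
              Err(format!("{value} is outside 0.0 ..= 1.0"))
          }
      }
  }

  fn main() {
      println!("{:?}", Probability::new(0.75)); // Ok(Probability(0.75))
      println!("{:?}", Probability::new(1.5));  // Err("1.5 is outside 0.0 ..= 1.0")
  }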

Nim was inspired by Ada & Modula, and has subranges [1]:

  type
    Age = range[0..200]

  let ageWorks = 200.Age
  let ageFails = 201.Age
Then at compile time:

  $ nim c main.nim
  Error: 201 can't be converted to Age
[1] https://nim-lang.org/docs/tut1.html#advanced-types-subranges

I know quite a few people in the safety/aviation domain who rather dislike subranges, as they insert run-time checks that are not easily traceable to source code, thus escaping the trifecta of requirements/tests/source code (which all must be traceable to and covered by each other).

Weirdly, when going through the higher assurance levels in aviation, defensive programming becomes more costly, because it complicates the satisfaction of assurance objectives. SQLite (whose test suite reaches MC/DC coverage, the most rigorous coverage criterion asked for in aviation) has a nice paragraph on the friction between MC/DC and defensive programming:

https://www.sqlite.org/testing.html#tension_between_fuzz_tes...

Ideally, a compiler can statically prove that values stay within the range; it's no different than proving that values of an enumeration type are valid. The only places where a check is needed are conversions from other types, which are explicit and traceable.

If you have

    let a: u8 is 0..100 = 1;
    let b: u8 is 0..100 = 2;
    let c = a + b;
The type of c could be u8 in 0..200. If the ranges have holes in the middle, the same applies. Which means that if you want c to be a u8 in 0..100, you have to explicitly clamp/convert/request that, and that has to be a runtime check.

In your example we have enough information to know that the addition is safe. In SPARK, if that were a function with a and b as arguments, for instance, and you don't know what's being passed in you make it a pre-condition. Then it moves the burden of proof to the caller to ensure that the call is safe.

But obviously the result of a + b is [0..200], so an explicit cast, or an assertion, or a call to clamp() is needed if we want to put it back into a [0..100].
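
In today's Rust, without built-in range types, that explicit narrowing step might look like this (a sketch with plain u8; the 0..=100 bound lives only in comments and in the clamp call):

  fn main() {
      let a: u8 = 1; // conceptually 0..=100
      let b: u8 = 2; // conceptually 0..=100
      let sum = a + b;           // conceptually 0..=200
      let c = sum.clamp(0, 100); // explicit, traceable narrowing back into 0..=100
      println!("{c}");
  }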

Comptime constant expression evaluation, as in your example, may suffice for the compiler to be able to prove that the result lies in the bounds of the type.

That's prohibitively expensive in the general case, when external input is used and/or when arithmetic is done on the values (the main difference from sum types).

But the number type's value can change at runtime as long as it stays within the range, so it may not always be possible to check at compile time.

The branch of mathematics you need to compute the bounds of the result of an operation is called Interval Arithmetic [1]. I'm not sure of where its limits are (hah), but at the very least it provides a way to know that [0,2] / [2,4] must be within [0,1].

I see there are some hits for it on lib.rs, but I don't know how ergonomic they are.

[1] https://en.wikipedia.org/wiki/Interval_arithmetic
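
A minimal sketch of the idea in Rust (naive interval division, assuming the divisor interval does not contain zero):

  #[derive(Debug, Clone, Copy)]
  struct Interval { lo: f64, hi: f64 }

  // The result bounds are the min/max of the four endpoint quotients.
  fn div(a: Interval, b: Interval) -> Interval {
      assert!(b.lo > 0.0 || b.hi < 0.0, "divisor interval must not contain zero");
      let q = [a.lo / b.lo, a.lo / b.hi, a.hi / b.lo, a.hi / b.hi];
      Interval {
          lo: q.iter().copied().fold(f64::INFINITY, f64::min),
          hi: q.iter().copied().fold(f64::NEG_INFINITY, f64::max),
      }
  }

  fn main() {
      let r = div(Interval { lo: 0.0, hi: 2.0 }, Interval { lo: 2.0, hi: 4.0 });
      println!("{r:?}"); // Interval { lo: 0.0, hi: 1.0 }
  }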

[deleted]

This is basically excuses being made by C people for using a language that wasn't designed for, and isn't suitable for, safety-critical software. "We didn't even need that feature!"

Ada's compile time verification is very good. With SPARK it's even better.

Runtime constraints are removable via pragma, so there's no trade-off at all with having them in the language. One pragma turns them into static-analysis annotations that have no runtime consequences.

I like how better, more reliable code is more expensive to certify, and somehow the problem is the code and not the certification criteria/process being flawed.

> as it inserts run-time checks that are not easily traceable to source code

Modifying a compiler to emit a message at every point that a runtime check is auto-inserted should be pretty simple. If this was really that much of an issue it would have been addressed by now.

Can you help me understand the context in which this would be far more beneficial than having a validation function, like this in Java:

  int validate(int age) {
    if (age <= 200) return age;
    else throw new Error();
  }

  int works = validate(200);
  int fails = validate(201);

  int hmmm = works + 1;

To elaborate on the sibling's compile-time vs. run-time answer: if it fails at compile time, you'll know it's a problem, and then you have the choice not to enforce that check there.

If it fails at run time, it could be the reason you get paged at 1am because everything's broken.

It's not just about safety, it's also about speed. For many applications, constantly having to check values at runtime is a bottleneck they do not want.

Like other sibling replies said, subranges (or more generally "refinement types") are more about compile-time guarantees. Your code is a good example of a potential footgun: a post-validation operation might unknowingly violate an invariant.

It's a good example for the "Parse, don't validate" article (https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-va...). Instead of creating a function that accepts `int` and returns `int` or throws an exception, create a new type that enforces "`int` less than or equal to 200":

  class LEQ200 {
      private final int value;
      private LEQ200(int value) { this.value = value; }

      static LEQ200 validate(int age) throws Exception {
          if (age <= 200) return new LEQ200(age);
          else throw new Exception();
      }

      LEQ200 add(int n) throws Exception { return validate(value + n); }
  }

  LEQ200 works = LEQ200.validate(200);
  // LEQ200 fails = LEQ200.validate(201); // throws
  // LEQ200 hmmm = works + 1;             // compile error in Java
  LEQ200 hmmm = works.add(1); // throws, or in other languages returns Haskell's Either / Rust's Result
Something like this is possible to simulate with Java's classes, but it's certainly not ergonomic and very much unconventional. This is beneficial if you're trying to create a lot of compile-time guarantees, reducing the risk of doing something like `hmmm = works + 1;`.

This kind of compile-time type voodoo requires a different mindset compared to cargo-cult Java OOP. Whether something like this is ergonomic or performance-friendly depends on the language's own support.

It’s a question of compile time versus runtime.

Yeah it’s something that code would compile down to. You can skip Java and write assembly directly, too.

What happens when you add 200+1 in a situation where the compiler cannot statically prove that this is 201?

Your example also gets evaluated at comptime. For more complex cases I wouldn't be able to tell you, I'm not the compiler :) For example, this gets checked:

  let ageFails = (200 + 2).Age
  Error: 202 can't be converted to Age
If it cannot statically prove it at comptime, it will crash at runtime during the type conversion operation, e.g.:

  import std/strutils

  stdout.write("What's your age: ")
  let age = stdin.readLine().parseInt().Age
Then, when you run it:

  $ nim r main.nim
  What's your age: 999
  Error: unhandled exception: value out of range: 999 notin 0 .. 200 [RangeDefect]

Exactly this. Fails at runtime. Consider rather a different example: say the programmer thought ages were constrained to 110 years. Now, as soon as a person is aged 111, the program crashes. A stupid mistake in a programmer's assumption turns into a program crash.

Why would you want this?

I mean, we've recently discussed on HN how most sorting algorithms have a bug from using ints to index into arrays when they should be using (at least) size_t. Yet, for most cases, it's OK, because you only hit the limit rarely. Why would you want to further constrain the field; would it not just be the source of additional bugs?

Once the program is operating outside the bounds of the programmer's assumptions, it's in an undefined state that may cause a crash at a later point in time, in a totally different place.

Making the crash happen at the same time and space as the error means you don’t have to trace a later crash back to the root cause.

This makes your system much easier to debug at the expense of causing some crashes that other systems might not have. A worthy trade off in the right context.

It's OK for an out-of-bounds exception to crash the program. It's not OK for a user input error to crash the program.

I could go into many more examples, but I hope I am understood. I think these hard-coded range definitions at compile time cause far more issues than they solve.

Let's take a completely different example: the size of a database field for a surname. How much is enough? Turns out a 128-character varchar is not enough, so now they've set it to 2048 (not a project I work(ed) on, but one I am familiar with). Guess what? Not in our data set, but theoretically, even that is not enough.

> Out of bounds exception is ok to crash the program. User input error is not ok to crash the program.

So you validate user input, we've known how to do that for decades. This is a non-issue. You won't crash the program if you require temperatures to be between 0 and 1000 K and a user puts in 1001, you'll reject the user input.

If that user input crashes your program, you're not a very good programmer, or it's a very early prototype.
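
A sketch of that boundary validation in Rust, using the hypothetical 0..=1000 K range from the example above; the bad value becomes an Err to report, not a crash:

  fn parse_temperature(input: &str) -> Result<f64, String> {
      let kelvin: f64 = input.trim().parse().map_err(|e| format!("not a number: {e}"))?;
      if (0.0..=1000.0).contains(&kelvin) {
          Ok(kelvin)
      } else {
          Err(format!("{kelvin} K is outside the accepted range 0..=1000"))
      }
  }

  fn main() {
      println!("{:?}", parse_temperature("300"));  // Ok(300.0)
      println!("{:?}", parse_temperature("1001")); // Err(...) -- rejected, not a crash
  }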

I think, if I am following things correctly, you will find that there's a limit to the "validate user input" argument - especially when you think of scenarios where multiple pieces of user input are gathered together and then have mathematical operations applied to them.

E.g. if the constraint is 0..200 and the user inputs one value that gets multiplied by our constant, it's trivial to ensure the user input is less than the range maximum divided by our constant.

However, if we have to multiply by a second, third, and so on piece of user input, we get to the position where we have to do a division against our currently held value, check that the next piece of user input isn't higher, and then work from there (and this assumes the division hasn't caused an exception, which we also need to ensure doesn't happen, e.g. if we have a divide by zero going on).
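
A sketch of that accumulating check in Rust: each multiplication uses checked arithmetic and is re-validated against the domain range, so any input can fail cleanly instead of silently overflowing (the 0..=200 bound is just the running example):

  fn accumulate(inputs: &[u32], max: u32) -> Result<u32, String> {
      let mut product: u32 = 1;
      for &x in inputs {
          product = product
              .checked_mul(x)        // catches overflow of the machine type
              .filter(|p| *p <= max) // re-checks the domain range after every step
              .ok_or_else(|| format!("input {x} pushes the result past {max}"))?;
      }
      Ok(product)
  }

  fn main() {
      println!("{:?}", accumulate(&[3, 5, 7], 200));    // Ok(105)
      println!("{:?}", accumulate(&[3, 5, 7, 2], 200)); // Err(...)
  }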

[deleted]

I mean, yeah. If you do bad math you'll get bad results and potentially crashes. I was responding to someone who was nonsensically ignoring that we validate user input rather than blindly putting it into a variable. Your comment seems like a non sequitur in this discussion. It's not like the risk you describe is unique to range constrained integer types, which is what was being discussed. It can happen with i32 and i64, too, if you write bad code.

Hmm, I was really pointing at the fact that once you get past a couple of pieces of user input, all the validation in the world isn't going to save you from the range constraints.

Assuming you want a good-faith conversation, the idea that there's bad math involved seems a bit ludicrous.

I believe that the solution here is to make crashes "safe", e.g. with a supervisor process that should either never crash or be resumed quickly, and child processes that handle operations like user inputs.

This works together with the fact that the main benefit of range types is on the consumption side (i.e. knowing that a PositiveInt is not 0), and that it is doable to use try/catch or an equivalent operation at creation time.

For some reason your reply (which I think is quite good) makes me think of the adage "Be liberal in what you accept, and conservative in what you send" (Postel's law).

Speaking as someone who's drunk the Go Kool-Aid: the (general) advice is not to panic when it's a user input problem, only when it's a programmer's problem (which I think is a restatement of your post).

Happens with DB constraints all the time: the user gets an error and at least their session, if not the whole process, crashes. And yes, that too is considered bad code that needs fixing.

[deleted]
[deleted]

> Stupid mistake by a programmer assumption turns into a program crash.

I guess you can just catch the exception in Ada? In Rust you might instead manually check the age validity and return Err if it's out of range. Then you need to handle the Err. It's the same thing in the end.

> Why would you want to further constrain the field

You would only do that if it's a hard requirement (this is the problem with contrived examples, they make no sense). And in that case you would also have to implement some checks in Rust.

Also, I would be very interested to learn the case for a hard requirement on a range.

In almost all the cases I have seen, it eventually breaks out of its confinement. So it has to be handled sensibly. And, again, in my experience, if it's built into constraints, it invariably is not handled properly.

Consider the size of the time step in a numerical integrator of some chemical reaction equation: if it gets too big, the prediction will be wrong and your chemical plant could explode.

So too-big time steps cannot be used, but constant-sized steps are wasteful. It seems good to know the integrator can never quietly be wrong, even if you have to pay the price that the integrator could crash.
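
A toy sketch of that trade-off: the step size is validated once at the call boundary, so the integrator body can rely on it and refuse loudly instead of quietly producing garbage (the 1e-3 cap and the reaction model are invented for illustration):

  fn step(concentration: f64, rate: f64, dt: f64) -> Result<f64, String> {
      const MAX_DT: f64 = 1e-3; // invented stability bound for this toy model
      if !(dt > 0.0 && dt <= MAX_DT) {
          return Err(format!("dt = {dt} is outside (0, {MAX_DT}]"));
      }
      // Explicit Euler step for dC/dt = -rate * C.
      Ok(concentration - rate * concentration * dt)
  }

  fn main() {
      println!("{:?}", step(1.0, 2.0, 5e-4)); // Ok(0.999)
      println!("{:?}", step(1.0, 2.0, 0.1));  // Err(...) -- refuses to be quietly wrong
  }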

[deleted]

Exactly, but how do you catch the exception? One exception catch to catch them all, or do you have to distinguish the types?

And yes... error-handle on the input and you'd be fine. How would you write code that is cognizant enough to catch out-of-range for every +1 done on the field? Seriously, the production code then devolves into copying the value into something else, where operations don't cause unexpected exceptions. Which is a workaround for a silly restriction that should not reside at the runtime level.

> Why would you want this?

Logic errors should be visible so they can be fixed?

How does this work for dynamic casting? Say like if an age was submitted from a form?

I assume it’s a runtime error or does the compiler force you to handle this?

If you're using SPARK, it'll be flagged at compile (analysis) time if there's ever a possibility that the value could fall outside the range. Otherwise it'll raise an exception (Constraint_Error) at runtime for you to catch.

Isn't this just Design by Contract from Eiffel in another form?

No, range types are at best a very limited piece of DbC. Design by Contract lets you state much more interesting things about your program. It's also available in Ada, though.

https://learn.adacore.com/courses/intro-to-ada/chapters/cont...

Ada, or at least GNAT, also supports compile-time dimensional analysis (unit checking). I may be biased, because I mostly work with engineering applications, but I still do not understand why other languages delegate it to third-party libraries.

https://docs.adacore.com/gnat_ugn-docs/html/gnat_ugn/gnat_ug...
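
The minimal version of this in a language without built-in support is newtype wrappers plus operator impls, e.g. in Rust (a rough sketch; dedicated crates such as uom go much further):

  use std::ops::Div;

  #[derive(Debug, Clone, Copy)] struct Meters(f64);
  #[derive(Debug, Clone, Copy)] struct Seconds(f64);
  #[derive(Debug, Clone, Copy)] struct MetersPerSecond(f64);

  // Dividing a length by a time yields a speed; other combinations simply don't compile.
  impl Div<Seconds> for Meters {
      type Output = MetersPerSecond;
      fn div(self, rhs: Seconds) -> MetersPerSecond { MetersPerSecond(self.0 / rhs.0) }
  }

  fn main() {
      let speed = Meters(100.0) / Seconds(9.58);
      println!("{speed:?}");
      // Meters(100.0) + Seconds(9.58) would be a compile error: the units don't line up.
  }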

F# can do this too.

https://learn.microsoft.com/en-us/dotnet/fsharp/language-ref...

A whole zoo of dimensional analysis in programming languages : https://www.gmpreussner.com/research/dimensional-analysis-in...

Nice, didn't know that. I keep seeing praise for F# so I should finally take a look.

One of the best software design books I've read is "Domain Modeling Made Functional" by Scott Wlaschin. It's about F#, but it remains a good read for any programmer, whatever your language. And it's easily understandable; you can almost read it like a novel, without focusing too much. Though what may need some brains is applying the functional concepts of the book in your favourite language...

#f!!

Nim (https://nim-lang.org), mentioned elsethread Re: numeric ranges like Ada, only needs a library for this: https://github.com/SciNim/Unchained

FWIW, physical dimensions like meters were the original apples-to-oranges type system that pre-dates all modern notions of things beyond arithmetic. I'm a little surprised it wasn't added to early FORTRAN. In a different timeline, maybe. :)

I think what is in "the" "stdlib" or not is a tricky question. For most general/general purpose languages, it can be pretty hard to know even the probability distribution of use cases. So, it's important to keep multiple/broad perspectives in mind as your "I may be biased" disclaimer. I don't like the modern (well, it kind of started with CTAN where the micros seemed meant more for copy-paste and then CPAN where it was not meant for that) trend toward dozens to hundreds of micro-dependencies, either, though. I think Python, Node/JS, and Rust are all known for this.

> The ability to make number types that were limited in their range is really useful for certain classes of bugs.

This is a feature I use a lot in C++. It is not part of the standard library but it is trivial to programmatically generate range-restricted numeric types in modern C++. Some safety checks can even be done at compile-time instead of runtime.

It should be a standard feature in programming languages.

I've never come across any range-restricting constructions in C++ projects in the wild before. It truly is a shame; I think it's something more programmers should be aware of and use. Eliminating all bounds checking and passing that job to the compiler is pretty killer and eliminates whole classes of bugs.

This is an unfortunate reality. C++ has evolved into a language capable of surprisingly deep compile-time verification, but almost no one uses that capability. It reflects somewhat negatively on the C++ developer community that problems easily solved within the language are routinely not solved, though the obsession with backward compatibility with old versions of the language plays a role. If you fully leverage it, I would argue that recent versions of C++ are actually the safest systems language. Nonetheless, almost no one has seen code bases that leverage that verification capability to its maximum. Most people have no clue what it is capable of.

There is the wisdom that it is impossible to deliver C++ without pervasive safety issues, for which there are many examples, and on the other hand there are people delivering C++ in high-assurance environments with extremely low defect rates without heroic efforts. Many stories can be written in that gap. C++ can verify many things that are not verifiable in Rust, even though almost no one does.

It mostly isn’t worth the argument. For me, C++20 reached the threshold where it is practical to design code where large parts can be formally verified in multiple ways. That’s great, this has proven to be robust in practice. At the same time, there is an almost complete absence of such practice in the C++ literature and zeitgeist. These things aren’t that complex, the language users are in some sense failing the language.

The ability to codegen situationally specific numeric types is just scratching the surface. You can verify far weirder situational properties than numeric bounds if you want to. I’m always surprised by how few people do.

I used to be a C++ hater. Modern C++ brought me back almost purely because it allows rich compile-time verification of correctness. C++11 was limited but C++20 is like a different world.

> C++ can verify many things that are not verifiable in Rust, even though almost no one does.

Do you have an example of this? I'm curious where C++ exceeds Rust in this regard.

The thing I miss most from Ada is its clear conception of object-orientation. Every other language bundles all of OOP into the "class" idea, but Ada lets you separately opt in to message sending, dynamic dispatch, subtyping, generics, etc. In Ada, those are separate features that interact usefully, rather than one big bundle.

> The ability to make number types that were limited in their range is really useful for certain classes of bugs.

Yes! I would kill to get Ada's number range feature in Rust!

It is being worked on under the term "pattern types", mainly by Oli (oli-obk) Scherer I think, who has an Ada background.

Can't tell you what the current state is but this should give you the keywords to find out.

Also, here is a talk Oli gave in the Ada track at FOSDEM this year: https://hachyderm.io/@oli/113970047617836816

AFAIK the current status is that it's internal to std (used to implement `NonNull` and friends) and not planned to be exposed.

There has been some talk about general pattern types, but it's not even approved as an experiment, let alone an RFC or stabilization.

That feature is actually from Pascal, and Modula-2, before making its way into Ada.

For some strange reason people always associate it with Ada.

I would guess that Ada is simply more known. Keep in mind that tech exploded in the past ~3.5 decades whereas those languages are much older and lost the popularity contest. If you ask most people about older languages, the replies other than the obvious C and (kind of wrong but well) C++ are getting thin really quickly. COBOL, Ada, Fortran, and Lisp are probably what people are aware of the most, but other than that?

You've forgotten about BASIC, SNOBOL, APL, Forth, and PL/1. There were many interesting programming languages back then. Good times!

The first five languages I learned back in the 70s: FORTRAN, Pascal, PL/I, SNOBOL, APL. Then I was an Ada and Icon programmer in the 80s. In the 90s, it was C/C++ and I just never had the enthusiasm for it.

Icon (which came from SNOBOL) is one of the few programming languages I consider to embody truly new ideas. (Lisp, Forth, and Prolog are others that come to mind.)

Icon is an amazing language and I wish it was better known.

You probably know this but, for anyone else who is interested, the easiest way to get a feel for Icon nowadays may be through its descendant Unicon which is available at unicon.org.

I found Pascal more readable as a budding programmer. Later on, C's ability to just get out of the way and let me program what I wanted trumped Pascal's verbosity and opinionatedness.

I admit that the terseness of C's syntax can be off-putting. Still, it's just syntax; I am sorry you were dissuaded by it.

True.

I dabbled in some of them during some periods when I took a break from work. And also some, during work, in my free time at home.

Pike, ElastiC (not a typo), Icon, Rebol (and later Red), Forth, Lisp, and a few others that I don't remember now.

Not all of those are from the same period, either.

Heck, I can even include Python and Ruby in the list, because I started using them (at different times, with Python being first) much before they became popular.

For me it's because I learned Ada in college.

18 year old me couldn't appreciate how beautiful a language it is but in my 40s I finally do.

2000-2005 College was teaching Ada?

2005-2010: the most interesting language (in this direction) at my college was Haskell. I don't think there was any other language (like Ada) being taught.

Yes, I learned it in a course that surveyed a bunch of different languages like Ada, Standard ML, and Assembly

Ada is sometimes taught as part of a survey course in Programming Languages. That’s how I learned a bit about it.

Turbo Pascal could check ranges on assignment with the {$R+} directive, and Delphi could check arithmetic overflow with {$Q+}. Of course, nobody wanted to waste the cycles to turn those on :)

Most Pascal compilers could do that actually.

Yeah not wanting to waste cycles is how we ended up with the current system languages, while Electron gets used all over the place.

I would argue that was one of the reasons why those languages lost.

I distinctly remember arguments for functions working on array of 10. Oh, you want array of 12? Copy-paste the function to make it array of 12. What a load of BS.

It took Pascal years to drop that constraint, but by then C had already won.

I never ever wanted the compiler or runtime to check a subrange of ints. Ever. Overflow as a program crash would be better, and that I do find useful, but arbitrary ranges chosen by the programmer? No thanks. To make matters worse, those are checked even for intermediate results.

I realize this opinion is based only on my experience, so I would appreciate a counter-example where it is a benefit (and yes, I worked on production code written in Pascal, a French variant even, and after migrating it to C the code was hilariously more readable and maintainable).

> I never ever wanted the compiler or runtime to check a subrange of ints. Ever.

Requiring values to be positive, requiring an index to fall within the bounds of an array, and requiring values to be non-zero so you never divide by zero are very, very common requirements and a common source of bugs when the assumptions are violated.
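
Rust's standard library already ships the non-zero case as a proper type, which gives a feel for what the general feature buys (a sketch; NonZeroU32 is real, the positive/index cases still need hand-rolled newtypes):

  use std::num::NonZeroU32;

  fn per_item_cost(total: u32, count: NonZeroU32) -> u32 {
      total / count // count can never be zero, so this division can't hit divide-by-zero
  }

  fn main() {
      // The zero case is rejected once, at the boundary, instead of being re-checked everywhere.
      let count = NonZeroU32::new(4).expect("count must be non-zero");
      println!("{}", per_item_cost(100, count)); // 25
  }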

Thankfully instead of overflow, C gets you the freedom of UB based optimizations.

Funny :)

It still results in overflow and while you are right that it's UB by the standard, it's still pretty certain what will happen on a particular platform with a particular compiler :)

No, optimizing compilers don't translate overflow to platform-specific behavior for signed integers - since it's UB they'll freely make arithmetic or logic assumptions that can result in behavior that can't really be humanly predicted without examining the generated machine code.

They are free to but not required. You can pick a different compiler, or you can configure your compiler to something else, if it provides such options.

I always found it surprising that people did not reject clang for aggressively optimizing based on UB, but instead complained about the language while still using clang with -O3.

Programmers don’t have much choice, since most compilers don’t really provide an option / optimization level that results in sane behavior for common UB footguns while providing reasonable levels of performance optimization.

The one exception I know of is CompCert but it comes with a non-free license.

I definitely do think the language committee should have constrained UB more to prevent standards-compliant compilers from generating code that completely breaks the expectations of even experienced programmers. Instead the language committees went the opposite route, removing C89/90 wording from subsequent standards that would have limited what compilers can do for UB.

The C89/C90 wording change story is a myth. And I am not sure I understand your point about CompCert. The correctness proof of CompCert covers programs that have no UB. And programmers do have some choice and also some voice. But I do not see them pushing for changes a lot.

The choice is going for other languages, because they don't believe WG14 or WG21 will ever sort this out, as many are doing nowadays.

This is my point: programmers apparently fail to understand that they would need to push for changes at the compiler level. The committee is supposed to standardize what exists; it has no real power to change anything against the will of the compiler vendors.

FYI, all major C compilers have flags to enforce the usual two's-complement rollover, even if you enable all optimizations. I always enable at least -fwrapv, even when I know that the underlying CPU has well-defined overflow behavior (gcc knows this, so the flag presumably becomes a no-op, but I've never validated that thought).

gcc has -fwrapv and -f[no-]strict-overflow, clang copied both, and MSVC has had a plethora of flags over the years (UndefIntOverflow, for example) so your guess is as good as mine which one still works as expected.

compile time user config checking?

Sorry? That's not possible...

I've seen it plenty of times. Safety-critical controllers have numeric bounds of stability; why wouldn't you want to encode that into the type?

There is an RFC, but I guess the work stopped.

As a sibling comment[0] mentioned, pattern types are actively being worked on.

[0] https://news.ycombinator.com/item?id=45474777

Oh. I thought it had stalled, since there was a long time without activity.

In my personal experience it's not just safety. Reliability of the produced software is also a big part.

IME, being able to express constraints in a type system lends itself to producing better-quality code. A simple example from my experience with Rust and Go is mutex handling: Rust just won't let you leak a guard handle, while Go happily lets you run into a deadlock.
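
A minimal sketch of what that looks like on the Rust side: the data is only reachable through the guard, and dropping the guard releases the lock, so there's no separate unlock call to forget.

  use std::sync::Mutex;

  fn main() {
      let counter = Mutex::new(0u32);
      {
          let mut guard = counter.lock().unwrap(); // the data is only accessible via the guard
          *guard += 1;
      } // guard dropped here: the lock is released automatically
      println!("{}", counter.lock().unwrap());
  }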

>Ada has some really good ideas which its a shame never took off or got used outside of the safety critical community that mostly used it. The ability to make number types that were limited in their range is really useful for certain classes of bugs.

As pjmlp says in a sibling comment, Pascal had this feature, from the beginning, IIRC, or from an early version - even before the first Turbo Pascal version.

If I am not wrong, you could do a zero-cost abstraction in C++ and use user-defined literals if you wish for nice syntax.

It doesn't really compete in the same space as Ada or Rust, but C# has range attributes that are similar; the only downside is you have to manually call the validation function unless you are using something like ASP.NET that does it automatically at certain times.

30+ years ago I was programming in Ada, and I feel the same way and have been repeatedly disappointed. Maybe this time around things will be different.