Their example of why Ada has better strong typing than Rust is that you can have floats for miles and floats for kilometers and not get them mixed up. News flash: Rust has newtype structs, and you can do basically the same thing in C++.
I don't know much about Ada. Is its type system any better than Rust's?
This was posted here about a day ago: https://github.com/johnperry-math/AoC2023/blob/master/More_D...
But a noteworthy excerpt:

```
Ada programs tend to define types of the problem to be solved. The compiler then adapts the low-level type to match what is requested. Rust programs tend to rely on low-level types.

That may not be clear, so two examples may help:

By contrast, the Rust programs I've seen tend to specify types in terms of low-level, machine types. Thus, I tried to address the same problem using an f64. In this particular case, there were repercussions, but usually that works fine as long as you know what the machine types can do. You can index Rust types with non-integers, but it takes quite a bit more work than Ada.
```
> By contrast, the Rust programs I've seen tend to specify types in terms of low-level, machine types.
This seems to be an artifact of the domains Rust is currently used in. I don't think it's anathema to Rust to evolve to add some of these features. `char`-indexed arrays are something I've used a lot (mostly via `c - 'a'`), but native support for them would be nice.
You can use TiVec to index with something other than usize. https://crates.io/crates/typed-index-collections
Ada's mechanism is what Fortran has been using and doing for decades.
F'77 added arbitrary lower bounds on arrays, including explicit-shape and assumed-size dummy arrays. It is a useful and portable feature, though somewhat confusing to newcomers when they pass an array with non-default lower bounds as an actual argument and it doesn't behave as one would expect.
F'90 added arbitrary lower bounds on assumed-shape dummy arrays, as well as on allocatables and pointers. Still pretty portable, though more confusing cases were added. F'2003 then added automatic (re)allocation of allocatables, and the results continue to astonish users. And only two compilers get them right, so they're not portable, either.
Ada's array indexing is part of its type system. Fortran's is not (for variables).
You would very rarely actually want scalar types that don't map directly to hardware-supported ones anyway.
You can actually do this in C as well. The Windows API has all sorts of handle types that were originally all one type, HANDLE; by wrapping a HANDLE in various one-member structs, they were able to derive different handle types that can't be intermixed with each other in a type-safe way without some casting jiggery-pokery.
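A minimal sketch of that trick, with made-up handle names standing in for the real Windows typedefs:

```cpp
#include <cassert>

// Two distinct one-member wrappers around the same underlying pointer.
// WindowHandle and BrushHandle are hypothetical stand-ins for HWND/HBRUSH.
struct WindowHandle { void* raw; };
struct BrushHandle  { void* raw; };

// Accepts only WindowHandle; passing a BrushHandle fails to compile.
bool is_valid_window(WindowHandle h) { return h.raw != nullptr; }

// Usage:
//   WindowHandle w{ptr};
//   is_valid_window(w);               // fine
//   is_valid_window(BrushHandle{p});  // compile error, no conversion
```

The wrappers add no runtime cost; the structs have the same layout as the raw pointer, so only the compile-time identity changes.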
It's just much, much easier and more ergonomic in Ada.
Fun fact that many are not aware of, mostly because this is Windows 3.x knowledge and you needed the right source to learn about it.
There was a header-only library in the Windows SDK that wrapped those HANDLEs into more specific types that were still compatible, while providing a higher-level API for using them from C.
Unfortunately there is not much left on the Internet about it, but this article provides some insight:
https://www.codeguru.com/windows/using-message-crackers-in-t...
Naturally it was saner just to use TP/C++ alongside OWL, C++ with MFC back then, or VB.
Yes and no. You need to look deeper into Ada to find that it can give compile-time guarantees stronger than what you get from structs named km and miles.
There is no elegant solution in Rust to make something like a range-constrained numeric type, at the least one that is unsigned (though there are no unsigned hardware floats). If you said temperature in Celsius, your range starts at -273.15, and you want errors of some sort to happen if you go below that.
Ideally, the program would halt on the spot.
Newtypes are not as good as native low-level types. After writing a lot of code, you'll find you need nightly to get decent integration and to avoid casting to the low-level type and back all the time.
I'm super interested in how you can do this in C++. Say I need an aggregate struct with a few 16- and 32-bit fields, some little-endian and some big-endian. I do not want C++ to let me mix up endianness. How do I do it?
C: struct be32_t { uint32_t _; }; struct le32_t { uint32_t _; };
C++: That, but with a billion operator overloads and conversion operators so they feel just like native integers.
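A sketch of what those wrappers might look like with explicit conversions between them (member and function names are made up; the byte swap is a plain portable shuffle):

```cpp
#include <cstdint>

// Two distinct one-member types: same bits, different byte-order meaning.
struct le32_t { std::uint32_t raw; };  // bytes stored little-endian
struct be32_t { std::uint32_t raw; };  // bytes stored big-endian

// Reverse the four bytes of a 32-bit value.
constexpr std::uint32_t bswap32(std::uint32_t v) {
    return (v >> 24) | ((v >> 8) & 0x0000FF00u)
         | ((v << 8) & 0x00FF0000u) | (v << 24);
}

// Conversions must be spelled out; assigning le32_t to be32_t won't compile.
constexpr be32_t to_big(le32_t v)    { return be32_t{bswap32(v.raw)}; }
constexpr le32_t to_little(be32_t v) { return le32_t{bswap32(v.raw)}; }
```

Both types can still sit in an aggregate struct and be brace-initialized member by member, which addresses the question above.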
In C++ you could probably even make a templated class that implements all possible operators for any type that supports them, constrained with concepts. Then you can just `using kilometer = unique_type<uint32_t, "kilometer">;` without needing to create a custom type each time.
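A sketch of that idea using tag structs rather than a string non-type parameter (which needs C++20 and a structural wrapper class); `unique_type` is a made-up name, not a standard utility:

```cpp
#include <cstdint>

// Strong typedef: the Tag parameter makes each alias a distinct type,
// so kilometer and mile values cannot be mixed by accident.
template <typename T, typename Tag>
struct unique_type {
    T value;
    friend constexpr unique_type operator+(unique_type a, unique_type b) {
        return unique_type{static_cast<T>(a.value + b.value)};
    }
    friend constexpr bool operator==(unique_type a, unique_type b) {
        return a.value == b.value;
    }
};

struct KilometerTag {};
struct MileTag {};
using kilometer = unique_type<std::uint32_t, KilometerTag>;
using mile      = unique_type<std::uint32_t, MileTag>;

// kilometer{3} + kilometer{4} compiles; kilometer{3} + mile{4} does not.
```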
Though if you do that, km times km isn't km: it's an area. So your custom type would be wrong to have all operations, and it isn't clear from a bare tag what unit km times km should produce.
These libraries already exist. God how people underestimate C++ all the time.
Of course you can use a unit type that handles conversions AND mathematical operations. Feet to meter cubed and you get m³, and the library will throw a compile error if you try to assign it to anything it doesn't work with (liters would be fine, for example)
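A hand-rolled sketch of the dimension-tracking idea (not one of the real libraries such as mp-units): the length exponent lives in the type, so multiplication yields the correctly-dimensioned result and mismatched assignments fail to compile.

```cpp
// quantity<N> is a value in metres^N: quantity<1> is a length,
// quantity<2> an area, quantity<3> a volume.
template <int LengthExp>
struct quantity {
    double value;
};

using length = quantity<1>;
using area   = quantity<2>;
using volume = quantity<3>;

// Multiplication adds exponents: length * area -> volume.
template <int A, int B>
constexpr quantity<A + B> operator*(quantity<A> x, quantity<B> y) {
    return quantity<A + B>{x.value * y.value};
}

// Unit conversion happens at the boundary: feet stored as metres.
constexpr length from_feet(double ft) { return length{ft * 0.3048}; }

// volume v = from_feet(1.0) * from_feet(1.0);  // compile error: that's area
```

This only tracks one dimension; a real library carries exponents for mass, time, and so on, which is where the complexity (and the trade-offs discussed below) comes from.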
I know of about 7 different libraries, 5 of them private to my company (of which 4 are not in use). Every one takes a fundamentally different approach to the problem.
> Feet to meter cubed and you get m³, and the library will throw a compile error if you try to assign it to anything it doesn't work with (liters would be fine, for example)
Liters would not be fine if you are using standard floating point values since you lose precision moving decimal points in some cases. Maybe for your application the values are such that this doesn't matter, but without understanding your problem in depth you cannot make the generic statement.
I could write books (I won't but I could) on all the compromises and trade offs in building a unit type library.
As a more general rant - people who have maybe used 5% of the feature set of C++ come along and explain why language X is superior because it has feature Y and Z.
News flash, C++ has every conceivable feature, it's the reason why it is so unwieldy. But you can even plug in a fucking GC if you so desire. Let alone stuff like basic meta programming.
GC was removed from the C++ standard in C++23 because all the compiler vendors were like "hell no", and since it was an optional feature they could get away with not implementing it. So this optional feature never actually existed in practice, and it was removed in a later standard.
The C++ standard has never included a garbage collector. It only provided mechanisms intended to facilitate the implementation of a GC, but they were useless.
There are ways to do GC without language support. They are harder, but have been around in various forms for decades. They have never caught on though.
Thankfully some folks have already thought that through; one possible library:
https://mpusz.github.io/mp-units/latest/
I have seen several versions. I wrote two different ones myself, both no longer in use because the real world of units turned out to be far more complex. The multiplication thing is one simple example of the issues, but not a complete list.
I guess I should have said `unique_type<uint32_t, "meter", std::ratio<1000, 1>>` then :) Then you can do the same as std::chrono::duration does.
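A sketch of that chrono-style approach; `length` and `length_cast` are made-up names mirroring `std::chrono::duration` and `duration_cast`:

```cpp
#include <cstdint>
#include <ratio>

// Rep is the storage type; Ratio is the scale relative to one metre,
// exactly like std::chrono::duration's tick period.
template <typename Rep, typename Ratio>
struct length {
    using rep = Rep;
    using ratio = Ratio;
    Rep count;
};

using metres     = length<std::int64_t, std::ratio<1>>;
using kilometres = length<std::int64_t, std::ratio<1000>>;

// Convert by multiplying out the ratio difference, computed at compile time.
template <typename To, typename Rep, typename Ratio>
constexpr To length_cast(length<Rep, Ratio> from) {
    using conv = std::ratio_divide<Ratio, typename To::ratio>;
    return To{static_cast<typename To::rep>(
        from.count * conv::num / conv::den)};
}
```

As with chrono, requiring an explicit `length_cast` for lossy directions is one of the trade-off decisions: the integer division above silently truncates, which may or may not be acceptable in a given domain.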
Now write a book on the various trade-offs from that decision. There is no perfect general answer; some domains have specific needs that are different. Depending on your domain, that might be a good choice or it might be terrible.
who said `operator*` needs to return the same type as its parameters?
operator* can return exactly one type for a given pair of operands. You can choose which, but metric offers many possible choices, and with floating-point math on computers you will lose precision converting between them in some cases, so you need to take care to get the right one for your users, and it will not be the same for all users.
One return type, for any given combination of parameter types, not to mention the possibility of templating to manipulate the return type….
See, more trade offs...
honestly, I’m not seeing the problem you’re seeing
C++ really needs something like `using explicit kilometer = uint32_t;`
The 'explicit' would force you to use static_cast
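A scoped enum with a fixed underlying type gets fairly close to that hypothetical `using explicit` today, since every conversion already requires a static_cast:

```cpp
#include <cstdint>

// A scoped enum is a distinct type with no implicit conversion to or from
// its underlying type, so mixing units needs a deliberate cast.
enum class kilometer : std::uint32_t {};
enum class mile      : std::uint32_t {};

constexpr std::uint32_t raw(kilometer k) {
    return static_cast<std::uint32_t>(k);
}

// kilometer k = 5;                          // error: no implicit conversion
// kilometer k = mile{3};                    // error: distinct types
// kilometer k = static_cast<kilometer>(5);  // OK, cast is explicit
```

The downside is that arithmetic operators aren't defined for scoped enums, so you end up writing the overloads by hand anyway, which is what the strong-typedef templates above automate.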
There's several libraries, including some supporting units and mathematical operations yielding the correct result types.
And as usual, it mostly comes with zero overhead, beyond optional runtime range checking and unit conversions.
But C++ is a meta-programming language. Making up your own types with full operator overloading and implicit and explicit conversions is rather easy.
And the Ada feature of automatically selecting a suitable type under the hood isn't actually that useful, since computers don't really handle that many basic types at the hardware level. (And just to be clear, C++ templates can do the same either way.)
But do these libraries allow using values in aggregates (i.e. structs that can be initialized by listing members in {}) while preventing endianness errors?
Aside from technical factors, there are social factors involved. For example, both Python and C++ have operator overloading. But in C++ that's horrible and you run screaming from it, while in Python land it's perfectly fine. What is the difference? Culture and taste.
It isn't the same operator overloading.
In C++ operator overloading can easily mess with fundamental mechanisms, often intentionally; in Python it is usually no more dangerous than defining regular functions and usually employed purposefully for types that form nice algebraic structures.
I hardly see the difference, given the capabilities of operator overloading in Python.
This type confusion would have been identical with a plain function; __add__ is only syntactic sugar.
Compare with, for example, fouling the state of input streams from operator>> in C++. Hardly any different; trying to pretend Python is somehow better doesn't hold up.
Operator overload is indeed syntactic sugar for function calls, regardless of the language.
By the way, you can overload >> in Python via the __rshift__() method.
Of course I can overload >> in Python, but I cannot foul up output stream state because it doesn't exist. Formally there is little difference between C++ and Python operator overloading and both languages have good syntax for it, but C++ has many rough edges in the standard library and intrinsic complications that can make operator overloading much more interesting in practice. For instance, overload resolution is rarely trivial.
It is only one pip install away, if anyone bothers to make one such set of overloads.
People don't though. That's the big difference. There's a certain taste in the Python community.
It's the exact same thing, except in Python the community largely has taste. In C++, `cout << "foo"` exists in the standard library.
I love how, among a certain set, the word "taste" has become an all-purpose substitute for having an argument or making a case. It basically means "I have more social media followers than you do, so I'm right."
I believe the C++ community as a whole is quite convinced that overloading << for stdout was a mistake.
> Culture and taste.
You mean accumulated prejudices, myths, and superstitions that most in any given community (programming language related or not) won't challenge for fear of being cast out of the group for heresy.
Err... no, I mean the good taste not to overload << for console output. There's no fear of being cast out; don't be silly.