It really feels like we’re solving the wrong problem sometimes. If a bad type can crash your application, sure, type safety is one answer, but I have to admit I like the Erlang approach: if something unexpected happens, crash the process (the Erlang process, not the OS process), which has a very small blast radius in a well-architected system (it may not even fail the individual request that caused it). I wish more languages had this let-it-crash philosophy. It allows you to write code almost exclusively for the happy path, safe in the knowledge that a -1 where a “string” should be isn’t going to take down production.
Somehow, it feels like a better solution than these complicated type systems. Does any other language do this outside BEAM?
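A rough sketch of the restart idea in TypeScript (this is not real BEAM semantics — there's no preemptive scheduling or memory isolation here, and all names are made up), just to show how a supervisor contains a crash instead of letting it propagate:

```typescript
// Toy supervisor: restarts a crashing worker up to a limit,
// keeping the blast radius to one task instead of the whole service.
type Worker = () => void;

function supervise(worker: Worker, maxRestarts: number): number {
  let restarts = 0;
  while (restarts <= maxRestarts) {
    try {
      worker();
      return restarts; // worker finished normally
    } catch {
      restarts++; // crash is contained: log it, then restart the worker
    }
  }
  // Past the limit, escalate to the parent supervisor instead of looping.
  throw new Error("restart limit exceeded");
}

// A worker that crashes twice before succeeding.
let attempts = 0;
const flaky: Worker = () => {
  attempts++;
  if (attempts < 3) throw new Error("boom");
};

const restarts = supervise(flaky, 5); // → 2 restarts, then success
```

The key design point mirrored from OTP is that the *caller* of `supervise` never sees the worker's crashes unless the restart budget is exhausted.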
When working on large, important software, crashing is not the worst thing that can happen; corrupting user data and/or allowing unauthorized access is.
The point of using the type system to do something like distinguish between sanitized and unsanitized strings is specifically to prevent these kinds of security breaches.
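For instance, a "branded" string type in TypeScript can make this distinction at compile time (a minimal sketch — the names `Sanitized`, `sanitize`, and `render` are illustrative, and the escaping here is not a production sanitizer):

```typescript
// A string the compiler treats as distinct from plain string.
type Sanitized = string & { readonly __brand: "Sanitized" };

function sanitize(raw: string): Sanitized {
  // Minimal HTML escaping, for illustration only.
  return raw
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;") as Sanitized;
}

// Sinks accept only Sanitized, never raw user input.
function render(html: Sanitized): string {
  return `<p>${html}</p>`;
}

render(sanitize("<script>alert(1)</script>")); // OK
// render("<script>alert(1)</script>");        // compile error
```

Crashing at runtime can't catch the second call; the type checker rejects it before the code ever runs.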
Erlang was designed for traditional telecom, where reliability of connections was the biggest factor, not security. I fail to see how Erlang’s approach can deal with the issue of security breaches or corrupted user data.
Or have a static type system and something like BEAM. I'm not sure why this is treated as an either/or choice; both are useful, and unfortunately no language seems to include both. Gleam exists but doesn't really integrate with BEAM; it seems to have its own way of doing things that is more akin to Haskell, given its origins.
I thought Gleam was fully integrated with OTP? You’re telling me you can’t do a gen_server or a supervisor in Gleam?
In a way I agree with you, and I'm not sure any popular language embraces this philosophy or makes it easy to follow. My sense is that Erlang is still the leader.
But I did want to add something the article also touches on: types can be not only about ensuring safety or correctness at runtime, but also about representing knowledge by encoding the theory of how the code is supposed to work as far as is practical, in a way that is durable as contributors come and go from a codebase.
Admittedly this can come at the cost of making it slower to experiment on or evolve the code, so you have to think about how strongly you want to enforce something to avoid the rigidity being more painful than valuable. But it's generally a win for helping someone new to a codebase understand it before they change it.
Edit: another thought I had is that type mistakes do not always cause crashes. Silent corruption can be much more insidious, e.g. from confusing types which mean something different but are identical at the primitive level (say, two different IDs that are both strings, or a UUID stored as a plain string).
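A concrete sketch of that failure mode (the `UserId`/`OrderId`/`refund` names are made up): two IDs that are both strings at runtime, so swapping them corrupts data silently rather than crashing. Branding them makes the swap a compile error:

```typescript
// Identical at the primitive level, distinct to the type checker.
type UserId = string & { readonly __tag: "UserId" };
type OrderId = string & { readonly __tag: "OrderId" };

function refund(user: UserId, order: OrderId): string {
  return `refunding ${order} to ${user}`;
}

const u = "u-123" as UserId;
const o = "o-456" as OrderId;

refund(u, o);    // OK
// refund(o, u); // compile error; at runtime it would "work" and corrupt data
```

A let-it-crash runtime never notices the second call, because nothing about it looks wrong at the primitive level.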
>if something unexpected happens crash the process
There are some expectations where that's a reasonable response to a violation, but there are many expectations where the violation implies a bug elsewhere, and crashing the process does nothing to address that bug that wouldn’t have been better accomplished with stronger compile-time checking.
For me this has been a lifesaver as the only back-end developer at the company. I don’t have the energy or time to think about every possible scenario, especially not the mobile client sending random strings to something that should be parsed as a UUID (has happened more than once). By letting it crash I can look at the traces at my own leisure, and a lot of them I never fix, because I don’t have to.
The amount of error silencing (implementer error, but quite prevalent) I’ve seen in TypeScript codebases is horrifying. Essentially ”try happy path, catch everything else and return generic error”. The result is mostly the same for the user, but night and day for me, who is trying to fix it.
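The contrast looks roughly like this (a sketch — `parseUuid` and friends are hypothetical helpers, not a real library API): the catch-all version destroys the cause, while returning an explicit result keeps the trace debuggable:

```typescript
type Result<T> = { ok: true; value: T } | { ok: false; error: string };

const UUID_RE =
  /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

// Antipattern: try happy path, swallow everything, return a generic error.
function parseUuidSilenced(s: string): string {
  try {
    if (!UUID_RE.test(s)) throw new Error(`not a uuid: ${JSON.stringify(s)}`);
    return s;
  } catch {
    return "error"; // the actual cause is gone forever
  }
}

// Better: the failure detail survives for whoever reads the trace later.
function parseUuid(s: string): Result<string> {
  if (!UUID_RE.test(s)) {
    return { ok: false, error: `not a uuid: ${JSON.stringify(s)}` };
  }
  return { ok: true, value: s };
}
```

Either propagate the error (and let something supervisor-like log the crash) or return it with its details; the one thing not to do is flatten every failure into the same opaque value.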