My peers and I work on a language centered around "constructive data modeling" (first time I've heard it called that). We implement integers, and indeed things like non-empty lists, using algebraic data types, for example. You can both have a theory of values that doesn't rely on trapdoors like "int32" or "string" and encode invariants, as this article covers.
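A minimal sketch of what those constructive definitions might look like in Haskell (the names and the Peano encoding are illustrative assumptions, not how our language actually does it):

```haskell
-- A natural number is zero or the successor of a natural number:
-- no "int32" trapdoor, just an algebraic data type.
data Nat = Zero | Succ Nat

-- Convert to a machine integer only at the boundary, e.g. for display.
toInt :: Nat -> Int
toInt Zero     = 0
toInt (Succ n) = 1 + toInt n

-- A non-empty list always carries at least a head, so emptiness is
-- unrepresentable and 'neHead' needs no partiality.
data NonEmpty a = a :| [a]

neHead :: NonEmpty a -> a
neHead (x :| _) = x
```

The point is that the invariant ("at least one element", "a whole number") lives in the shape of the type itself, not in runtime checks.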
As I understand it, the primary purpose of newtypes is actually just to work around typeclass issues like the ones in the examples at the end of the article. They are specifically designed to be zero-cost, because you don't want to pay a runtime penalty just to work around the instance already being taken for the type you want. Making an abstract data type by not exporting the data constructors can be done with or without newtype.
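The classic example of that workaround: Int can only have one Monoid instance, so you wrap it in newtypes to get both the additive and the multiplicative one. A sketch (Data.Monoid ships real Sum/Product types for this; the primed names below are just to avoid clashing with them):

```haskell
-- Two zero-cost wrappers around Int, each carrying its own instance.
newtype Sum'     = Sum'     { getSum'     :: Int }
newtype Product' = Product' { getProduct' :: Int }

instance Semigroup Sum' where
  Sum' a <> Sum' b = Sum' (a + b)
instance Monoid Sum' where
  mempty = Sum' 0

instance Semigroup Product' where
  Product' a <> Product' b = Product' (a * b)
instance Monoid Product' where
  mempty = Product' 1
```

Usage is just `getSum' (foldMap Sum' [1, 2, 3])` versus `getProduct' (foldMap Product' [1, 2, 3])`: same underlying Int, different instance picked by the wrapper, no boxing cost at runtime.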
The alternative to newtypes is probably to go the same route as OCaml and have people explicitly bring their own instances for typeclasses, instead of allowing each type only one instance?
I think OCaml calls these things modules, or something like that. But the concepts are similar. In most cases, when there's one obvious instance you want, having Haskell pick it for you is less of a hassle.
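To make the comparison concrete, here is a hypothetical Haskell sketch of the "bring your own instance" style: instead of one blessed instance per type, you pass an ordinary record of operations around explicitly, roughly what OCaml programmers do with modules and functors.

```haskell
-- A hand-rolled "instance" as a plain record (an explicit dictionary).
data Ord' a = Ord' { cmp :: a -> a -> Ordering }

-- Two orderings for the same type coexist with no instance conflict.
ascInt, descInt :: Ord' Int
ascInt  = Ord' compare
descInt = Ord' (flip compare)

-- Functions take the dictionary they need as an explicit argument.
maxBy :: Ord' a -> a -> a -> a
maxBy d x y = if cmp d x y == LT then y else x
```

The flexibility is real (`maxBy ascInt` and `maxBy descInt` disagree on purpose), but so is the ergonomic cost: every call site has to name the dictionary, which is exactly the busywork typeclass resolution does for you.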
Yes, I may have accidentally given that away, but I prefer ML modules to typeclasses.
I like ML-style modules in principle, but in practice I prefer the ergonomics of typeclasses.
That said, while I spent nearly half of my career in either OCaml or Haskell, I did all of my OCaml programming and most of my Haskell programming before the recent surge of really good autocompletion systems and AI assistants, and I notice how much they help with Rust.
So the ergonomics of ML-style modules might be perfectly acceptable now that an eager assistant can fill in the busywork for the obvious cases. Not sure.