The key point of the article is "your data is trapped inside your program", i.e. data models can't generally be shared between programs. One thing that has improved my life has been using Apache Arrow as a way to decrease the friction of sharing data between different executables. With Arrow (and its file-based, compressed cousin Parquet), the idea is that once data is produced it never needs to be deserialized again, the way it would with JSON or Avro.
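To make that concrete, here is a minimal sketch in Python with pyarrow. The table contents and file names are made up for illustration; the point is that the producer writes once, and a consumer in a different process can read the Arrow IPC file via a memory map without a row-by-row parse step:

```python
import pyarrow as pa
import pyarrow.feather as feather  # Arrow IPC file format
import pyarrow.parquet as pq

# Producer process: build a table and write it out once.
table = pa.table({
    "id": [1, 2, 3],
    "name": ["alpha", "beta", "gamma"],
})
feather.write_feather(table, "shared.arrow")   # Arrow IPC, for fast interchange
pq.write_table(table, "shared.parquet")        # Parquet, compressed, for storage

# Consumer process (could be a different executable, even a different
# language): the Arrow file is memory-mapped, not parsed like JSON.
with pa.memory_map("shared.arrow") as source:
    shared = pa.ipc.open_file(source).read_all()
print(shared.schema)  # the schema travels with the data
```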
Data and data models are not the same.
Sharing data is simply undefined for the overwhelming majority of the data in the world, because there is no standard for the format that data should be in.
Data models are even harder: data is produced by the world, and data formats are intentionally designed to be somewhat general, but data models are usually produced in the context of one specific piece of software.
How are you handling data updates? Last I checked, Arrow and similar systems had extremely poor performance if you needed to mutate data at even modest rates.
You create an output Arrow table and populate it with rows (see the sketch below). But with respect to the original idea, Arrow data always comes with a schema and is efficient and compact, so it makes it easier to share data between different programs.
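A rough pyarrow sketch of the rebuild-instead-of-mutate pattern (the orders table and its columns are hypothetical). Arrow tables are immutable, so an "update" is expressed by computing a new column and constructing a new output table rather than changing rows in place:

```python
import pyarrow as pa
import pyarrow.compute as pc

# Original, immutable table.
orders = pa.table({
    "order_id": [1, 2, 3],
    "status": ["open", "open", "open"],
})

# "Update": compute a replacement status column for order 2 ...
new_status = pc.if_else(
    pc.equal(orders["order_id"], 2),
    pa.scalar("closed"),
    orders["status"],
)

# ... and swap it into a fresh table; `orders` itself is untouched.
updated = orders.set_column(
    orders.schema.get_field_index("status"), "status", new_status,
)
print(updated.to_pylist())
```

This is why per-row mutation at high rates is a poor fit: each logical update produces new columnar buffers, so the usual approach is to batch changes and emit a new table (or new record batches) periodically.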