True, but usually you only need that if your data is so large it can't fit in memory, and in that case you shouldn't be using JSON anyway. (I was in this situation once: our JSON files grew to gigabytes, and switching to SQLite worked extremely well.)
Actually, you'll hit the limits of DOM-style JSON parsers as soon as your data is larger than about half the available memory: you'd most likely want to build your own model objects from the JSON, so at some point both the DOM and your model objects must be present in memory (unless you can incrementally destroy the parts of the DOM you're done with).
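To make the "both present at once" point concrete, here's a sketch using `serde_json::Value` as the DOM and a hypothetical `Point` model; the DOM can't be dropped until the model has been built from it, so peak memory holds both representations:

```rust
use serde_json::Value;

#[derive(Debug)]
struct Point {
    x: f64,
    y: f64,
}

fn main() -> serde_json::Result<()> {
    let text = r#"[{"x":1.0,"y":2.0},{"x":3.0,"y":4.0}]"#;

    // Step 1: the entire DOM tree is materialized in memory.
    let dom: Value = serde_json::from_str(text)?;

    // Step 2: model objects are built while the DOM is still alive,
    // so peak usage is roughly size(dom) + size(model).
    let points: Vec<Point> = dom
        .as_array()
        .unwrap()
        .iter()
        .map(|v| Point {
            x: v["x"].as_f64().unwrap(),
            y: v["y"].as_f64().unwrap(),
        })
        .collect();

    drop(dom); // only now is the DOM's memory released
    println!("{points:?}");
    Ok(())
}
```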
Anyhow, IMO a proper JSON library should offer both, in a layered approach: a lower-level SAX-style parser, on top of which a DOM-style API is provided as a convenience.
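A minimal sketch of that layering, assuming a made-up `Event`/`Sink` API rather than any real library: the lower layer emits events, and the DOM builder is just one possible `Sink`; a streaming client implements `Sink` itself and never materializes a tree.

```rust
// Hypothetical sketch of the layered design; not any real library's API.

/// Lower layer: the parser pushes SAX-style events into a sink.
enum Event {
    StartObject,
    EndObject,
    Key(String),
    Str(String),
    Num(f64),
}

trait Sink {
    fn event(&mut self, e: Event);
}

/// Upper layer: a DOM node, built by one particular Sink implementation.
#[derive(Debug)]
enum Node {
    Object(Vec<(String, Node)>),
    Str(String),
    Num(f64),
}

struct DomBuilder {
    // One frame per open object: its fields so far, plus the pending key.
    stack: Vec<(Vec<(String, Node)>, Option<String>)>,
    root: Option<Node>,
}

impl DomBuilder {
    fn attach(&mut self, node: Node) {
        match self.stack.last_mut() {
            Some((fields, key)) => fields.push((key.take().unwrap(), node)),
            None => self.root = Some(node),
        }
    }
}

impl Sink for DomBuilder {
    fn event(&mut self, e: Event) {
        match e {
            Event::StartObject => self.stack.push((Vec::new(), None)),
            Event::Key(k) => self.stack.last_mut().unwrap().1 = Some(k),
            Event::Str(s) => self.attach(Node::Str(s)),
            Event::Num(n) => self.attach(Node::Num(n)),
            Event::EndObject => {
                let (fields, _) = self.stack.pop().unwrap();
                self.attach(Node::Object(fields));
            }
        }
    }
}

fn main() {
    // In a real library these events would come from the tokenizer;
    // here they're fed by hand for illustration.
    let mut dom = DomBuilder { stack: Vec::new(), root: None };
    for e in [
        Event::StartObject,
        Event::Key("name".into()),
        Event::Str("Ada".into()),
        Event::Num(36.0),
        Event::EndObject,
    ] {
        dom.event(e);
    }
    println!("{:?}", dom.root);
}
```

The point of the layering is that the DOM costs nothing for clients who don't use it: anyone who can process events incrementally stays at the lower level.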
> since you'd most likely want to build your own model objects from the JSON, so at some point both of them must be present in memory
Not really, because the JSON library itself can stream the input. For example, if you use `serde_json::from_reader()`, it won't load the whole file into memory before parsing it into your objects:
https://docs.rs/serde_json/latest/serde_json/fn.from_reader....
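A minimal sketch of that, assuming a hypothetical `Record` struct and file name; note the `BufReader`, since `from_reader` performs many small read calls and is slow on an unbuffered `File`:

```rust
use std::fs::File;
use std::io::BufReader;

use serde::Deserialize;

#[derive(Deserialize, Debug)]
struct Record {
    id: u64,
    name: String,
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let file = File::open("records.json")?;
    let reader = BufReader::new(file);

    // Parses incrementally from the reader: the raw JSON text is never
    // fully buffered; only the resulting Record lives in memory.
    let record: Record = serde_json::from_reader(reader)?;
    println!("{record:?}");
    Ok(())
}
```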
But that's kind of academic; half of all memory and all memory are in the same league.
That's only true if your model objects are Serde structs, which is undesirable for a variety of reasons, most importantly because you don't want to tie your models to a particular on-disk format.
In the vast majority of cases you can and should deserialize directly into Serde structs and use those as your model. That's kind of the point.
In a minority of cases you might not want to do that (e.g. because you need to support multiple versions of a format), but that is rare and can also be handled in various ways directly in Serde.
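For example, one common way to handle it (sketched here with hypothetical types) is to deserialize into a wire-format struct and convert into the domain model; Serde can also perform this conversion automatically via its `#[serde(from = "...")]` container attribute.

```rust
use serde::Deserialize;

// Wire-format struct: mirrors the on-disk JSON exactly.
#[derive(Deserialize)]
struct UserV1 {
    full_name: String,
    age: u32,
}

// Domain model: independent of the serialized layout.
#[derive(Debug)]
struct User {
    name: String,
    age: u32,
}

impl From<UserV1> for User {
    fn from(w: UserV1) -> Self {
        User { name: w.full_name, age: w.age }
    }
}

fn main() -> serde_json::Result<()> {
    let json = r#"{ "full_name": "Ada Lovelace", "age": 36 }"#;

    // Deserialize into the wire struct, then convert; the model
    // never sees the on-disk field names.
    let user: User = serde_json::from_str::<UserV1>(json)?.into();
    println!("{user:?}");
    Ok(())
}
```

Supporting a new on-disk version then means adding a `UserV2` and another `From` impl, while the rest of the code keeps using `User` unchanged.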