It shouldn't matter.
Decompress's Reader shouldn't depend on the buffer size of the writer passed into its "stream" implementation.
So that's a bug in the Decompress Reader implementation.
The article confuses a bug in a specific Reader implementation with a problem with the Writer interface generally.
(If a reader really wants to impose some chunking limitation for some reason, then it should return an error in the invalid case, not go into an infinite loop.)
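To make that concrete, here's a minimal sketch of the error-instead-of-loop behavior. The types and names are toy stand-ins invented for illustration (`FixedWriter`, `streamStep`, `error.OutputBufferTooSmall`), not the real std.Io API:

```zig
const std = @import("std");

// Toy stand-in for a writer with a fixed-size buffer; just enough
// structure to show the point, not the real std.Io.Writer.
const FixedWriter = struct {
    buf: []u8,
    end: usize = 0,
};

/// A decode step that needs `needed` contiguous bytes of output space.
/// If the writer's buffer can never hold that much, no amount of
/// draining will help, so fail loudly instead of retrying forever.
fn streamStep(w: *FixedWriter, needed: usize) error{OutputBufferTooSmall}!usize {
    if (w.buf.len < needed) return error.OutputBufferTooSmall;
    // ... otherwise: drain until `w.buf.len - w.end >= needed`, then decode ...
    return needed;
}

test "too-small writer buffer is an error, not a hang" {
    var storage: [4]u8 = undefined;
    var w = FixedWriter{ .buf = &storage };
    try std.testing.expectError(error.OutputBufferTooSmall, streamStep(&w, 8));
}
```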
So how would the Decompress Reader be implemented correctly? Should it use its own buffer that is guaranteed to be large enough? If so, how would it allocate that buffer?
At the very least, the API of the Readers and Writers lends itself to implementations that have this kind of bug, where they depend on the buffer being a certain size.
> So how would the Decompress Reader be implemented correctly? Should it use its own buffer that is guaranteed to be large enough?
yes
> If so, how would it allocate that buffer?
as it sees fit. or, it can offer mechanisms for the caller to provide pre-allocated buffer(s). in any case the point is that this detail can't be the producer's responsibility to satisfy, unless that requirement is somehow specified explicitly (roughly sketched below)
in general the new zig io interfaces conflate behavior (read/write) with implementation (buffer size(s))
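In code, the separation being suggested might look something like this. This is a hedged sketch with a hypothetical `Decompress` type invented for illustration, not the actual std.compress.zstd API:

```zig
const std = @import("std");

// Hypothetical shape: the decompressor owns the window buffer it needs,
// or takes one from the caller, so its correctness never depends on how
// big the destination writer's buffer happens to be.
const Decompress = struct {
    window: []u8,
    owned: bool,

    /// Caller provides a pre-allocated buffer; the size requirement is
    /// explicit at the call site, not an implicit demand on the producer.
    fn initBuffer(window: []u8) Decompress {
        return .{ .window = window, .owned = false };
    }

    /// Or allocate as it sees fit, e.g. the frame's declared window size.
    fn initAlloc(gpa: std.mem.Allocator, window_size: usize) !Decompress {
        return .{ .window = try gpa.alloc(u8, window_size), .owned = true };
    }

    fn deinit(d: *Decompress, gpa: std.mem.Allocator) void {
        if (d.owned) gpa.free(d.window);
    }
};

test "both ownership modes" {
    var fixed: [1024]u8 = undefined;
    const from_caller = Decompress.initBuffer(&fixed);
    try std.testing.expect(!from_caller.owned);

    var self_owned = try Decompress.initAlloc(std.testing.allocator, 1024);
    defer self_owned.deinit(std.testing.allocator);
    try std.testing.expect(self_owned.owned);
}
```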
I ... don't disagree with you. Thanks. It helps my understanding.
I know this is moving the goalposts, but it's still a shame that it [obviously] has to be a runtime error. Practically speaking, I still think it leaves a lot of friction and edge cases. But what you say makes sense: it doesn't have to be unsafe.
Makes me curious why they asserted instead of erroring in the first place (and I don't think that's exclusive to the zstd implementation right now).
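For reference, the distinction in miniature (illustrative functions, not the actual zstd code): an assert makes the too-small buffer checked illegal behavior, i.e. a panic in safe builds and undefined behavior in ReleaseFast, while returning an error leaves the decision to the caller:

```zig
const std = @import("std");

// Assert-style: a violated precondition crashes (or is UB) inside the library.
fn decodeAsserting(dest: []u8, needed: usize) void {
    std.debug.assert(dest.len >= needed);
    // ... decode into dest ...
}

// Error-style: the same condition becomes a recoverable, reportable error.
fn decodeErroring(dest: []u8, needed: usize) error{OutputBufferTooSmall}!void {
    if (dest.len < needed) return error.OutputBufferTooSmall;
    // ... decode into dest ...
}

test "the error variant is recoverable by the caller" {
    var small: [2]u8 = undefined;
    decodeAsserting(&small, 2); // fine: precondition holds
    try std.testing.expectError(error.OutputBufferTooSmall, decodeErroring(&small, 4));
}
```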