Some are bit-by-bit (e.g. the PPM family of compressors[1]), but the normal input granularity for most compressors is a byte. (There are even specialized ones that work on e.g. 32 bits at a time.)
[1] Many of the context models in a typical PPM compressor will be byte-by-byte, so even that isn't fully clear-cut.
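To illustrate what "bit-by-bit" means here: a bit-oriented model updates its prediction once per bit (a 2-symbol alphabet) instead of once per byte (a 256-symbol alphabet). A toy sketch of an adaptive bit predictor, purely illustrative (real bit-oriented coders such as LZMA's range coder use fixed-point counters, not floats, and the names below are made up):

```python
class BitModel:
    """Toy adaptive predictor for a single binary context."""
    def __init__(self):
        self.p1 = 0.5  # estimated probability that the next bit is 1

    def update(self, bit, rate=0.05):
        # Move the estimate a small step toward the observed bit.
        self.p1 += rate * (bit - self.p1)

m = BitModel()
for bit in [1, 1, 1, 0, 1, 1]:
    m.update(bit)

# After seeing mostly 1-bits, the estimate drifts above 0.5.
assert m.p1 > 0.5
```

A byte-oriented model would instead keep statistics over 256 possible next symbols per context, which is the granularity most general-purpose compressors work at.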
They output a bitstream, yeah, but I don't know of anything general-purpose that effectively consumes anything smaller than bytes (unless you count various specialized handlers in general-purpose compression algorithms, e.g. to deal with long lists of floats).
A Zstd maintainer clarified this: https://news.ycombinator.com/item?id=45251544
> Ultimately, Zstd is a byte-oriented compressor that doesn't understand the semantics of the data it compresses
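To make the byte-granularity point concrete, here's a minimal sketch using Python's stdlib zlib (Zstd's bindings expose the same byte-oriented API shape; the data is just an example):

```python
import zlib

# zlib, like Zstd, consumes and produces bytes: there is no bit-level input API.
data = b"hello hello hello hello"
compressed = zlib.compress(data)
assert zlib.decompress(compressed) == data

# Anything that isn't bytes-like is rejected outright:
try:
    zlib.compress("hello")  # str, not bytes
except TypeError:
    print("byte-oriented: input must be bytes-like")
```

The compressor sees an opaque byte stream; any finer-grained structure (bit fields, packed floats) has to be handled by a preprocessing step before the bytes reach it.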