You also see on-the-wire protocols making invalid states unrepresentable through clever tricks. Consider RFC 3550 p.19:

    0                   1                   2                   3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |      defined by profile       |           length              |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                        header extension                       |
   |                             ....                              |

   [...] The header extension contains a 16-bit length field that
   counts the number of 32-bit words in the extension, excluding the
   four-octet extension header (therefore zero is a valid length).  

So for RTP extension headers, actual_num_bytes = (`length` + 1) * 4. A naive `length` field would count the total number of bytes. But that would make several invalid states representable: values 0 through 3 (the extension can never be smaller than its own four-octet header) and any value that isn't a multiple of 4 (extensions must be whole 32-bit words). Instead, the field counts only what comes *after* the mandatory header, so zero is a meaningful value, and its unit is 32-bit words rather than bytes, so non-multiples of 4 can't be expressed at all.
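As a sketch in Python (function and variable names are mine, not the RFC's), decoding that length field might look like:

```python
import struct

def parse_rtp_extension(data: bytes) -> tuple[int, bytes]:
    """Parse an RTP header extension (RFC 3550 section 5.3.1).

    Returns (profile_defined, extension_payload).
    """
    if len(data) < 4:
        raise ValueError("extension header is at least 4 bytes")
    profile, length_words = struct.unpack("!HH", data[:4])
    # `length_words` counts 32-bit words *after* the 4-byte header,
    # so the total size is (length_words + 1) * 4 bytes and zero
    # is a valid (empty-payload) length.
    total_bytes = (length_words + 1) * 4
    if len(data) < total_bytes:
        raise ValueError("truncated extension")
    return profile, data[4:total_bytes]
```

Note that no byte pattern in those two fields is invalid: every value of `length_words` describes some well-formed extension.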

While it isn't strictly harmful, one drawback of this approach is that if you bit-pack every field so tightly that random noise can be interpreted as a well-formed packet, the protocol becomes difficult to identify heuristically.

Another corollary: if you're designing a data compression format, try to avoid sequences in your compressed data which are invalid, or which a compressor would never emit. Either one is probably a waste of bits that you could use to make your representation more concise.
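A classic instance of this is biasing a length field by its minimum: DEFLATE-style LZ matches are never shorter than 3 bytes, so storing the raw length would waste codes 0 through 2 on values a compressor never emits. A toy sketch (constants and names are illustrative, not from any particular spec):

```python
MIN_MATCH = 3  # matches shorter than this aren't worth emitting

def encode_match_length(n: int) -> int:
    # Storing the raw length would waste codes 0..MIN_MATCH-1 on
    # impossible values; biasing reclaims them, so stored 0 means
    # a match of length MIN_MATCH.
    if n < MIN_MATCH:
        raise ValueError("compressor never emits matches this short")
    return n - MIN_MATCH

def decode_match_length(code: int) -> int:
    return code + MIN_MATCH
```

Every code value now corresponds to a length the compressor could actually produce, just as every RTP `length` value corresponds to a well-formed extension.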

(Intentional redundancies like checksums are fine, of course.)

> difficult to heuristically identify the protocol

This can also be a desirable feature, depending on your design goals.