JSON serialization is already quite fast IMO, so this is a nice win. Last time I compared JSON serialization to Protocol Buffers, JSON was only a little slower for typical scenarios, not materially so. Optimizations like these can shift the performance balance.
JSON is a great minimalist format that is both human- and machine-readable. I never quite understood the popularity of ProtoBuf; the binary format is a major sacrifice of readability. I get that some people appreciate the type validation, but it adds a lot of complexity and friction at the transport layer.
For me the appeal of protobuf is the wire-format forward-backward compatibility.
It's hard enough to not break logical compatibility, so I appreciate not having to think too hard about wire compat. You can of course solve the same thing with JSON, but, well, YOU have to solve it.
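That compatibility falls out of protobuf's wire format: fields are identified by number, not name, and a decoder simply skips numbers it doesn't recognize. A minimal sketch in Python of that skipping behavior (hand-rolled varint/tag parsing for illustration, not the real protobuf library; the field names are made up):

```python
def encode_varint(n):
    """Encode a non-negative int as a protobuf base-128 varint."""
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        if n:
            out.append(b | 0x80)
        else:
            out.append(b)
            return bytes(out)

def decode_varint(buf, i):
    """Decode a varint starting at buf[i]; return (value, next_index)."""
    shift = result = 0
    while True:
        b = buf[i]; i += 1
        result |= (b & 0x7F) << shift
        if not (b & 0x80):
            return result, i
        shift += 7

def parse(buf, known_fields):
    """Decode known fields; silently skip unknown ones (forward compat)."""
    i, fields = 0, {}
    while i < len(buf):
        key, i = decode_varint(buf, i)
        field_no, wire_type = key >> 3, key & 7
        if wire_type == 0:            # varint
            val, i = decode_varint(buf, i)
        elif wire_type == 2:          # length-delimited (bytes/string/message)
            length, i = decode_varint(buf, i)
            val, i = bytes(buf[i:i + length]), i + length
        else:
            raise ValueError("unsupported wire type in this sketch")
        if field_no in known_fields:  # unknown field numbers are just dropped
            fields[known_fields[field_no]] = val
    return fields

# A "v2" writer emits field 1 (id) and field 2 (name, added later).
msg = (encode_varint((1 << 3) | 0) + encode_varint(42)
       + encode_varint((2 << 3) | 2) + encode_varint(5) + b"alice")

# A "v1" reader that only knows field 1 still decodes the message fine.
print(parse(msg, {1: "id"}))                # old schema
print(parse(msg, {1: "id", 2: "name"}))     # new schema
```

The old reader never breaks on the new writer's output; it just ignores field 2. That is the property you'd otherwise have to hand-build on top of JSON.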
(Also worth noting, there are a lot of things I don't like about the grpc ecosystem so I don't actually use it that much. But this is one of the pieces I really like a lot).
Arguably JSON doesn't have this problem at all, since it encodes the field names too. The only thing it doesn't handle is field renames, but, I mean, come on, you know you can't rename a field in a public API anyway :)
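For what it's worth, that tolerance is trivial to get with JSON too: an old reader just picks out the keys it knows and ignores the rest. A sketch (the record shape and `read_v1` helper are hypothetical):

```python
import json

# A "v2" writer adds a field; the key names travel with the data.
wire = json.dumps({"id": 42, "name": "alice", "nickname": "al"})

def read_v1(raw):
    """A "v1" reader: take the keys it knows, ignore extras,
    default anything missing."""
    doc = json.loads(raw)
    return {"id": doc.get("id"), "name": doc.get("name", "")}

print(read_v1(wire))  # the unknown "nickname" key is simply dropped
```

The catch, as noted above, is that nothing enforces this discipline; every reader has to opt into it.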
I appreciate this comment.
It does seem like some technologies get credit for solving problems that they created.
> machine readable
It is readable, but it's not a good/fast format. IEEE 754 <-> string conversion is just expensive, even with all the shortcuts and improvements, and byte[]s have no good way to be represented either.
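A quick way to see both costs (a sketch; the timings will vary by machine): round-tripping a double through its text form versus 8 fixed bytes, plus the base64 inflation that byte[] payloads pay inside JSON:

```python
import base64, json, struct, timeit

value = 0.1234567890123456  # a typical IEEE 754 double

# Text round-trip: format to shortest decimal repr, then parse it back.
text_s = timeit.timeit(lambda: float(repr(value)), number=100_000)
# Binary round-trip: 8 fixed bytes, no digit conversion at all.
bin_s = timeit.timeit(
    lambda: struct.unpack("<d", struct.pack("<d", value))[0],
    number=100_000)
print(f"text round-trip: {text_s:.3f}s   binary round-trip: {bin_s:.3f}s")

# byte[] payloads must be base64-encoded to survive JSON: ~33% larger,
# plus quoting overhead, plus the encode/decode CPU on both ends.
blob = bytes(range(256)) * 4                      # 1024 raw bytes
encoded = json.dumps({"blob": base64.b64encode(blob).decode("ascii")})
print(len(blob), "raw bytes ->", len(encoded), "bytes of JSON")
```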
I imagine compressing JSON also adds significant overhead compared to ProtoBuf, on top of the extra memory usage.
I don't disagree that people go for ProtoBuf a bit too eagerly though.
A format cannot be both human and machine readable. JSON is human readable; that's the point of it. Human readability is great for debugging, but it carries overhead because it's not machine friendly. Protobuf messages are both smaller and quicker to decode. If you're in an environment handling millions of messages per second, binary formats pay dividends. The number of messages ever viewed by a human is minuscule, so there's no real gain in optimizing for that slow path. Just write a message dump tool.
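A dump tool for a fixed binary layout really can be a few lines. A sketch, assuming a made-up layout (little-endian u32 id, f64 price, u16 name length, then UTF-8 name bytes):

```python
import struct

def dump(buf):
    """Render one binary message as human-readable text for debugging."""
    # Hypothetical layout: u32 id, f64 price, u16 name_len, then name bytes.
    msg_id, price, name_len = struct.unpack_from("<IdH", buf, 0)
    name = buf[14:14 + name_len].decode("utf-8")
    return f"id={msg_id} price={price} name={name!r}"

# Build one record and pretty-print it.
record = struct.pack("<IdH", 7, 19.99, 5) + b"gizmo"
print(dump(record))
```

The hot path stays binary; the human-readable view only exists on demand.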
Ah well with LLMs, the definition of 'machine-readable' has changed quite a bit.