I spent some time in the actual compiler source. There's real work here, genuinely good ideas.

The best thing Skir does is strict generated constructors. You add a field, every construction site lights up. Protobuf's "silently default everything" model has caused mass production incidents at real companies. This is a legitimately better default.
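To make the difference concrete, here's a rough Python analogue of the two constructor models (the `Point` type and its fields are hypothetical; Skir generates code in its target languages, this is just the shape of the idea). A strict constructor refuses to build a value with a missing field; the defaults-everything model silently fills it in.

```python
from dataclasses import dataclass

# Strict, Skir-style: every field is required at construction time.
# Adding a field makes every call site that omits it fail immediately.
@dataclass
class StrictPoint:
    x: int
    y: int
    label: str  # newly added field: old call sites now error

# Protobuf-style: everything silently defaults.
@dataclass
class DefaultedPoint:
    x: int = 0
    y: int = 0
    label: str = ""  # newly added field: old call sites keep building

try:
    StrictPoint(x=3, y=4)  # missing `label` -> TypeError
except TypeError:
    print("strict constructor caught the missing field")

p = DefaultedPoint(x=3, y=4)  # silently builds with label == ""
```

The strict version turns "I forgot to set the new field" into an immediate, loud failure instead of a default value flowing silently through production.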

Dense JSON is interesting, but the docs gloss over the tradeoff: your serialized data is [3, 4, "P"]. If you ever lose your schema, or a human needs to read a payload in a log, you're staring at unlabeled arrays. Protobuf binary has the same problem, but nobody markets binary as "easy to inspect with standard tools."

The "serialize now, deserialize in 100 years" claim has a real asterisk, too. Compatibility checking requires you to opt into stable record IDs and maintain snapshots. If you skip that (and the docs' own examples often do), the CLI literally warns you: "breaking changes cannot be detected." So it's less "built-in safety" and more "safety available if you follow the discipline." Which is... also what Protobuf offers.
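A quick sketch of the readability gap, assuming a hypothetical record with `width`, `height`, and `color` fields (the names are made up for illustration; dense JSON stores values ordered by field number, keyed JSON stores names):

```python
import json

# Hypothetical record: width=3, height=4, color="P".
dense = json.dumps([3, 4, "P"])
readable = json.dumps({"width": 3, "height": 4, "color": "P"})

print(dense)     # meaningless in a log without the schema
print(readable)  # self-describing, survives schema loss

# Recovering meaning from the dense form requires the schema's field order:
schema_fields = ["width", "height", "color"]
decoded = dict(zip(schema_fields, json.loads(dense)))
```

Without `schema_fields`, the dense payload is just positional data; that's the whole asterisk.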

The Rust-style enum unification is genuinely cleaner than Protobuf's enum/oneof split. No notes there, that's just better language design.

Minor thing that bothered me disproportionately: the constant syntax in the docs (x = 600) doesn't match what the parser actually accepts (x: 600).

The thing that really bugged the heck out of me, though, was the tagline, "like protos but better." That's doing the project no favors.

I think this would land better if it were positioned as "Protobuf, but fresh" rather than "Protobuf, but better." The interesting conversation is which opinions are right, not whether one tool is universally superior.

Quite frankly, I don't use protobuf because it seems like an unapproachable monolith, and I'm not at FAANG anymore, just a solo dev. No one's gonna complain if I don't. But I do love the idea of something simpler that's easy to wrap my mind around.

That's why "but fresh" lands for me, and I have a feeling it might be more appealing than you'd think. For example: it's hard to believe a two-month-old project is strictly better than whatever mess and history Protobuf has gone through, with tons of engineers paid to use and work on it. It is easy to believe it covers 99% of what Protobuf does already, and that any crazy edge cases that pop up (they always do, eventually :) will be easy to understand and fix.

Thank you so much for taking the time to dig into the compiler source code, and for the thorough comment you left.

For dense JSON: the idea is that it is often a good "default" choice because it offers a good tradeoff across three properties: efficiency (where it sits between binary and readable JSON), persistability (it's safe to evolve the schema without losing backward compatibility), and readability (low, for the reasons you mentioned, but not as bad as a binary string). I tried to explain this tradeoff in this table: https://skir.build/docs/serialization#serialization-formats

I hear your point about the tagline "like protos but better," which I hesitated to use because it sounds presumptuous. But I'm not quite sure what idea you mean to convey by "fresh."

Not the parent but I infer “fresh” as meaning a new approach to an old problem (with the benefits of experience baked in). A synonym of “modern” without the baggage.

Fair. I changed the tagline on the website to "A modern alternative to Protocol Buffer". Thanks for the feedback.

100%, thanks!

Cheers :) (The other replier read "fresh" exactly as I meant it, though the literal word "fresh" definitely wasn't right for a tagline.)

Also, thank you for flagging the constant syntax problem (x = 600) on the website. Fixed.

> Minor thing that bothered me disproportionately: the constant syntax in the docs (x = 600) doesn't match what the parser actually accepts (x: 600).

You’re a better man than me. If the docs can’t even get the syntax right, that’s a hard no from me.

Also, fwiw, you’ve got a few points wrong about protos. Inspecting the binary data is hard, but the tag numbers are present. You need the schema, but at least you can identify each element.
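To make that concrete, here's a minimal sketch of reading a Protobuf wire-format field with no schema at all, using the canonical example bytes from the encoding docs (`08 96 01`, i.e. field 1 set to 150). The tag byte gives you the field number and wire type even when you can't name the field:

```python
def read_varint(data, i=0):
    """Decode a base-128 varint starting at index i; return (value, next_index)."""
    value, shift = 0, 0
    while True:
        byte = data[i]
        value |= (byte & 0x7F) << shift
        i += 1
        if not (byte & 0x80):
            return value, i
        shift += 7

# Canonical example from the Protobuf encoding docs: field 1 (varint) = 150.
payload = bytes([0x08, 0x96, 0x01])

tag, i = read_varint(payload)
field_number = tag >> 3   # 1: identifiable without the schema
wire_type = tag & 0x07    # 0: varint
value, _ = read_varint(payload, i)  # 150: but what it *means* needs the schema
```

So a schemaless reader can walk the message and say "field 1 is the varint 150" — which is more than an unlabeled dense-JSON array gives you, even if the field's meaning is still opaque.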

Also, I disagree on the constructor front. Proto forces you to grapple with the reality that a field may be missing. In a production system, when adding a new field, there will be a point where that field isn’t present on only one side of the network call. The compiler isn’t saving you.

Fresh is more honest than better, and personally, I wouldn’t change it.

> Also, I disagree on the constructor front. Proto forces you to grapple with the reality that a field may be missing. In a production system, when adding a new field, there will be a point where that field isn’t present on only one side of the network call. The compiler isn’t saving you.

I agree it's important for users to understand that newer fields won't be set when they deserialize old data -- whether that's with Protobuf or Skir. I disagree with the idea that not forcing you to update all constructor call sites when you add a field will help (significantly) with that. Are you saying that because Protobuf forces you to manually search for all call sites when you add a field, it forces you to think about what happens if the field is not set at deserialization, hence, it's a good thing? I'm not sure that outweighs the cost of bugs introduced by cases where you forget to update a constructor call site when you add a field to your schema.

Respectfully, I’ve never forgotten a call site, but also yes. In a hypothetical HelloWorld service, the HelloRequest and HelloResponse generally aren’t used anywhere except an RPC caller and an RPC handler, so it’s not hard to “remember” and find the usage.

Some callers may not need to update right away, or may not need the new feature at all, and breaking existing callers’ compilation is bad. If your caller is a different team, for example, and their CI/CD breaks because you added a field, that’s bad. At each place a field is used, you should think about how it’ll be handled, BUT ALSO, your system explicitly should gracefully handle the case where it’s not uniformly present. It’s an explicit goal of protos to support the use case where heterogeneous schema versions are used over the wire.

If a bug is introduced because the caller and handler use different versions, the compiler wasn’t going to save you anyway. That bug would have shown up when you deploy or update the client and server, unless you atomically update both at once. You generally cannot guarantee that a client won’t use an outdated version of the schema, and if things break because of that, you didn’t guard it correctly. That’s a business-logic failure, not a compilation failure.
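The "guard it in business logic" point can be sketched like this (hypothetical `greeting_style` field added to a request in a newer schema version; the handler must work whether or not the caller's version sends it):

```python
def handle_hello(request: dict) -> str:
    # `greeting_style` was added in a newer schema version. Old callers
    # won't send it, so the handler picks a sensible fallback instead of
    # assuming presence -- no compiler check covers this gap.
    style = request.get("greeting_style", "plain")
    name = request.get("name", "world")
    if style == "shouty":
        return f"HELLO, {name.upper()}!"
    return f"Hello, {name}."

# New caller and old caller hit the same handler:
print(handle_hello({"name": "Ada", "greeting_style": "shouty"}))
print(handle_hello({"name": "Ada"}))  # old schema: field absent, still works
```

The fallback is the guard; whether constructors are strict or defaulting, this decision has to be made explicitly on the receiving side.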