I've seen some systems with a lot of moving pieces where teams tried to avoid repetition by using a single source of schema truth to generate various other parts automatically, and it was generally more brittle and harder to maintain because different parts of the pipeline were owned by different teams and operated on different schedules. It also became hard to onboard into these environments and to figure out how to make changes and deploy them safely. Sometimes the repetition really is the lesser evil.

> I've seen some systems with a lot of moving pieces where teams tried to avoid repetition by using a single source of schema truth to generate various other parts automatically, and it was generally more brittle and harder to maintain because different parts of the pipeline were owned by different teams and operated on different schedules.

I'm not sure what would lead to this setup. For years there have been frameworks that can generate their own OpenAPI spec, and even API gateways that not only take that OpenAPI spec as input for their routing configuration but can also export their own.
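
For instance (a minimal sketch, assuming FastAPI purely as an illustration; the route and model names are made up), the spec is generated straight from the code that serves the API, so there is nothing extra to keep in sync:

```python
# A minimal sketch, assuming FastAPI (illustrative only; route/model names are made up).
# The OpenAPI spec is derived from the route and model definitions, so it cannot
# drift from the running code.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="orders-api")

class Order(BaseModel):
    id: int
    total_cents: int

@app.get("/orders/{order_id}", response_model=Order)
def get_order(order_id: int) -> Order:
    # Dummy handler; a real one would load the order from storage.
    return Order(id=order_id, total_cents=0)

if __name__ == "__main__":
    # app.openapi() returns the generated spec as a dict; FastAPI also serves it
    # at /openapi.json, which a gateway could import as routing configuration.
    import json
    print(json.dumps(app.openapi(), indent=2))
```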

I see; it's also reminiscent of the saying that "microservices" are an organisational solution. It's just that I also see a lot of churn and friction right now from inconsistent versions and specs not being managed in sync (though some solutions exist or are on the way).

> it was generally more brittle and harder to maintain

It depends on the system in question; sometimes it's really worth it. Such setups are brittle by design: without that rigidity you get teams that ship fast but produce bugs that surface randomly at runtime.

Absolutely, it can work well when there is a team devoted to the schema registry and to helping with adoption. But the benefit needs to be large enough to amortize that investment, so it's probably best suited to bigger organizations.