I’m a systems person too, and I don’t see mediocrity as inevitable.

The slop problem isn’t just model quality. It’s incentives and decision-making at inference time. That’s why I’m working on an open-source tool for governance and validation during inference, rather than trying to solve everything in pre-training.

Better systems can produce better outcomes, even with the same models.

I agree completely, and I’m doing the same thing. Good tooling will have a multiplicative impact as models improve.

What are you building?

The open-source system I’m working on lets multiple agents propose, critique, and stake on decisions before a single action is taken.

It runs at inference time rather than training time and is model-agnostic. The goal is to make disagreement explicit and costly instead of implicit and ignored, especially in high-stakes or autonomous workflows.
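To make that concrete, here is a minimal sketch of one way propose/critique/stake could resolve to a single action. Everything here is illustrative, not the project's actual API: `Proposal`, `Critique`, `resolve`, and the `quorum` threshold are names I'm inventing, and the stake-weighted rule (net support must clear a quorum or no action is taken) is just one plausible mechanism for making disagreement costly.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    agent: str
    action: str
    stake: float  # how much the agent risks backing this action

@dataclass
class Critique:
    agent: str
    target_action: str
    stake: float  # how much the agent risks objecting to it

def resolve(proposals, critiques, quorum=2.0):
    """Return the action whose net stake (support minus objections)
    is highest and clears the quorum; otherwise return None (no action).
    Critiques are costly, so dissent directly reduces net support
    instead of being silently dropped."""
    support = {}
    for p in proposals:
        support[p.action] = support.get(p.action, 0.0) + p.stake
    for c in critiques:
        if c.target_action in support:
            support[c.target_action] -= c.stake
    if not support:
        return None
    best = max(support, key=support.get)
    return best if support[best] >= quorum else None
```

With this rule, two agents staking 2.0 and 1.5 on "deploy" against a 1.0-stake critique leaves net support of 2.5, which clears a quorum of 2.0, so "deploy" is taken; raise the critique's stake to 2.0 and the net drops to 1.5, so no action is taken at all. How stakes are funded and settled is the interesting part and is deliberately left out of this sketch.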