@seanw265 Yes, that's a problem. For open-source models it can be solved by running them myself, but even then the TPS will depend on the hardware used.
All models are tested through OpenRouter. The providers on OpenRouter vary drastically in quality, to the point where some simply serve broken models.
That being said, I usually test models a few hours after release, at which point the only provider is the "official" one (e.g. DeepSeek for their models, Alibaba for theirs, etc.).
I don't really have any good solution for testing provider reliability for closed-source models, BUT the outcome still holds: a model/provider that is more reliable is statistically more likely to also give better results at any given time.
A solution would be to regularly test models (e.g. every week), but I don't have the budget for that, as this is a hobby project for now.
If you don't have the budget to test regularly, then including this kind of metric is questionable. You've essentially sampled the infrastructure's reliability at only a few points in time, which doesn't provide a very meaningful signal. It could mislead future readers about the performance of the overall system, in either direction.
I'd personally just try to test the model on its own merits, not the infrastructure's. The infrastructure is a constantly changing variable, and many infrastructure failures can be worked around by simply re-submitting the failed request automatically.