Model routing is deceptively hard, though. It has halting-problem characteristics: often only the smartest model is smart enough to accurately determine a task's difficulty. And if you need the smartest model to reliably classify the prompt, it's cheaper to just let it handle the prompt directly.
This is why model pickers persist despite no one liking them.
Yes, but prompt evaluation (prefill) is far faster than token-by-token generation, since it can be done (mostly) in parallel, so I don't think that's true.
The problem is that input token cost dominates output token cost for the majority of tasks.
Once you've given the model your prompt and are reading the first output token for classification, you've already paid most of the cost of just prompting it directly.
That said, there could definitely be exceptions for short prompts where output costs dominate input costs. But these aren't usually the interesting use cases.
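To put rough numbers on it, here's a back-of-envelope sketch in Python. Every price and token count here is a made-up assumption for illustration, not any provider's real rates:

    BIG_IN, BIG_OUT = 3.00, 15.00      # $/M tokens, strong model (assumed)
    SMALL_IN, SMALL_OUT = 0.25, 1.25   # $/M tokens, weak model (assumed)

    def cost(n_in, n_out, p_in, p_out):
        return (n_in * p_in + n_out * p_out) / 1_000_000

    n_in, n_out = 20_000, 500  # an input-heavy task (assumed)

    direct = cost(n_in, n_out, BIG_IN, BIG_OUT)
    # Routing: the strong model reads the full prompt just to emit a
    # roughly one-token verdict, then the weak model reads it all again.
    routed = cost(n_in, 1, BIG_IN, BIG_OUT) + cost(n_in, n_out, SMALL_IN, SMALL_OUT)
    print(f"direct ${direct:.4f} vs routed ${routed:.4f}")  # ~$0.0675 vs ~$0.0656

The classification pass alone costs almost as much as answering directly, so routing saves basically nothing on input-heavy prompts.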
No, you're talking about costs to the user, which are an oversimplification of the costs that providers bear. One output token with a million input tokens is incredibly cheap for providers.
> One output token with a million input tokens is incredibly cheap for providers
Source? AFAIK this is incorrect.
Check out any LLM API provider's pricing. Output tokens are always significantly more expensive than input tokens (which can also be cached).
Input tokens usually outnumber output tokens by a lot more than 2x, though. It's often 10x or more input, and it can easily be 100x or more in realistic workflows, so total input cost still dominates the bill.
Caching does help the situation, but you always at least pay the initial cache write. And prompts need to be structured carefully to be cacheable. It’s not a free lunch.
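Rough cache math, with assumed prices (writes at ~1.25x the base input rate, reads at ~0.1x, which is the general shape of current cache pricing) and a simplified model that treats the prompt as one fixed reusable prefix:

    IN, OUT = 3.00, 15.00                  # $/M tokens, base rates (assumed)
    CACHE_WRITE, CACHE_READ = 3.75, 0.30   # $/M tokens (assumed)

    prefix, answer, turns = 50_000, 1_000, 10  # long-context session (assumed)

    uncached = turns * (prefix * IN + answer * OUT) / 1e6
    cached = (prefix * CACHE_WRITE              # pay the initial write once
              + (turns - 1) * prefix * CACHE_READ
              + turns * answer * OUT) / 1e6
    print(f"uncached ${uncached:.2f} vs cached ${cached:.2f}")  # ~$1.65 vs ~$0.47

Even in the cached case, input is still most of the bill, and this assumes the prefix never changes mid-conversation.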
That's usually not the case for thinking models, where the reasoning tokens are billed as output. And hard problems usually have a very short prompt.
For me personally (using mostly for coding and project planning) it's nearly always the case, including with thinking models. I'm usually pasting in a bunch of files, screenshots, etc., and having long conversations. Input nearly always heavily dominates output.
I don't disagree that there are hard problems which use short prompts, like math homework problems etc., but they mostly aren't what I would categorize as "real work". But of course I can only speak to my own experience /shrug.
Yeah, coding is definitely a situation where the context is usually very, very large. But at the same time, in those situations something like Sonnet is fine.
But if the weaker model has a low false-positive rate, you can just route requests through the models in order of strength.
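Something like this cascade, as a sketch. The `client.complete` interface, the model names, and the confidence score are all hypothetical, not a real API:

    MODELS = ["weak", "medium", "strong"]  # cheapest first (hypothetical names)
    THRESHOLD = 0.9                        # escalate below this confidence (assumed)

    def cascade(client, prompt):
        answer = None
        for model in MODELS:
            # Hypothetical client: returns an answer plus some confidence signal.
            answer, confidence = client.complete(model, prompt)
            if confidence >= THRESHOLD:
                return answer  # the cheap model was confident enough
        return answer  # nothing was confident; keep the strongest model's answer

The catch is that each escalation re-pays the full input cost, and the whole thing hinges on that confidence signal actually being calibrated.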
That's a very big "if".