The fact that they're raising the price for the mini models by 166% is pretty notable.
gpt-4o-mini for comparison:
- Input: $0.15
- Cached Input: $0.075
- Output: $0.60
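The 166% figure checks out if GPT-4.1 mini's input price is $0.40 per million tokens (an assumption here, since that price isn't quoted in the thread):

```python
# Sanity check of the ~166% price-increase claim.
# Assumption: gpt-4.1-mini input is $0.40/M tokens (not stated above).
old_input = 0.15  # gpt-4o-mini input, $/M tokens
new_input = 0.40  # assumed gpt-4.1-mini input, $/M tokens

increase_pct = (new_input - old_input) / old_input * 100
print(f"{increase_pct:.0f}% increase")  # ~167%, matching the cited ~166%
```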
That's what I was thinking. I hoped to see a price drop, but this does not change anything for my use cases.
I was using gpt-4o-mini with the batch API, which I recently replaced with mistral-small-latest, which costs $0.10/$0.30 (or $0.05/$0.15 when using the batch API). I may switch to 4.1-nano, but I'd have to be overwhelmed by its performance in comparison to mistral.
I don't think they ever committed themselves to uniform pricing for mini models. Of course cheaper is better, but I understand pricing to be contingent on factors specific to each new model rather than following from a blanket policy.
Seems like 4.1 nano ($0.10) is the closer replacement, and 4.1 mini sits at a new in-between price point.