Curious to learn what a “product benchmark” looks like. Is it evals you use to test prompts/models? A third party tool?
Examples from the wild are a great learning tool, anything you’re able to share is appreciated.
It's an internal benchmark I use to test prompts, models, and prompt-tunes: nothing but a dashboard calling our internal endpoints and showing the data, basically going through the prod flow.
For my product, I run a video through a multimodal LLM in multiple steps, combine the data, and spit out the outputs + a score for the video.
I have a dataset of videos that I manually marked for my use case, so when a new model drops, I run it, plus the last few best-benchmarked models, through the process and check multiple things:
- Diff between the output score and the manual one
- Processing time for each step
- Input/output tokens
- Request time for each step
- Price of the request
And the classic stats of average score delta, average time, p50, p90, etc. One fun thing is finding the edge cases: even if the average score delta is low (meaning the model is spot-on on average), there are usually some videos where the absolute delta is much higher, and those usually indicate niche edge cases the model struggles with.
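To make it concrete, the stats side is nothing more than something like this (a rough TypeScript sketch; the field names and the 3x-average-delta cutoff for flagging edge cases are illustrative, not my actual schema):

```ts
// benchmark-stats.ts — toy summary of a per-video eval run.
// All types and field names here are hypothetical placeholders.

interface RunResult {
  videoId: string;
  manualScore: number;   // hand-labeled ground truth
  modelScore: number;    // score produced by the model pipeline
  totalMs: number;       // end-to-end processing time
  inputTokens: number;
  outputTokens: number;
  costUsd: number;
}

function percentile(sorted: number[], p: number): number {
  const idx = Math.min(sorted.length - 1, Math.floor((p / 100) * sorted.length));
  return sorted[idx];
}

function summarize(results: RunResult[]) {
  const avg = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;
  const deltas = results.map(r => Math.abs(r.modelScore - r.manualScore));
  const times = [...results.map(r => r.totalMs)].sort((a, b) => a - b);
  const avgDelta = avg(deltas);

  return {
    avgScoreDelta: avgDelta,
    avgTimeMs: avg(times),
    p50Ms: percentile(times, 50),
    p90Ms: percentile(times, 90),
    avgOutputTokens: avg(results.map(r => r.outputTokens)),
    avgCostUsd: avg(results.map(r => r.costUsd)),
    // "edge cases": videos whose |delta| is well above the average delta
    edgeCases: results.filter(r => Math.abs(r.modelScore - r.manualScore) > 3 * avgDelta),
  };
}
```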
Gemini 3 Flash nails it, sometimes even better than the Pro version, with nearly the same times as 2.5 Pro on that use case. Actually, I pushed it to prod yesterday, and looking at the data, it's about 5 seconds faster than Pro on average, with my cost per user going down from 20 cents to 12 cents.
IMO it's pretty rudimentary, so let me know if there's anything else I can explain.
Everyone should have their own "pelican riding a bicycle" benchmark they test new models on.
And it shouldn't be shared publicly so that the models won't learn about it accidentally :)
I am asking the models to generate an image where fictional characters play chess or Texas Hold'em. None of them can make a realistic chess position or poker game. Something is always off, like too many pawns, too many cards, or some cards being face-up when they shouldn't be.
Any suggestions for a simple tool to set up your own local evals?
Just ask an LLM to write one on top of OpenRouter, the AI SDK, and Bun, to take your .md input file and save the outputs as .md files (or whatever you need). Take https://github.com/T3-Content/auto-draftify as an example.
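The whole thing can be about this much code (a rough sketch assuming the Vercel AI SDK plus the @openrouter/ai-sdk-provider package; the model slug and file names are placeholders):

```ts
// eval.ts — run a prompt file against a model via OpenRouter, save the output.
// Run with: bun run eval.ts
import { generateText } from "ai";
import { createOpenRouter } from "@openrouter/ai-sdk-provider";

const openrouter = createOpenRouter({ apiKey: process.env.OPENROUTER_API_KEY! });

const prompt = await Bun.file("input.md").text();  // your .md eval prompt
const model = "google/gemini-2.5-flash";           // placeholder model slug

const { text, usage } = await generateText({
  model: openrouter(model),
  prompt,
});

await Bun.write(`outputs/${model.replaceAll("/", "_")}.md`, text);
console.log(`Saved output (${usage.totalTokens} tokens)`);
```

Loop that over a directory of .md prompts and a list of model slugs and you have a poor man's eval harness.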
My "tool" is just prompts saved in a text file that I feed to new models by hand. I haven't built a bespoke framework on top of it.
...yet. Crap, do I need to now? =)
Yeah, I've wondered the same myself… My evals are also a pile of text snippets, as are some of my workflows. I thought I'd have a look at what's out there and found Promptfoo and Inspect AI. Haven't tried either, but I will for my next round of evals.
Well, you need to stop them from getting incorporated into the training data
_Brain backlog project #77 created_