Pelican generated via OpenRouter: https://gist.github.com/simonw/cc4ca7815ae82562e89a9fdd99f07...
Solid bird, not a great bicycle frame.
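For anyone who wants to try it themselves, the mechanics are simple: send the prompt to a model and save whatever comes back as an .svg file. A rough sketch in Python, assuming OpenRouter's OpenAI-compatible endpoint (the model slug is a placeholder, and this isn't Simon's actual harness):

    # Sketch: ask a model for the pelican SVG through OpenRouter's
    # OpenAI-compatible API. The model slug below is a placeholder.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",  # OpenRouter endpoint
        api_key="sk-or-...",                      # your OpenRouter key
    )

    resp = client.chat.completions.create(
        model="openai/gpt-5",  # swap in whichever model you want to test
        messages=[{"role": "user",
                   "content": "Generate an SVG of a pelican riding a bicycle"}],
    )

    # The reply is plain text; write it out and open it in a browser to judge the bird.
    with open("pelican.svg", "w") as f:
        f.write(resp.choices[0].message.content)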
Thank you for continuing to maintain the only benchmarking system that matters!
Context for the unaware: https://simonwillison.net/tags/pelican-riding-a-bicycle/
They will start to max out this benchmark as well at some point.
It's not a benchmark though, right? Because there's no control group or reference.
It's just an experiment in how different models interpret a vague prompt. "Generate an SVG of a pelican riding a bicycle" is loaded with ambiguity. It's practically designed to generate 'interesting' results because the prompt is not specific.
It also happens to be an example of the least practical way to engage with an LLM. It's no more capable of reading your mind than anyone or anything else.
I'd argue that, in the service of AI, we're granting ourselves a lot of flexibility around the scientific method.
For 2026 SOTA models I think that is fair.
For the last generation of models, and for today's flash/mini models, I think there is still a not-unreasonable binary question ("is this a pelican on a bicycle?") that you can answer by just looking at the result: https://simonwillison.net/2024/Oct/25/pelicans-on-a-bicycle/
So if it could generate exactly what you had in mind, presumably picking up on the subtlest of cues, like your personal quirks, from just a few sentences, that would be _terrifying_, right?
It's interesting how some features, such as green grass, a blue sky, clouds, and the sun, are ubiquitous among all of these models' responses.
It is odd, yeah.
I'm guessing both humans and LLMs would tend to get the "vibe" from the pelican task, that they're essentially being asked to create something like a child's crayon drawing. And that "vibe" then brings with it associations with all the types of things children might normally include in a drawing.
If you were a pelican, wouldn't you want to go cycling on a sunny day?
Do electric pelicans dream of touching electric grass?
Now this is the test that matters, cheers Simon.
This Pelican benchmark has become irrelevant. SVG is already ubiquitous.
We need a new, authentic scenario.
Like identifying names of skateboard tricks from the description? https://skatebench.t3.gg/
I don’t care how practical it may or may not be, this is my new favorite LLM benchmark
I couldn't find an about page or similar?
Here's the public sample https://github.com/T3-Content/skatebench/blob/main/bench/tes...
I don't think there's a good description anywhere. https://youtube.com/@t3dotgg talks about it from time to time.
o3-pro is better than 5.2 pro! And GPT 5 high is best. Really quite interesting.
You can generate a few such sentences for more samples.
Alternatively, take the top ten F500 stock performers. Any easy signal works as long as it provides enough randomness, is easy to agree upon, and doesn't leave enough time to game.
It's also something teams can pre-generate candidate problems for, to try to improve across the board, but they won't have the exact questions on test day.
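Concretely, something like this could work (a hypothetical sketch; the ticker list and candidate pool are made up for illustration):

    # Teams can publish and train against a large pool of candidate problems,
    # but the subset actually used on test day is selected by a seed derived
    # from a fresh public signal (e.g. that day's top stock performers), so
    # the exact questions aren't knowable in advance.
    import hashlib
    import random

    def pick_test_set(candidate_pool, fresh_signal, n=10):
        # Everyone can reproduce the same seed by hashing the same public data.
        seed = int(hashlib.sha256(",".join(fresh_signal).encode()).hexdigest(), 16)
        rng = random.Random(seed)
        return rng.sample(candidate_pool, k=min(n, len(candidate_pool)))

    pool = [f"Describe skateboard trick #{i}" for i in range(1000)]  # placeholder prompts
    tickers = ["NVDA", "LLY", "AVGO", "META", "TSLA"]                # placeholder top performers
    print(pick_test_set(pool, tickers, n=3))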
How many pelican riding bicycle SVGs were there before this test existed? What if the training data is being polluted with all these wonky results...
I'd argue that a model's ability to ignore/manage/sift through the noise added to the training set from other LLMs increases in importance and value as time goes on.
You're correct. It's not as useful as it (ever?) was as a measure of performance...but it's fun and brings me joy.
The bird not having wings, but all of us calling it a 'solid bird', is one of the most telling examples of the AI expectations gap yet. We even see its own reasoning say it needs 'webbed feet', which are nowhere to be found in the image.
This pattern of considering 90% accuracy (like the level we've seemingly stalled out on for MMLU and AIME) to be 'solved' is really concerning to me.
AGI has to be 100% right 100% of the time to be AGI and we aren't being tough enough on these systems in our evaluations. We're moving on to new and impressive tasks toward some imagined AGI goal without even trying to find out if we can make true Artificial Niche Intelligence.
It has a wing. Look at the code comments in the SVG!
MMLU performance caps out around 90% because there are tons of errors in the actual test set. There's a pretty solid post on it here: https://www.reddit.com/r/LocalLLaMA/comments/163x2wc/philip_...
As far as I can tell for AIME, pretty much every frontier model gets 100% https://llm-stats.com/benchmarks/aime-2025