This doesn't have API access yet, but OpenAI seem to approve of the Codex API backdoor used by OpenClaw these days... https://twitter.com/steipete/status/2046775849769148838 and https://twitter.com/romainhuet/status/2038699202834841962

And that backdoor API has GPT-5.5.

So here's a pelican: https://simonwillison.net/2026/Apr/23/gpt-5-5/#and-some-peli...

I used this new plugin for LLM: https://github.com/simonw/llm-openai-via-codex

UPDATE: I got a much better pelican by setting the reasoning effort to xhigh: https://gist.github.com/simonw/a6168e4165a258e4d664aeae8e602...

OpenAI hired the guy behind OpenClaw, so it makes sense that they’re more lenient towards its usage.

They basically bought OpenClaw, right?

That pelican you posted yesterday from a local model looks nicer than this one.

Edit: this one has crossed legs lol

It really needs to pee.

Isn't it awful? After 5.5 versions it still can't draw a basic bike frame. How is the front wheel supposed to turn sideways?

I feel like if I attempted this, the bike frame would look fine and everything else would be completely unrecognizable. After all, a basic bike frame is just straight lines arranged in a fairly simple shape. It's really surprising that models find it so difficult, but they can make a pelican with panache.

> a fairly simple shape

Bike frames are very hard to draw unless you've already consciously internalized the basic shape, see https://www.booooooom.com/2016/05/09/bicycles-built-based-on...

Humans are also famously bad at drawing bicycles from memory https://www.gianlucagimini.it/portfolio-item/velocipedia/

[deleted]

Why do you find it surprising? These models have no actual understanding of anything, never mind the physical properties and capabilities of a bicycle.

Sad to see this downvoted. Do so many people think that LLMs have understanding?

My question is: as a human, how well would you or I do under the same conditions? Which is to say, I could do a much better job in Inkscape with Google Images to back me up, but if I were blindly shitting vectors into an XML file that I can't render to see the results of, I'm not even going to get the triangles for the frame to line up, so this pelican is very impressive!
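To make the "blindly writing vectors" point concrete: even the crude diamond bike frame is just a handful of coordinates emitted as raw markup, with no renderer to tell you whether they line up. A minimal sketch (all coordinates invented for illustration):

```python
# Hand-writing an SVG "blind": two wheels plus a diamond frame, built as
# raw markup. Without rendering it, the only feedback available is
# whether the XML parses -- not whether the geometry looks right.
import xml.etree.ElementTree as ET

# Invented coordinates: rear hub at (60, 140), front hub at (200, 140).
parts = [
    '<svg xmlns="http://www.w3.org/2000/svg" width="260" height="200">',
    '<circle cx="60" cy="140" r="40" fill="none" stroke="black"/>',   # rear wheel
    '<circle cx="200" cy="140" r="40" fill="none" stroke="black"/>',  # front wheel
    '<path d="M60,140 L120,70 L185,70 L200,140 L120,140 Z" '          # frame tubes
    'fill="none" stroke="black"/>',
    '</svg>',
]
svg = "".join(parts)

# The only "check" possible without a renderer: is the XML well-formed?
root = ET.fromstring(svg)
print(root.tag, len(list(root)))  # element count says nothing about visual quality
```

Whether those frame tubes actually meet the hubs is invisible at this level, which is roughly the position the model is in.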

Yeah, the bike frame is the thing I always look at first - it's still reasonably rare for a model to draw that correctly, although Qwen 3.6 and Gemini Pro 3.1 do that well now.

The distinction is that it's not drawing. It's generating an SVG document containing descriptors of the shapes.

Is OpenAI actually acting open for once here, allowing use of their model via a subscription, in contrast to Anthropic banning such use in OpenClaw?

The pelican doesn’t really matter anymore, since models are tuned for it; the labs know people will ask.

They suck at tuning for it.

So the pelican must have become a mandatory test case for all model providers to pass before launch.

I made pelicans at different thinking efforts:

https://hcker.news/pelican-low.svg

https://hcker.news/pelican-medium.svg

https://hcker.news/pelican-high.svg

https://hcker.news/pelican-xhigh.svg

Someone needs to make a pelican arena, I have no idea if these are considered good or not.

They are not good, and they seem to get worse as you increase effort. Weird.

Yeah. I've always loosely correlated pelican quality with big model smell but I'm not picking that up here. I thought this was supposed to be spud? Weird indeed.

No but I can sense the movement, I think it's already reached the level of intelligence that draws it towards futurism or cubism /s

Can someone explain how we arrived at the pelican test? Was there some actual theory behind why it's difficult to produce? Or did someone just think it up, discover it was consistently difficult, and now we just all know it's a good test?

I set it up as a joke, to make fun of all of the other benchmarks. To my surprise it ended up being a genuinely good measure of the quality of the model for other tasks (up to a certain point at least), though I've never seen a convincing argument as to why.

I gave a talk about it last year: https://simonwillison.net/2025/Jun/6/six-months-in-llms/

It should not be treated as a serious benchmark.

What it has going for it is human interpretability.

Anyone can look and decide if it’s a good picture or not. But the numeric benchmarks don’t tell you much if you aren’t already familiar with that benchmark and how it’s constructed.

It all began with a Microsoft researcher showing a unicorn drawn in TikZ using GPT-4. It was an example of something so outrageous that there was no way it existed in the training data. And that's back when models were not multimodal.

Nowadays I think it's pretty silly, because there's surely SVG drawing training data and some effort from the researchers put onto this task. It's not a showcase of emergent properties.

It's interesting to see some semblance of spatial reasoning emerge from systems based on textual tokens. Could be seen as a potential proxy for other desirable traits.

It's meta-interesting that few if any models actually seem to be training on it. Same with other stereotypical challenges like the car-wash question, which is still sometimes failed by high-end models.

If I ran an AI lab, I'd take it as a personal affront if my model emitted a malformed pelican or advised walking to a car wash. Heads would roll.

I tried getting it to generate OpenSCAD models, which seems much harder. I haven't had much joy with the results yet.

G-code and ASCII art are also text formats, but seem to be beyond most if not all models.

(There are some that generate 3d models specifically, more in the image generation family than chatbot family.)

None of them have the pelican's feet placed properly on the pedals -- or the pedals are misrepresented. Cool art style but not physically accurate.

I'm not sure a physically accurate pelican would reach two pedals on a common bicycle. Maybe a model can solve that problem one day.

It's... like no pelican I've ever seen before.

You've never seen pelicans riding bicycles either so maybe these are just representations of those specific subgroups of pelicans which are capable of riding them. Normal pelicans would not feel the need to ride bikes since they can fly, these special pelicans mostly seem to lack the equipment needed to do that which might be part of the reason they evolved to ride two-wheeled pedal-propelled vehicles.

Is this direct API usage allowed by their terms? I remember Anthropic really not liking such usage.

That's amazing that the default did that much in just 39 "reasoning tokens" (no idea what a reasoning token is but that's still shockingly few tokens)

If you don't know what a reasoning token is, then how can 39 be considered shockingly few?

It's less than 67, duh.

Not during peak hours.

[deleted]

Hmm. Any idea why it's so much worse than the other ones you have posted lately? Even the open weight local models were much better, like the Qwen one you posted yesterday.

The xhigh one was better, but clearly OpenAI have not been focusing their training efforts on SVG illustrations of animals riding modes of transport!

[deleted]

It beats Opus 4.7, but it looks like the open models actually have the lead here.

Thank you for doing all this. It's appreciated.

You do realise they are doing it for self-promotion, right?

I mean, yeah. "Person who spends time publishing content online is doing it for self promotion" doesn't seem particularly notable to me. 24 years of self promotion and counting!

I am always outraged when youtube creators ask me to like and subscribe. /s

Not the same at all. For that to happen you would have to explicitly visit their channel (forgive incorrect terminology, I don't use youtube). If someone kept posting on hackernews asking you to subscribe I hope you wouldn't appreciate it. swillison is spamming a communal public feed with self-promotional comments about vibe coding, quite obviously because they, like the rest of us, are panicking about not having a career in a few years.

The more time I spend actually working with these tools the less I fear for my future career.

Building software remains really hard. Most people are not going to be able to produce production quality software systems, no matter how good the AI tooling gets.

Dude it comes across, maybe only to me, as a bit shameless. Or maybe it's just that there are so many people lapping it up like you're doing a public service that I find tedious. I wish hackernews had a block feature but alas it doesn't. Maybe I'll vibecode a browser extension.

What is your setup for drawing the pelican? Do you ask the model to check the generated image, find issues, and iterate on it? That would demonstrate the model's real abilities.

It's generally one-shot-only - whatever comes out the first time is what I go with.

I've been contemplating a more fair version where each model gets 3-5 attempts and then can select which rendered image is "best".

Try llm-consortium with --judging-method rank

I think it would make the results way better and more representative of model abilities.

It would... but the test is inherently silly, so I'm still not sure if it's worth me investing that extra effort in it.

I for one delight in bicycles where neither wheel can turn!

It continues to amaze me that these models that definitely know what bicycle geometry actually looks like somewhere in their weights produce such implausibly bad geometry.

Also mildly interesting, and generally consistent with my experience with LLMs, that it produced the same obvious geometry issue both times.

> It continues to amaze me that these models that definitely know what bicycle geometry actually looks like somewhere in their weights produce such implausibly bad geometry.

I feel like the main problem for the models is that they can't actually look at the visual output produced by their SVG and iterate. I'm almost willing to bet that if they could, they'd absolutely nail it at this point.

Imagine designing an SVG yourself without being able to ever look outside the XML editor!
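A cheap middle ground short of full rendering would be to parse the generated SVG back and sanity-check the geometry programmatically. A hypothetical sketch of such a "look at your own output" check, with the rules and thresholds entirely invented for illustration:

```python
# Hypothetical post-hoc check on generated SVG: do the two wheel
# circles exist, have roughly equal radii, and avoid overlapping?
import xml.etree.ElementTree as ET

SVG_NS = "{http://www.w3.org/2000/svg}"

def wheels_plausible(svg_text: str) -> bool:
    """True if the SVG has exactly two similar-sized, non-overlapping circles."""
    root = ET.fromstring(svg_text)
    circles = root.findall(f".//{SVG_NS}circle")
    if len(circles) != 2:
        return False
    (x1, y1, r1), (x2, y2, r2) = [
        (float(c.get("cx")), float(c.get("cy")), float(c.get("r")))
        for c in circles
    ]
    same_size = abs(r1 - r2) <= 0.2 * max(r1, r2)               # wheels roughly equal
    apart = (x2 - x1) ** 2 + (y2 - y1) ** 2 >= (r1 + r2) ** 2   # hubs far enough apart
    return same_size and apart

good = ('<svg xmlns="http://www.w3.org/2000/svg">'
        '<circle cx="60" cy="140" r="40"/><circle cx="200" cy="140" r="40"/>'
        '</svg>')
bad = good.replace('cx="200"', 'cx="80"')  # front wheel jammed into the rear one
print(wheels_plausible(good), wheels_plausible(bad))  # True False
```

Looping the model until such checks pass would be a crude stand-in for the visual feedback it lacks, though it obviously can't catch every implausible frame.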

> Imagine designing an SVG yourself without being able to ever look outside the XML editor!

I honestly think I could do much better on the bicycle without looking at the output (with some assistance for SVG syntax which I definitely don't know), just as someone who rides them and generally knows what the parts are.

I'd do worse at the pelicans though.

Thank you for continuing to post these! Very interesting benchmark.

Wait, I thought we were onto racoons on e-scooters to avoid (some of) the issues with Goodhart's Law coming into play.

I fall back to possums on e-scooters if the pelican looks too good to be true. These aren't good enough for me to suspect any fowl play.

Exciting. Another Pelican post.

See if you can spot what's interesting and unique about this one. I've been trying to put more than just a pelican in there, partly as a nod to people who are getting bored of them.

It's silly and a joke and a surprisingly good benchmark and don't take it seriously but don't take not taking it seriously seriously and if it's too good we use another prompt and there's obvious ways to better it and it's not worth doing because it's not serious and if you say anything at all about the thread it's off-topic so you're doing exactly what you're complaining about and it's a personal attack from the fun police.

Only coherent move at this point: hit the minus button immediately. There's never anything about the model in the thread other than simon's post.

You know they are 1000% training these models to draw pelicans; this hasn't been a valid benchmark for 6+ months.

OpenAI must be very bad at training models to draw pelicans (and bicycles) then.

Skepticism is out of control these days; any time an LLM does something cool, it must have been cheating.

At some point, OpenAI is going to cheat and hardcode a pelican on a bicycle into the model. 3D modelling has Suzanne and the teapot; LLMs will have the pelican.