So let's put the things we're actually interested in into the benchmarks.

I'm not against pelicans!

I think the reason the pelican example is great is that it's bizarre enough that it's unlikely to appear in the training data as one unified picture.

If we picked something more common, say, a hot dog with toppings, then training contamination would be much harder to control.

I think it's now part of their training though, thanks to Simon constantly testing every new model against it, and sharing his results publicly.

There's a specific term for this in education and applied linguistics: the washback effect.

It's the most common SVG test; it's the equivalent of Will Smith eating spaghetti, so obviously they benchmax toward it.