So during my Nano Banana Pro experiments I wrote a very fun prompt that tests the ability of these image-generation models to follow explicit rules, while still requiring domain knowledge and/or use of the search tool:
Create an 8x8 contiguous grid of the Pokémon whose National Pokédex numbers correspond to the first 64 prime numbers. Include a black border between the subimages.
You MUST obey ALL the FOLLOWING rules for these subimages:
- Add a label anchored to the top left corner of the subimage with the Pokémon's National Pokédex number.
- NEVER include a `#` in the label
- This text is left-justified, white color, and Menlo font typeface
- The label fill color is black
- If the Pokémon's National Pokédex number is 1 digit, display the Pokémon in a 8-bit style
- If the Pokémon's National Pokédex number is 2 digits, display the Pokémon in a charcoal drawing style
- If the Pokémon's National Pokédex number is 3 digits, display the Pokémon in a Ukiyo-e style
The NBP result is here; it got the numbers, the corresponding Pokémon, and the styles correct, with the main point of contention being that the style application is lazy and that the images may be plagiarized: https://cdn.bsky.app/img/feed_fullsize/plain/did:plc:oxaerni...

Running that same prompt through gpt-2-image high gave an...interesting contrast: https://cdn.bsky.app/img/feed_fullsize/plain/did:plc:oxaerni...
It did more inventive styles for the images that appear to be original, but:
- The style logic is applied per row rather than per Pokédex number, and is therefore wrong
- Several of the Pokemon are flat-out wrong
- Number font is wrong
- Bottom isn't square for some reason
Odd results.
Prompts like this feel like it's using the wrong abstraction. The "obvious" thing to do with something like this would be to generate some code that generates the image and then run that code.
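A minimal sketch of that "generate code instead" approach, with all names here being illustrative: compute the first 64 primes and the style rule the prompt assigns to each digit count, and the rest of the job reduces to deterministic per-tile rendering tasks:

```python
def first_n_primes(n):
    # simple trial division against the primes found so far
    primes = []
    candidate = 2
    while len(primes) < n:
        if all(candidate % p for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

def style_for(dex_number):
    # digit count of the Pokédex number picks the art style, per the prompt
    return {1: "8-bit", 2: "charcoal", 3: "Ukiyo-e"}[len(str(dex_number))]

# one (number, style) spec per subimage; the 64th prime is 311 (Plusle)
spec = [(p, style_for(p)) for p in first_n_primes(64)]
```

A spec like this also doubles as a checker: you can grade a generated grid against it tile by tile instead of eyeballing 64 subimages.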
Inspired by this, I tried something much simpler. I asked it to draw 12 concentric circles. With three tries it always drew 10 instead. https://chatgpt.com/share/69e87d08-5a14-83eb-9a3b-3a8eb14692...
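For what it's worth, the concentric-circles case really is trivial to get right with generated code; a sketch that emits the figure as an SVG string (the function name and sizing parameters are made up):

```python
def concentric_circles_svg(n, spacing=10, stroke=2):
    # square viewBox sized to fit the outermost ring plus its stroke
    size = 2 * (n * spacing + stroke)
    c = size // 2
    rings = "".join(
        f'<circle cx="{c}" cy="{c}" r="{(i + 1) * spacing}" '
        f'fill="none" stroke="black" stroke-width="{stroke}"/>'
        for i in range(n)
    )
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{size}" height="{size}">{rings}</svg>')

svg = concentric_circles_svg(12)  # exactly 12 rings, by construction
```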
I think prompts like this are where agentic workflows come into play. If you asked it to generate the first 64 prime numbers, AI tools could do that. If you asked it to draw a charcoal image of Pokémon 13, it could do that. If you asked it to add a white Menlo "13" on a black background to the top-left corner of that image, it could do that. If you asked it to do that 63 more times, it could do those things, and if you asked it to assemble those into a grid, it could.
It can't get all of that in one shot. Perhaps, though, it could figure out when it needs to break a problem into individual tasks to delegate to itself, then assemble the results at the end.
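The assembly step, at least, is pure bookkeeping. A sketch of the grid geometry via a hypothetical `tile_geometry` helper, assuming 128px tiles and a 4px black border between subimages:

```python
def tile_geometry(index, tile=128, border=4):
    # map a subtask index (0..63) to its grid cell and the pixel offset
    # where that tile gets pasted into the 8x8 composite
    row, col = divmod(index, 8)
    x = border + col * (tile + border)
    y = border + row * (tile + border)
    return row, col, x, y
```

Each of the 64 independently generated tiles would be pasted at its `(x, y)` offset, which is exactly the kind of step an agent could delegate to image-compositing code rather than to the image model.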
That's what makes it a fair evaluation of its limits
I mean, asking these transformers to do maths has always been the wrong task. It's like we're now complaining that they don't have X tools built with traditional code built in.
Though I suppose we're testing their model + agent harness here as well. It really _should_ have all of those tools/reasoning available to accomplish a task like the above without issue.
How is it that a model can produce what must be near 1:1 images ripped straight out of Pokémon FireRed (the first ones), for profit, and not be infringing copyright?
I know that's the game, but it seems CRAZY to me that they can do this.
Training a model on a corpus which includes copyrighted images but which is not focussed primarily or exclusively on applications which violate copyright might be fair use in the US (so far, it seems that way.)
But that doesn't mean that producing outputs using the model so trained which are based on copyright-protected ones in ways which would violate copyright if produced by any other means doesn't still violate copyright. DMCA safe harbor might apply to the system owner (IIRC, the exact boundaries are fuzzy with UGC generated on the site by the provider’s systems rather than generated elsewhere and posted), so OpenAI may not be liable for the infringement, but it's still an infringement.
The funny thing is, the main complaint I’ve heard so far is that it repeatedly refused to operate on original content… because it might violate copyright.
Yeah, the CSAM generated by grok proves the guardrails are only really good for stymieing benign uses.
It can’t. It violates copyright. The big players are the only ones with the money to pursue these things, but they’re interested in replacing artists with AI trained on those artists’ work, so they settle and set up some sort of agreement. The little guys have no precedential case law to help them along, and nowhere close to the resources to push it that far, so they get steamrolled. I know artists famous enough that people (even commercial entities) regularly and blatantly rip them off by name with “in the style of” prompts, but there’s no realistic path to pursue it. Fame doesn’t pay legal bills.
Gemini uses google search to find references when making images, so it probably found the pokemon images online to do this.
> I know that's the game, but it seems CRAZY to me that they can do this.
It's not crazy that a search can find existing Pokémon images. Maybe Google should show which images it used as references, to be more transparent here.
This is an amazing test and it's kinda' funny how terrible gpt-2-image is. I'd take "plagiarized" images (e.g. Google search & copy-paste) any day over how awful the OpenAI result is. Doesn't even seem like they have a sanity checker/post-processing "did I follow the instructions correctly?" step, because the digit-style constraint violation should be easily caught. It's also expensive as shit to just get an image that's essentially unusable.
This is from Gemini - https://lens.usercontent.google.com/banana?agsi=CmdnbG9iYWw6...
Did it correctly follow the instructions? Don't know my pokemon well enough.
Plusle and Minun sit next to each other in the Pokédex, at 311 and 312. There are two 307s.
Essentially yes (bottom got distorted), but Gemini uses Nano Banana Pro or Nano Banana 2 so it's not a surprising result. The image I linked uses the raw API.
Note that the styles are different; there are two-digit Pokémon rendered in color.
Color charcoal drawings do exist, but it’s not what’s usually meant by “charcoal drawing”.
That is interesting, because I feel gpt-image-1 did have that feature.
(source: https://chatgpt.com/share/69e83569-b334-8320-9fbf-01404d18df...)
You are comparing ChatGPT to a raw image model. These are two completely different things. ChatGPT takes your input, modifies the prompt, passes it to the image model, and then may read the image back and provide output. The raw image model, as used through the API, just takes the prompt verbatim and generates an image.
Nano Banana Pro and ChatGPT Images 2.0 also tweak the prompt because they can think.
Yes exactly, "ChatGPT Images 2.0" is in ChatGPT. That is not a model.
I wouldn’t say it’s terrible, but I also wouldn’t say it’s a huge step forward in terms of quality compared to what I’ve seen before from AI.
For what it's worth, NBP made some mistakes too.
Artistic oddities aside (why are the 8-bit sprites 16-bit? why do the charcoal drawings have colour? why does the art of specifically the Gen 1 Pokémon look so off?), 271 is Lombre, not Lotad.
Why would you consider this a good prompt?
Because both Nano Banana Pro and ChatGPT Images 2.0 have touted strong reasoning capabilities, and this particular prompt has more objective, easy-to-validate criteria as opposed to the subjective nature of images.
I have more subjective prompts to test reasoning but they're your-mileage-may-vary (however, gpt-2-image has surprisingly been doing much better on more objective criteria in my test cases)
[flagged]
"Quirky and obscure" has the functional benefit of ensuring the source question is not in the training data/outside the median user prompt, and therefore making the model less likely to cheat.
We have enough people complaining about Simon Willison's pelican test.
When you program, do you consider using your prior knowledge of programming cheating?
What would make the prompt a better actual evaluation in your judgement?
Not focusing on Pokémon, for a start. Maybe use something more people can recognize and evaluate. I have zero knowledge of Pokémon; I see it as a niche thing for ultra-nerdy people, not something everyone is familiar with. Nothing about this test can be evaluated by anyone but a Pokémon expert. Sorry, but Pokémon isn't as mainstream as some people might think it is.
I think you underestimate how popular Pokemon is.
By most objective measures it's the largest entertainment franchise in all of history.
Would you also object to any other pop-culture reference for the same reason?
still #opentowork huh
Where does one even use that hashtag?
It's a LinkedIn joke.
Ah yes, also known as C++ enjoyers.
Nano Banana Pro gets the logic and punts on the art; gpt-2-image gets the art and punts on the logic. Feels like instruction-following and creativity sit on opposite ends of the same slider.
This feels incredibly AI generated
The random accusations of AI generated comments are the most annoying part of the unfolding AI dystopia.
I do not think this is a good prompt or useful benchmark, but nonetheless, it seems to work better for me: https://chatgpt.com/share/69e88a94-ded8-8395-b5dc-abceb2f44d...
Huh, that is indeed better. If ChatGPT Images 2.0/gpt-2-image is more nondeterministic than usual, then that is in itself a useful data point.
Did you enable thinking for your experiment? Are you sure you were on the 2.0 rather than 1.5 version?
Just try a 23-sided convex polygon.
Neither of them drew them in an 8-bit style either. It's way too many colors.
They made the same mistake a lot of people do: reading "8-bit" as generic retro style. But these sprites are from the 16-bit(?) GBA games.
Maybe they're so advanced they learned to write to the palette registers mid-scanline.
Even a few months ago, ChatGPT/Sora's image generation performed better than Gemini/Nano Banana for certain weird prompts:
Try things like: "A white capybara with black spots, on a tricycle, with 7 tentacles instead of legs, each tentacle is a different color of the rainbow" (paraphrased, not the literal exact prompt I used)
Gemini just globbed a whole mass of tentacles without any regards to the count
[dead]
Probably a very unscientific way to test an image model. This is likely because they have the reasoning turned down and let its instant output take over.
There's no good scientific way to test a closed-source model with both nondeterministic and subjective output.
This example image was generated using the API on high, not the low reasoning version. (it is slow and takes 2 minutes lol)
If the results are quantifiable/objective and repeatable it's scientific, how is it not scientific?
The reasoning amount is part of the evaluation isn't it?
This is the best kind of science there is: direct, empirical test.