Out of sheer curiosity, I put three screenshots of the noise into Claude Opus 4.1, Gemini 2.5 Pro, and GPT 5, all with thinking enabled, and the prompt “what does the screen say?”.
Opus 4.1 flagged the message due to prompt injection risk, Gemini made a bad guess, and GPT 5 got it by using the code interpreter.
I thought it was amusing. Claude’s (non-)response got me thinking: first, it was very on brand; second, the content filter was right: pasting images of seemingly random noise into a sensitive environment is a terrible idea.
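(For the curious, here is a guess at the kind of code-interpreter pass that could pull a message out of noise. The file names, and the assumption that the message is a static pattern buried under random noise, are mine, not something GPT 5 showed me:)

```python
# Hypothetical sketch: average several noisy screenshots so random noise
# cancels while any static signal remains, then threshold the result so
# faint text becomes readable enough to inspect or OCR.
import numpy as np
from PIL import Image

# Assumed file names for the three screenshots.
frames = [np.asarray(Image.open(f).convert("L"), dtype=np.float64)
          for f in ("shot1.png", "shot2.png", "shot3.png")]

avg = sum(frames) / len(frames)                    # noise averages out
norm = (avg - avg.min()) / (avg.max() - avg.min() + 1e-9)
binary = (norm > 0.5).astype(np.uint8) * 255       # crude threshold

Image.fromarray(binary).save("denoised.png")       # look at this one
```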
> pasting images of seemingly random noise into a sensitive environment is a terrible idea
BLIT protection. https://www.infinityplus.co.uk/stories/blit.htm
> pasting images of seemingly random noise into a sensitive environment is a terrible idea.
Only if your rendering libraries are crap.
I think they mean prompt injection rather than a malformed image triggering a security bug in the processing library.
The LLM is the image processing library in this case, so you are both right :)