I think they mean prompt injection rather than some malformed image to trigger a security bug in the processing library

The LLM is the image processing library in this case so you are both right :)