It stays around 26 GB at 512x512. I still haven't profiled the execution or looked much into the details of the architecture, but I would assume it trades memory for speed by creating caches for each inference step.
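A quick way to test the caching hypothesis would be to sample the process's resident memory around each inference step. A minimal sketch of that kind of check, assuming a PyTorch workload and using `psutil` (the `nn.Sequential` model here is just a stand-in for the actual pipeline):

```python
import psutil
import torch
import torch.nn as nn

def rss_gb() -> float:
    """Resident set size of the current process, in GB."""
    return psutil.Process().memory_info().rss / 1e9

# Stand-in model; swap in the actual pipeline under investigation.
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1),
).eval()

x = torch.randn(1, 3, 512, 512)

print(f"baseline: {rss_gb():.2f} GB")
with torch.inference_mode():
    for step in range(10):  # one memory reading per inference step
        x = model(x)
        print(f"step {step}: {rss_gb():.2f} GB")
```

If the resident size climbs with each step, something is being retained per step; if it plateaus right after the model loads, the 26 GB is allocated up front rather than accumulated.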

IDK. Seems odd. It’s an 11 GB model; I don’t know what it could be caching in RAM.