I am basing it on benchmark numbers. Its compute is just too weak to be useful for LLMs or image generation.

For example: for LLMs, it's easy to do the math and see how long you would be waiting for a 50k-token prompt to finish processing.
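A rough back-of-the-envelope sketch of that math, assuming a prompt-processing (prefill) rate of 200 tokens/s — an illustrative figure, not a measured benchmark for any particular device:

```python
# Estimate time-to-first-token for a long prompt:
#   wait = input_tokens / prefill_speed
# The 200 tok/s prefill rate below is an assumed example value;
# real throughput depends on the model, quantization, and hardware.
input_tokens = 50_000
prefill_tok_per_s = 200  # assumed prompt-processing speed

wait_seconds = input_tokens / prefill_tok_per_s
print(f"~{wait_seconds:.0f} s (~{wait_seconds / 60:.1f} min) before the first output token")
```

At that assumed rate you would sit through roughly four minutes of prefill before generation even starts, and halving the compute roughly doubles that wait.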