It's so ridiculous that Google made a custom SoC for their phones, touting its AI performance, even calling it Tensor, and Apple is still faster at running Google's own model.

Google really ought to shut down their phone chip team. Literally every chip from them has been a disappointment. As much as I hate to say it, sticking with Qualcomm would have been the right choice.

It runs very fast on my Oppo Find N6 with its Qualcomm Elite Gen 5 SoC.

How many tokens per second? Also, does it get warm/hot?

If this Gemma tokenizer I found online is accurate, then my Pixel 10 Pro XL is getting ~22 tok/s on Gemma 4 E2B using the NPU, vs. the ~40 tok/s people are reporting for the MLX version on iPhone.
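For what it's worth, here's roughly how I'm computing that number: time a generation, then count the output tokens with the tokenizer. A minimal sketch (the `tokenize` and `generate` callables are placeholders for whatever tokenizer and on-device runtime you're using, not a real API):

```python
import time

def tokens_per_second(tokenize, generate, prompt):
    """Estimate decode throughput: output tokens / wall-clock time.

    Caveat: the result is only as accurate as the tokenizer. If the
    tokenizer's vocabulary doesn't match the model's actual vocabulary,
    the token count (and therefore the tok/s figure) will be skewed.
    """
    start = time.perf_counter()
    output = generate(prompt)          # run the model end to end
    elapsed = time.perf_counter() - start
    n_tokens = len(tokenize(output))   # count tokens in the output text
    return n_tokens / elapsed
```

Note this lumps prompt processing (prefill) in with decoding, so it will slightly understate pure decode speed on long prompts.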

Actually, I found official performance numbers from Google: iPhone gets 56 tok/s and Qualcomm gets 52. They don't even bother listing Tensor in their table, maybe because it would be too embarrassing. Ouch! https://ai.google.dev/edge/litert-lm/overview