If this Gemma tokenizer I found online is accurate, then my Pixel 10 Pro XL is getting ~22 tok/s on Gemma 4 E2B using the NPU, vs. the ~40 tok/s people are reporting for the MLX version on iPhone.
Actually I found official performance numbers from Google saying iPhone gets 56 tok/s and Qualcomm gets 52. They don't even bother listing Tensor in their table. Maybe because it would be too embarrassing. Ouch! https://ai.google.dev/edge/litert-lm/overview
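For context, a tok/s number like the ~22 above is typically derived by counting the tokens in the generated text and dividing by wall-clock time. A minimal sketch of that arithmetic (the token count and timing here are illustrative placeholders, not measurements from the post; a real run would count tokens with the actual Gemma tokenizer):

```python
import time

def tokens_per_second(token_count: int, elapsed_s: float) -> float:
    """Throughput in tokens per second for a generation run."""
    if elapsed_s <= 0:
        raise ValueError("elapsed time must be positive")
    return token_count / elapsed_s

# Hypothetical example: 660 generated tokens over a 30 s run -> 22.0 tok/s,
# the same ballpark as the Pixel figure quoted above.
start = time.monotonic()
# ... run on-device generation here, then tokenize the output ...
elapsed = 30.0          # placeholder wall-clock time (seconds)
token_count = 660       # placeholder token count from the tokenizer
print(tokens_per_second(token_count, elapsed))  # -> 22.0
```

Note that the reported figure is only as good as the tokenizer used to count tokens, which is exactly the caveat in the first line: a tokenizer with a different vocabulary will over- or under-count tokens and skew the tok/s comparison.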