I've been kicking the tires for about 40 minutes since it finished downloading, and it seems excellent at general tasks, image comprehension, and coding/tool-calling (using vLLM to serve it). I think it squeaks past Gemma4, but it's hard to tell yet.
Good to hear! Do you mind sharing your setup and tokens/second performance?
I'm running the unquantized base model on 2xA6000s (Ampere gen, 48GB each). Runs at about 25 tokens/second.
FYI, they also released FP8 quants, and those should be faster on your setup (we have the same). As long as you keep the KV cache at 16-bit, FP8 weights should be close to lossless compared to the 16-bit model, but with more context available and faster inference.
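For reference, a minimal sketch of what that looks like with vLLM's CLI. The model name and context length are placeholders, not the actual checkpoint being discussed; `--kv-cache-dtype auto` keeps the KV cache at the model's native 16-bit dtype so only the weights run in FP8:

```shell
# Hypothetical FP8 checkpoint name -- substitute the real one.
# --kv-cache-dtype auto: KV cache stays at the model's 16-bit dtype,
# so only the quantized weights are FP8.
vllm serve some-org/some-model-FP8 \
  --kv-cache-dtype auto \
  --max-model-len 32768
```

Setting `--kv-cache-dtype fp8` instead would shrink the cache further and free up even more context, but that's where quality degradation is more likely to show up.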
An "obvious" point to make is that it is not particularly usable on a unified-memory machine. I'm only getting 9 tok/s (Q6 quants) on a MacBook with an M4 Pro and 48GB of memory (using GGUFs, not MLX).
The quality seems fine, but at 9 tok/s I only tried it out briefly.