I specifically chose PrismML's 1-bit models because their tiny footprint means they actually fit on smaller hardware like the Pi. The 1.7B model works well for basic tasks and tool triggers, while the 4B model handles some daily tasks reasonably, though it's much slower on this setup. If you try these models on your M1 Max, I expect they'll run far faster. I previously ran them on a VPS and the inference speed was more than good enough for my experiment.
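
For reference, here's roughly how I'd measure tokens/sec on the Pi. This is just a sketch: it assumes the models are packaged as GGUF files and loaded through the llama-cpp-python bindings, and the model filename is a placeholder, so adapt it to whatever runtime you're actually using.

```python
# Quick tokens/sec check on the Pi (assumes a GGUF build of the model
# and llama-cpp-python installed; the filename below is hypothetical).
import time
from llama_cpp import Llama

llm = Llama(
    model_path="prismml-1.7b-1bit.gguf",  # placeholder path
    n_ctx=2048,
    n_threads=4,        # the Pi's four cores
    verbose=False,
)

prompt = "Summarize in one sentence: the quick brown fox jumps over the lazy dog."
start = time.time()
out = llm(prompt, max_tokens=128)
elapsed = time.time() - start

# llama-cpp-python returns an OpenAI-style completion dict with a usage block
n_tokens = out["usage"]["completion_tokens"]
print(f"{n_tokens} tokens in {elapsed:.1f}s -> {n_tokens / elapsed:.2f} tok/s")
```

Running the same snippet on the M1 Max (or your VPS) gives you a like-for-like number to compare against the Pi.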