What’s the quality of the model output at this RAM / model selection? Local models fascinate me; I run Ollama on an M1 Max MacBook Pro with 64GB of RAM, but I am a little bit inexperienced with the ins and outs. Thank you for sharing!
I specifically chose PrismML's 1-bit models because their tiny size lets them actually fit on smaller hardware like the Pi. The 1.7B model works well for basic tasks and tool triggers, while the 4B model is reasonable for some daily tasks, though it's much slower on this setup. On your M1 Max I'd expect them to run very fast; I previously tried them on a VPS and the inference speed was more than good enough for my experiment.
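If you want to benchmark them yourself, Ollama exposes a simple local HTTP API you can hit from Python. Here's a minimal sketch; the model tag `prismml-1.7b` is a placeholder, so substitute whatever tag the model is actually published under in your registry:

```python
import requests

# Query a local Ollama server (default port 11434).
# "prismml-1.7b" is a hypothetical tag -- replace it with the
# real tag you pulled, e.g. via `ollama pull <tag>`.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "prismml-1.7b",
        "prompt": "Summarize: the quick brown fox jumps over the lazy dog.",
        "stream": False,  # one JSON object instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```

Timing a few prompts like this on both machines is an easy way to compare tokens-per-second between the Pi and the M1 Max.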