Which local AI do you use? I am local-curious, but don’t know which models to try, as people mention them by model name much less than their cloud counterparts.
I'm frequently rotating and experimenting, specifically because I don't want to be dependent on a single model when everything changes week to week; I'm focusing on foundations, not processes. Right now I've got a Ministral 3 14B reasoning model and a Qwen3 8B model on my MacBook Pro; I think my RTX 3090 rig defaults to a slightly larger-parameter, less-quantized Ministral model, and it juggles older Gemini/OpenAI "open weights" models as they're released.
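If you want a concrete starting point, the MacBook setup above boils down to "pull a model into a local runner and chat with it." Here's a minimal sketch using the ollama Python client; the qwen3:8b tag and the prompt are just placeholders I'm assuming, and other runners (llama.cpp, LM Studio, etc.) work the same way in spirit:

    # pip install ollama -- assumes an Ollama server is running locally
    from ollama import chat

    # Placeholder tag: swap in whatever model you've actually pulled,
    # e.g. a Qwen3 8B or a quantized Mistral-family reasoning model.
    MODEL = "qwen3:8b"

    response = chat(
        model=MODEL,
        messages=[{"role": "user", "content": "Give me three things to try with a local model."}],
    )
    print(response["message"]["content"])

Once that works, swapping models is just a matter of pulling a different tag, which is why rotating between them is cheap.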