I had the best success yet earlier today running https://pi.dev with a local gemma4 model on Ollama on my M4 Mac with 48 GB of RAM. I think pi is a lot lighter than Claude Code.

I didn’t think pi supported local models?

It does! Ollama provides a helper to launch it with the local model too: https://docs.ollama.com/integrations/pi

So you can do:

    ollama launch pi --model gemma4:26b

And it launches and points to the local model in one command. pi seems to do some settings caching too, because after doing the above once I can just run `pi` and it's already set up to use the local model.

pi does; it can talk to any OpenAI-compatible API.
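For anyone who wants to point other tools at the same local model: Ollama serves an OpenAI-compatible API at `http://localhost:11434/v1` by default, so any OpenAI-style client can hit it directly. A minimal stdlib-only sketch (the model name is the one from the command above; `chat` assumes Ollama is running locally with that model pulled):

```python
import json
from urllib import request

# Ollama's OpenAI-compatible chat endpoint on its default port.
OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

def build_payload(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(model: str, prompt: str) -> str:
    """POST the request to the local Ollama server and return the reply text."""
    data = json.dumps(build_payload(model, prompt)).encode()
    req = request.Request(
        OLLAMA_URL,
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Needs Ollama running locally with the model pulled:
# print(chat("gemma4:26b", "Hello!"))
```

Since the request/response shapes match OpenAI's, you can also just drop the official `openai` client in with `base_url="http://localhost:11434/v1"` and any placeholder API key.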