What's the current situation for coding with local LLMs on decent hardware? I have an M3 Max with 64 GB of RAM and am thinking I should start looking at Ollama and Opencode. Is this a useful stack for smaller personal projects?

It's getting there. You could give Qwen 3 a try. It's still worth paying for better models in the cloud, but local models are now better than nothing.
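
For a concrete starting point, here's a minimal sketch of that stack. The qwen3:30b tag is an assumption on my part; check ollama.com for whichever Qwen coding variant actually fits in 64 GB:

    # Pull a quantized model and smoke-test it from the CLI
    ollama pull qwen3:30b
    ollama run qwen3:30b "Write a Python function that reverses a linked list."

    # Ollama serves an API on localhost:11434 by default; point Opencode
    # (or any compatible client) at it instead of a cloud endpoint.
    curl http://localhost:11434/api/generate -d '{
      "model": "qwen3:30b",
      "prompt": "Explain what this function does.",
      "stream": false
    }'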

One nice recent development is Ollama's support for MLX optimization on Mac hardware. It's still rough around the edges, though: it's not yet obvious how to tell whether the model you're running actually uses it.

https://ollama.com/blog/mlx

Use llama.cpp, or better yet, Unsloth Studio.
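
If you go the llama.cpp route, a rough sketch looks like this. The GGUF filename, context size, and port are all placeholders; substitute a quantized build of whatever model you settle on:

    # Serve a GGUF model with llama.cpp's built-in OpenAI-compatible server
    # (-c sets context length; with 64 GB you can afford to raise it)
    llama-server -m ./qwen3-30b-q4_k_m.gguf -c 8192 --port 8080

    # Any OpenAI-style client can then talk to http://localhost:8080/v1
    curl http://localhost:8080/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{
        "messages": [{"role": "user", "content": "Refactor this function to be iterative."}]
      }'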