Yes. I'd recommend a model with at least 16 GB of RAM. I was able to run it on a MacBook Air with 8 GB, but the LLM assist lagged.
You don't need to set up an LLM locally; the tool handles that. You can choose which model to use. It currently supports Gemma and Qwen.