Which models do you use, and how do you run them?
I have an M3 Max with 64 GB.
For VS Code code completion in Continue I use a Qwen3-coder 7B model; for CLI work and the sidebar, a Qwen coder 32B. Both are 8-bit quants.
I need to take a look at Qwen3-coder-next; it's supposed to be much faster despite being a larger model.
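The comment doesn't say how the 32B model is served for CLI work, so here's a minimal sketch assuming it sits behind an OpenAI-compatible local endpoint (llama.cpp server, LM Studio, and Ollama all expose one). The URL, port, and model tag are placeholders, not details from the setup above.

```python
# Query a locally served coder model from a script via an
# OpenAI-compatible endpoint. Values below are illustrative only.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # e.g. Ollama's OpenAI-compatible endpoint
    api_key="not-needed",                  # local servers typically ignore the key
)

response = client.chat.completions.create(
    model="qwen2.5-coder:32b",  # placeholder tag; match whatever your server exposes
    messages=[
        {"role": "system", "content": "You are a concise coding assistant."},
        {"role": "user", "content": "Write a shell one-liner to count lines of Python in this repo."},
    ],
    temperature=0.2,
)

print(response.choices[0].message.content)
```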