Surely if running a local LLM is a goal, you're going to be disappointed by any small form factor desktop. Even an M4 Max is slow compared to a five-year-old RTX 3090, and the M4's only real advantage is that its large unified memory lets it run bigger models, albeit slowly [1].
[1] https://www.reddit.com/r/LocalLLaMA/comments/1jn5uto/macbook...
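The gap falls out of a back-of-envelope calculation: single-stream decode is memory-bandwidth-bound, so tokens/sec is roughly memory bandwidth divided by the bytes read per token (about the model's size). A rough sketch, using approximate published bandwidth figures (~936 GB/s for the 3090, ~546 GB/s for the M4 Max) and assumed quantized model sizes:

```python
# Rough upper bound: decode tokens/sec ~= memory bandwidth / model size,
# since every weight is read once per generated token.
def tokens_per_sec(bandwidth_gbs: float, model_gb: float) -> float:
    return bandwidth_gbs / model_gb

RTX_3090_BW = 936.0  # GB/s GDDR6X (approx.)
M4_MAX_BW = 546.0    # GB/s unified memory (approx.)

model_13b_q4 = 8.0   # GB (assumed) -- fits in the 3090's 24 GB VRAM
model_70b_q4 = 40.0  # GB (assumed) -- too big for the 3090, fits in unified memory

print(f"13B Q4 on 3090:   ~{tokens_per_sec(RTX_3090_BW, model_13b_q4):.0f} tok/s ceiling")
print(f"13B Q4 on M4 Max: ~{tokens_per_sec(M4_MAX_BW, model_13b_q4):.0f} tok/s ceiling")
print(f"70B Q4 on M4 Max: ~{tokens_per_sec(M4_MAX_BW, model_70b_q4):.0f} tok/s ceiling")
```

These are theoretical ceilings, not benchmarks, but they show the shape of it: the 3090 is ~1.7x faster on anything that fits in its VRAM, while the Mac can load a 70B model the 3090 simply can't, at a much lower ceiling.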