Have you tried llama.cpp? I get ~250 tok/s on gpt-oss with a 4090; not sure about Mac speeds.
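In case it helps, a minimal sketch of running it (the GGUF repo name is an assumption on my part — check Hugging Face for the actual one; the flags are standard `llama-cli` options):

```shell
# Sketch: pull a GGUF from Hugging Face and run it locally.
# The repo name below is an assumption -- substitute the real gpt-oss GGUF repo.
# -ngl 99 offloads all layers to the GPU (CUDA on a 4090, Metal on Apple Silicon);
# -p is the prompt, -n caps the number of generated tokens.
llama-cli -hf ggml-org/gpt-oss-20b-GGUF -ngl 99 -p "Hello" -n 128
```

`llama-bench -m <model>.gguf` will give you comparable tok/s numbers on your own hardware.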