For your specs, DeepSeek Coder 33B or Llama 3 70B with GGUF quantization (Q4_K_M) would give the strongest raw capability, while Mistral Large 2 offers the best balance of performance and resource usage.
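As a rough sketch of how you might try one of these locally with llama.cpp (the Hugging Face repo, filename, and prompt below are illustrative examples, not verified paths):

```shell
# Fetch a Q4_K_M GGUF quant from Hugging Face (repo/file names are examples)
huggingface-cli download TheBloke/deepseek-coder-33B-instruct-GGUF \
    deepseek-coder-33b-instruct.Q4_K_M.gguf --local-dir ./models

# Run it with llama.cpp's CLI:
#   -ngl offloads layers to the GPU, -c sets the context window
./llama-cli -m ./models/deepseek-coder-33b-instruct.Q4_K_M.gguf \
    -ngl 99 -c 4096 -p "Write a Python function that reverses a string."
```

Note that a Q4_K_M quant of a 33B model is roughly a 20 GB file, so check it fits in your available VRAM (or spills gracefully to system RAM) before committing to it.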