I just did some quick testing on my own benchmark that evaluates LLMs as customer support chatbots, and found that deepseek-v4-flash (90.2%) beat qwen3.5-27b (89%) and qwen3.5-35b-a3b (89.1%) and was roughly equal to gemini-3-flash-preview (90.5%), while having by far the lowest cost of the group: half the cost of gemini-3-flash and an order of magnitude cheaper than the qwen models.
Has anyone else noticed deepseek-v4-pro performing worse than deepseek-v4-flash? In my runs it scored even below qwen3.5-27b. That surprised me, and I'm wondering if there's a bug in my software, because I had to implement sending the `reasoning_content` back with the conversation history, otherwise the API failed with a BadRequestError.
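For anyone hitting the same error: a minimal sketch of what I mean by "sending the `reasoning_content`". This is an assumption based on my own workaround, not documented API behavior — the field name `reasoning_content` comes from the response payload, and the helper below just rebuilds the messages list for the next request, keeping that field on assistant turns.

```python
def build_followup_messages(history, user_msg, include_reasoning=True):
    """Rebuild the messages list for the next multi-turn request.

    history: prior turns, e.g.
      {"role": "assistant", "content": "...", "reasoning_content": "..."}
    include_reasoning: keep the `reasoning_content` field on assistant
      turns (the workaround that stopped the BadRequestError for me --
      hypothetical, your endpoint may want the opposite).
    """
    messages = []
    for turn in history:
        msg = {"role": turn["role"], "content": turn["content"]}
        # Only assistant turns carry reasoning traces; copy the field
        # through verbatim when requested.
        if (include_reasoning
                and turn["role"] == "assistant"
                and "reasoning_content" in turn):
            msg["reasoning_content"] = turn["reasoning_content"]
        messages.append(msg)
    messages.append({"role": "user", "content": user_msg})
    return messages
```

If your runs work without this, that difference alone could explain diverging scores between harnesses.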
How can a medium-sized model like Deepseek-V4-Flash be cheaper than a much smaller model like Qwen3.5-35B-A3B?
It's five times bigger in both total and active parameters!