The debate round sounds good until you actually use it. I built internal tools for a 35-person team and the same thing always happens - models see each other's answers and just shuffle the phrasing around instead of actually changing their reasoning. What you're measuring is performance on persuasion, not on accuracy or clarity. The real question isn't whether Claude will convince Gemini to flip its position. It's whether having 200 models debate helps you make a better decision than asking one model well and checking its work yourself. I'd use this more as a way to find edge cases where models disagree wildly, not to find consensus.
I've had some genuinely interesting reads just looking at the reasoning, to be honest. The frontier models come up with relevant-sounding arguments every time; sometimes it's actually hard to cut through the BS and tell which arguments are good and which ones I'd actually want to read.
The debate round is actually restricted to 6 models, otherwise it would get out of hand, both in quality and financially. And changing position is just one feature of the debate - seeing arguments from multiple sides is also quite nice. Give it a spin!
https://opper.ai/ai-roundtable/questions/b6d098dd-c9f