Why does this have to be true? For example, if the judging LLM is different from the one being judged, their biases could at least be different. Also, as reasoning abilities improve, wouldn't LLM judges approach the abilities of human judges?

LLM judges have positional, response-length, and hedge-word biases (and that's just what's been rigorously demonstrated in papers), which wash out the differences between high-performing answers as you approach the limit of your objective. Imagine trying to optimize a function where the measurement emits biased random noise: at some point you can no longer accurately identify the impact of your changes.
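A toy simulation of that point, assuming made-up magnitudes for the positional bias and the score noise: when two answers differ by less than the judge's bias-plus-noise floor, whichever answer sits in the first slot "wins" close calls, regardless of quality.

```python
import random

random.seed(0)

def judge(q_first, q_second):
    # Hypothetical judge model: score = true quality, plus a fixed
    # bias toward whichever answer appears first, plus random noise.
    # Both magnitudes are assumptions chosen for illustration.
    bias, noise = 0.05, 0.10
    s_first = q_first + bias + random.gauss(0, noise)
    s_second = q_second + random.gauss(0, noise)
    return s_first > s_second

# Two answers whose true quality differs by only 0.02.
better, worse = 0.92, 0.90
trials = 10_000
better_wins_when_first = sum(judge(better, worse) for _ in range(trials)) / trials
worse_wins_when_first = sum(judge(worse, better) for _ in range(trials)) / trials

# The better answer looks dominant when listed first, but the *worse*
# answer also wins a majority of trials when it gets the first slot:
# position, not quality, decides the close calls.
print(better_wins_when_first, worse_wins_when_first)
```

Shrink the true quality gap further and the comparison becomes pure position bias, which is exactly the "limit of your objective" problem.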

Indeed, human judges suck on average. And you can prompt an LLM judge to look for particular kinds of problems, then throw the ensemble of judges at an output to nitpick. (Essentially, bake in a diversity of biases through a collection of prompts.)
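The ensemble idea might look something like this sketch. `ask_llm`, the prompt templates, and the "OK means no problem found" convention are all assumptions standing in for whatever LLM client and protocol you actually use:

```python
# Each prompt aims one judge at one failure mode, so the ensemble's
# biases are diverse rather than identical. These templates are
# illustrative, not from the original comment.
CRITIQUE_PROMPTS = [
    "Check this answer for factual errors; reply OK if none: {answer}",
    "Check this answer for unsupported claims; reply OK if none: {answer}",
    "Check this answer for missing edge cases; reply OK if none: {answer}",
]

def nitpick(answer, ask_llm):
    """Run every specialized judge over the answer, collect complaints."""
    findings = []
    for template in CRITIQUE_PROMPTS:
        reply = ask_llm(template.format(answer=answer))
        if reply.strip().lower() != "ok":
            findings.append(reply)
    return findings
```

`ask_llm` is just a callable taking a prompt string and returning the judge's reply, so you can swap in any model per judge, including different LLMs to diversify the biases further.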