I think the issue is that the (American) standardized tests don't differentiate well enough at the top. About 10,000 American high school graduates earn a 36 on the ACT or a 1580+ on the SAT each year. That's because the problems are much too easy: the very first round of MATHCOUNTS, a middle school math competition, is harder than the ACT or SAT math section. Rather than making the test harder, the testmakers make it trickier. It's like that exercise many of us did in elementary school to learn to follow instructions, where the sheet tells you to read through all the instructions first, asks you to do a bunch of random things, and hidden in there somewhere is "ignore all the previous instructions and just write your name at the top of the paper". The test isn't hard, but you're prone to mess up if you haven't seen that style of testing before (on the SAT, it's 90 seconds per problem with problems that try to break your pattern recognition, e.g., what is 1 + 2 + 3 + 4 + 5 + 7 + 8 + 9 + 10?).
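To spell out the trick in that last example: the sequence quietly skips 6, so the answer is 49, not the 55 that the familiar 1-through-10 pattern suggests. A quick check:

```python
# The trap in "1 + 2 + 3 + 4 + 5 + 7 + 8 + 9 + 10" is the missing 6:
# pattern recognition shouts "sum of 1..10 = 55", but the real answer is 49.
terms = [1, 2, 3, 4, 5, 7, 8, 9, 10]
print(sum(terms))         # 49 — the actual sum
print(sum(range(1, 11)))  # 55 — the answer your pattern-matching wants to give
```

Under time pressure of 90 seconds per problem, that one-term difference is exactly the kind of slip the test is fishing for.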

An 800 on the math section isn't enough to predict whether someone qualified for the AIME, but it is enough to predict that they spent several weeks taking SAT math practice tests. It's clearly failing to be predictive of anything the top universities should be looking for. That doesn't mean all standardized tests have to fail this way: the AMC (and then the AIME and USAMO) are standardized tests that universities like MIT do accept scores from, and they actually get useful information out of them.