I think you need to distinguish between complex systems and byzantine systems. You can have complex systems where every piece shares a common goal, but the feedback loops are hard to reason about. You can also have systems that, if a common goal were shared, wouldn't be that hard to understand, model, and optimize, but where the actors in the system are not acting in good faith.
And I agree with the above poster: often, a problem is described as "hard" as a way of excusing the agents. Sure, the problem is hard. But the reason it's hard isn't some esoteric, arcane complexity; it's that some of the agents aren't even trying.