It'll still suck for CI users. What you'll find is that occasionally someone else on the same CI server has recently downloaded the file several times, and when your job runs, your download goes slowly and you hit the CI server's timeout.

CI users should cache the asset, both to speed up CI runs and to avoid running up costs for the orgs providing the assets free of charge.
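Concretely, the cache doesn't have to be anything fancy. A minimal Python sketch, assuming the CI runner persists a directory between jobs (the cache path and URL here are placeholders, not anyone's real API):

    import hashlib
    import shutil
    import urllib.request
    from pathlib import Path

    # Hypothetical cache location; point this at whatever directory your
    # CI system actually persists between runs.
    CACHE_DIR = Path.home() / ".cache" / "ci-assets"

    def fetch_cached(url: str) -> Path:
        """Download url once, then serve later requests from the local cache."""
        CACHE_DIR.mkdir(parents=True, exist_ok=True)
        # Key the entry on a hash of the URL so distinct assets never collide.
        cached = CACHE_DIR / hashlib.sha256(url.encode()).hexdigest()
        if cached.exists():
            return cached  # cache hit: zero traffic to the upstream org
        # Cache miss: write to a temp name, then rename atomically so a
        # killed job can't leave a truncated file that later runs would trust.
        tmp = cached.with_suffix(".part")
        with urllib.request.urlopen(url) as resp, open(tmp, "wb") as out:
            shutil.copyfileobj(resp, out)
        tmp.replace(cached)
        return cached

    asset = fetch_cached("https://example.com/some-asset.tar.gz")

Every job after the first gets the file locally, which addresses both the timeout problem and the upstream bandwidth bill.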

That's working as intended, then; you should be caching such things. It sucking for companies that don't bother is exactly the point, no?

It's not unreasonable for each customer to maintain a separate cache (for security reasons), so that each of them will download the file once.
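One hedged sketch of that separation (the names and layout are hypothetical, not any particular product's scheme): key the cache path by a hash of the customer id as well as the asset URL, so tenants never read each other's files but each still downloads a given asset only once.

    import hashlib
    from pathlib import Path

    def tenant_cache_path(cache_root: Path, customer_id: str, url: str) -> Path:
        """Per-customer cache entry: isolated across tenants, deduplicated
        within one."""
        # Hash both components so a hostile id or URL can't smuggle path
        # separators or '..' segments past the cache root.
        tenant = hashlib.sha256(customer_id.encode()).hexdigest()
        asset = hashlib.sha256(url.encode()).hexdigest()
        return cache_root / tenant / asset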

Then it only takes one bad user on the same subnet to ruin the experience for everyone else. That sucks, and it isn't working as intended, because the intent was to punish only the one abusive user.

That's why subnets have abuse contacts. If the network operators don't care, they can't complain about being throttled/blocked wholesale.