Companies have to inform affected individuals of data breaches, especially when HIPAA gets involved. Brokers have to inform clients of transaction errors. Auto manufacturers have to inform owners of recalls. Retirement funds have to inform plan participants of lawsuits involving those funds.

You don't even have to invoke the idea that Meta is big enough to be regulated as a public utility; there's already broad precedent for forcing a malicious actor to inform its victims that they might be entitled to a small fraction of their losses in compensation.

Well, we aren't discussing the government requiring Meta to inform users. We're discussing whether Meta can choose which private actors' ads to allow. It would seem silly for a platform to be forced to allow all ads.

Aha, how clever. We aren't discussing whether they can be forced to display messaging; we're discussing whether they're going to later get slapped down for blocking that messaging.

I get that the distinction matters sometimes (though court cases in the US keep blurring the line), but:

1. With all the other shit that makes it through the filter, this was pretty clearly a targeted, strategic takedown rather than some broad "we don't allow bad ads on the platform" policy. The question isn't whether they must allow "all ads"; it's whether they should have blocked "this ad."

2. The uncontroversial idea that "abusers shouldn't be allowed to deceive and gaslight their victims" strongly suggests this was a bad move on Meta's part if it was an intentional act. Maybe it shakes out fine for them legally in this particular instance, but as a society we routinely require companies and individuals to display more moral standing than this, which suggests blocking this particular ad is over the line, and it's neither naive nor utopian to think so. Even if it's legally in the light-grey, it's an abuse of power worth talking about, and hopefully it inspires more people to leave the platform.