The core Bluesky team has been discussing ways to limit toxicity, so this is definitely on their radar. Substack also leans into the idea that its platform manages social media toxicity better than others, at least in its marketing copy. In 2025 and beyond, I think toxicity is a major dimension on which to evaluate social platforms.
Substack is biased towards long form, and long form is inherently antifragile against toxicity, although bad actors armed with LLMs could in theory change that.
I probably read more viewpoints I disagree with on Substack than anywhere else.