How much protection do platforms have against user media submissions? If you implement a dcma/illegal-content report button that instantly takes the media down, maybe even just logically (a soft delete), is that sufficient?
It might, but then you’ve created a whole new set of problems: if anyone can take down anyone else’s content with one click, people will use it against anybody they dislike just for the hell of it. (This happened on Tumblr for a brief period: the Report button led almost automatically to a ban, until they quickly realized it was unworkable.) So if you don’t want everyone banning everyone, you need a moderation team anyway to handle false reports, and you’re right back where you started.
Agreed. I was mostly asking about any legal issues.
The problems are exactly as you stated. We even see this happen with invalid dcma complaints in moderation-heavy environments. There are safety rails, such as rate-limited reports per user, but then you need some moderation anyway.
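A minimal sketch of what such a per-user rate limit could look like (hypothetical names, in-memory only; a real service would back this with shared storage):

```python
import time
from collections import defaultdict, deque

# Assumed policy for illustration: at most MAX_REPORTS per user per WINDOW seconds.
MAX_REPORTS = 5
WINDOW = 3600  # seconds

_report_times = defaultdict(deque)  # user_id -> timestamps of recent reports


def allow_report(user_id, now=None):
    """Return True if this user may file another report right now."""
    now = time.time() if now is None else now
    times = _report_times[user_id]
    # Drop timestamps that have aged out of the sliding window.
    while times and now - times[0] >= WINDOW:
        times.popleft()
    if len(times) >= MAX_REPORTS:
        return False  # over the limit; queue for human review instead
    times.append(now)
    return True
```

This only slows down abuse; it doesn't decide whether any individual report is valid, which is why some human moderation is still needed behind it.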
But if the legal requirement is just "take down the media if the FBI comes knocking", maybe it's easier to deal with it that way when there's no budget for moderation.
fyi it's DMCA