Throwing the baby out with the bathwater?
I believe we need to strengthen 230, but with an added caveat: affected platform owners must stop gaming the algorithms, and the law must require user-driven curation. Let me curate my own feed; stop shoving shit in front of my eyes. When you do that, you're making heavyweight editorial decisions, and you should be open to liability for them.
This is really the essence of it. Section 230 is critical to a healthy internet, but there is a large grey area between editor and platform. Places like YouTube, Meta, X, etc. are pretending to be platforms when really they are algorithmic editors, gatekeepers, and curators. They are much more like traditional newspapers than, say, your ISP, and they need to be treated as such.
What about the internet today is healthy such that anyone could point to Section 230 as the reason why?
The internet is unhealthy today specifically because the law elevates platform editorializing to the same level as individual freedom of speech.
I agree, but I'm not sure the person I'm responding to would. I cannot imagine how anyone could describe today's internet as healthy.
That phrase does not mean what you think it means.
A few years ago this seemed a bit too extreme to me. Now, with the web mostly burned down anyway, I see little to lose and lots to gain in a Section 230 repeal. My, how the Overton window shifts on some ideas. And when it's shifting on some things it tends to accelerate on others too, a kind of social momentum toward reconsidering past norms.
My compromise pitch, since the "You need ID from your users" ship has sailed:
Companies are not liable if they have proper ID of the person who submitted the content and can provide it to a plaintiff. If they have not made a good-faith effort to know who submitted the content (e.g., by taking ID, not just an email address), then they are taking responsibility for the submitted content themselves.
Which means sites that have responsible moderation can still allow anonymous contributions.
The real problem is the inherent asymmetry of legal battles, where the wealthiest can fight forever with endless motions and have near-total impunity while a legal action would basically nuke a normal person's life. Not to mention the fact that an international border can often make this whole conversation moot.
> Which means sites that have responsible moderation can still allow anonymous contributions.
Anonymous contributions, right up until somebody compromises the service? Given the quantity of password-hash thefts, I suspect we'd see even more identity theft this way.
I can't imagine using any service that asks for ID, except perhaps the well-established giants, so an exception for identifiability would effectively be a gigantic moat granted to the largest internet companies to keep out competition. Anything like that would need to be paired with massive antitrust changes, and perhaps a government takeover of the giants as utilities, none of which sounds very appealing...
That said, don't take any of my rambling as discouragement; your kind of thinking is exactly what we need. We need massive amounts of policy discussion, and your suggestion is genuinely innovative.
That's basically how things used to work in Germany. If someone torrented movies on your internet connection, you were fined. No ifs, no buts: they monitored 100% of the public torrents, and courts upheld 100% of the fines. And they didn't care who actually did it; if they didn't know (which was almost always the case), they fined the owner of the internet connection. It was a really, really bad law. For 10-15 years after every other country had public wifi hotspots, Germany didn't, because the owner would get fined for every torrent. Only after a very long time did they finally pass a law saying public wifi operators didn't have to pay.
I like this compromise.
One of my issues is the lack of liability in practice. The poster is technically liable, but they're anonymous, behind proxies, foreign, etc., and effectively unaccountable. The result is people being harmed online without recourse.
These companies should have a duty to know who their users are.
The main problem with 230 is that the courts have decided to treat it as if it removes all legal liability from online platforms, rather than just publisher liability. The way the text was written seems intended to shield platform operators from publisher liability while still leaving them under distributor liability. For example, if you own a bookstore and carry a book that says something defamatory, you can be held liable if you don't remove the book after being informed of its contents. However, a court case soon after 230 passed (Zeran v. AOL) created the precedent that it absolves online platforms of all forms of liability. This means that if a platform knows it hosts illegal or defamatory content and doesn't take it down, it isn't liable, and any legal case against it will get thrown out under 230. One of the authors of Section 230 later said that "the judge-made law has drifted away from the original purpose of the statute."
>For example, if you own a bookstore and carry a book that says something defamatory, you can be held liable if you don't remove the book after being informed about its contents.
I don't think you can in the US. Maybe elsewhere, but in the US, AFAIK, the author is responsible for the content they publish, not the bookstores carrying the books.
>This means that if a platform knows it hosts illegal or defamatory content and doesn't take it down, they aren't liable and any legal cases against them will get thrown out due to 230.
No it doesn't. Section 230 doesn't allow sites to host illegal content; of course, only legality within the framework of US law matters here.
All it says is that the liability for user-posted content lies with the user posting it, not with the platform hosting it. Which seems appropriate to me.