Listing content alphabetically or chronologically is technically an "algorithm" too. What I'm specifically challenging here is the personalized algorithm designed to keep individual users on the platform based on a user profile influenced by countless active and passive choices the user has made over time. The type of HN algorithm that serves the same content to every user based on global behavior is fine in my book because it is both less exploitative of the user base and a reflection of that user base's proactive decisions in upvoting/downvoting content.
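The distinction being drawn could be sketched in a few lines of Python. This is purely illustrative: the scoring formulas and field names are hypothetical, not HN's or any platform's actual implementation. The point is structural: the first function produces one ordering for everyone from community votes, while the second re-orders the same items per user from a behavioral profile.

```python
from datetime import datetime, timezone

def global_rank(stories):
    """HN-style: one ranking for all users, driven only by community
    votes and story age (an illustrative gravity formula, not HN's real one)."""
    def score(story):
        age_hours = (datetime.now(timezone.utc) - story["posted"]).total_seconds() / 3600
        return (story["votes"] - 1) / (age_hours + 2) ** 1.8
    return sorted(stories, key=score, reverse=True)

def personalized_rank(stories, profile):
    """Platform-style: the same stories re-ordered per user, using a
    profile accumulated from that individual user's past behavior."""
    def score(story):
        return sum(profile.get(tag, 0.0) for tag in story["tags"])
    return sorted(stories, key=score, reverse=True)
```

Two users calling `global_rank` see identical feeds; two users calling `personalized_rank` with different profiles see different ones, even over the same story set.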
So if HN added anything personalized, like allowing you to show fewer stories on topics you dislike, it would lose protection? I can't get on board with that.
I also think it would be extremely unpopular. People like their recommendation engines. They want Netflix to show them more similar shows. They want Reddit to help them find more similar subreddits. I know there are HN users who don't want any of these recommendation engines, but on the whole people actually want them.
>People like their recommendation engines.
People liked cigarettes too.
>They want Netflix to show them more similar shows.
Perhaps that example was a little too revealing on your end. Netflix doesn't have/need Section 230 protections and they're doing fine.
I'm not suggesting these algorithms should be illegal, just that Section 230 protections were defined too broadly because they predated the feasibility of this type of algorithm. These platforms would be free to continue algorithmic promotion, but I believe these algorithms would be less harmful if the platforms had to worry about potential legal liability.
Think YouTube and copyright for comparison. The DMCA is far from perfect, but we have YouTube as an example of a platform that survived and even thrived in the transition from a world that didn't care about copyrighted internet video to one in which they needed to moderate with copyright in mind.
> People liked cigarettes too.
Cigarettes weren’t made illegal. Cigarette companies are not liable for their users’ choice to consume them. What’s your point?
> Perhaps that example was a little too revealing on your end. Netflix doesn't have/need Section 230 protections and they're doing fine.
Perhaps it was a little too revealing on your end that you conveniently ignored my other example of Reddit.
If you need to cherry pick to make your point it doesn’t look very strong.
I still don’t see consistency in your argument that Section 230 should still apply to Hacker News but not, for example, Reddit, simply because one of them allows users to personalize the content they see.
> Cigarette companies are not liable for their user’s choice to consume them.
They kind of were. Not completely liable, but partially. Because... um, well, uh, yeah, they are. They are literally liable.
If you produce cigarettes, you are partially responsible for people smoking. Smoking is also not a "choice", come on now. The only people who believe that are people trying to sell you cigarettes or people who have never smoked.
That's why you can't advertise cigarettes anywhere anymore and they're very hard to find. And, when you do find them, the box tells you "hey please don't smoke this". R.J. Reynolds didn't do that by fucking choice, we forced them.
> They kind of were. Not completely liable, but partially. Because... um, well, uh, yeah, they are. They are literally liable.
Cigarette companies are not legally liable for the consequences their users encounter.
It’s really hard to have an actual discussion about anything when people are just making up their own definitions.
Cigarette companies paid billions, and continue to pay, for the societal harm they cause. That's a liability. They're not legally liable in the sense that nobody is going to jail. But they have financial liabilities. Because they do, literally, cause financial harm.
I don't think people really understand just how harshly we ran Tobacco companies into the ground. Many pay more per cigarette for liability than they pay to make the cigarette.
In the narrow definition of the term you are using, cigarette companies were found legally liable.
The whole reason they got sued and regulated was because they hid the fact that they knew their product was causing cancer in its users.
There’s additional regulation on cigarettes, which also includes higher taxes on its sale.
We regularly put limits on industries which create externalities that have to be borne by the exchequer.
> Cigarette companies are not legally liable for the consequences their users encounter.
Ok! But they do have to follow a bunch of extra laws that cost them a ton of money and/or users.
Therefore the same can apply to social media algorithm companies.
One extreme example: just as with cigarettes, there could be 18+ age verification for social media. That would be a big deal.
This is the type of comment that suggests you aren't engaging with what I'm saying beyond a superficial level. My argument is consistent. I'm not cherry-picking examples. The differentiator I'm criticizing is the personalized nature of the algorithms. But rather than engaging with the merit of that distinction, you're acting as if there is no distinction at all. I'm not sure if there is much point in continuing the conversation from there.
I think the other person's issue with your position is that the distinction is entirely arbitrary. You're not giving any reasons why the demarcation line for which feed algorithms are OK and which are not is there instead of anywhere else. It seems to be just "Facebook and TikTok are bad; their feeds are personalized recommendation engines; therefore personalized recommendation engines are bad, and other feed algorithms are OK".
>I think the other person's issue with your position is that the distinction is entirely arbitrary.
Basically all laws related to speech are arbitrary. Can you define a clear and self-evident line between pornography and art as an example? Or do you agree with the Supreme Court that we just "know it when [we] see it"?
>You're not giving any reasons why the demarcation line for which feed algorithms are OK and which are not is there instead of anywhere else.
Let me just copy and paste what I said before: "The type of HN algorithm that serves the same content to every user based on global behavior is fine in my book because it is both less exploitative of the user base and a reflection of that user base's proactive decisions in upvoting/downvoting content." I can understand if one of you wants to challenge that line of thought, but both of you acting like I didn't give any reasoning at all is bizarre and gives me the impression that you aren't actually reading what I'm writing.
> Basically all laws related to speech are arbitrary.
True. This is a fair point. But the expected counterargument would be that the exact line isn't the issue; it's the justification for the principle.
I.e., why are personalized algorithms more dangerous than general ones?
My answer (because I mostly agree with you) is that the difference is that personalized algorithms almost feel like brain hacking. And this brain hacking simply doesn't work at scale when applied to vague general algorithms.
>Basically all laws related to speech are arbitrary. Can you define a clear and self-evident line between pornography and art as an example? Or do you agree with the Supreme Court that we just "know it when [we] see it"?
I'm a free speech absolutist, so I personally don't find which laws already exist on the matter to be a compelling argument. If it was up to me, I'd get rid of any such laws.
>The type of HN algorithm that serves the same content to every user based on global behavior is fine in my book because it is both less exploitative of the user base and a reflection of that user base's proactive decisions in upvoting/downvoting content.
The argument hinges entirely on the relative exploitativeness of different feed algorithms, but that metric is merely asserted with no support.
>I'm a free speech absolutist
Typically free speech absolutism leads individuals into logical traps they find difficult to dig themselves out of.
But we don't even need that in this case. Private property can have all kinds of restrictions put on it based on the potential dangers and harms it causes. This is in fact one of the most common attacks on speech I see right now (Meta et al.): that they will just put age requirements on sites.
>Typically free speech absolutism leads individuals into logical traps they find difficult to dig themselves out of.
Yes, "free speech absolutists" tend to define these terms in ways to hide the true arbitrary nature of their beliefs. The obvious test case is do they believe in legalizing CSAM. Either they answer "yes" and ostracize themselves from almost all of society or they say "no" and have to come up with arbitrary rules why this specific content doesn't count as speech. Either way, self-applying the label is its own red flag.
I don't really understand what your point is.
If I understand the point correctly, it's that regulating the algorithms of Meta et al does not curtail your free speech, so it's a moot argument
I wasn't the one who brought free speech into the discussion; slg was. That aside, whether it curtails it or not would depend on how one defines "speech". Even if the particular way in which a website displays information is not speech, I still think it would be an overreach for a government to legislate how websites are allowed to function. If I as a user want to see a feed populated by recommended content, and the site's operators want to show it to me, what business does the government have stepping into our interaction?
Cigarettes and their externalities are analogous and that's discussed over here
https://news.ycombinator.com/item?id=47419870
I don't believe the argument was that personalized algorithmic recommendations need be forbidden per se, but rather that they shouldn't be the default, nor that companies should be able to wash their hands (under Section 230) of what they promote.
Like the other person said, cigarettes are not illegal. Are we really going to pretend that whatever harm TikTok causes is comparable to lung cancer?
Like the other posts you're arguing against have said, the argument is not that social media or personalized algorithms should be "illegal"
And "are we going to pretend" is a non-argument that works both ways: "Are we really going to pretend individualized algorithmic social media hasn't caused harm to society on par with smoking?" would be equally unconvincing
There's no pretending, there. It just hasn't.
What do you think about the case of Lucy Connolly, who, during a riot where rioters were burning down hotels housing immigrants, tweeted that people should burn down hotels housing immigrants and was arrested for that?
I already stated what my position is. Why do you need to ask about specific cases? Are you trying to look for gotchas?
Of course Section 230 would apply to both sites, but only to the user-generated part of each site, because that's what Section 230 says.
That is not comparable because of how little control you have over the algorithm in the other cases. On Bandcamp, you can select the genre and a sorting criterion and have very good control over the list. But on Spotify, it’s very opaque, with things you never asked for appearing even before your own library.
For me, the distinction is control. If I'm filtering out things I don't like, I'm in control. If the system is filtering out or promoting items, I think it's fair that it take on more responsibility.
A system doesn't keep your feed full just because it wants your eyes, but because of money. When it chooses what goes into the feed, it should gain increased liability for what comes out. That's the risk it takes on for more money. If that money is not worth it, don't recommend.
I enjoyed the internet in the beforetimes. Recommendations were limited to "this is objectively related, this is new, this is upvoted, this is by someone you follow or someone they follow, or this is randomly chosen." I still feel there is some liability there, but it is less than when it changes to "this is something we have determined we should show you based on your personal past behavior." That feels different than liking a category when the meta-categories are picked for you. Especially when those meta-categories allow for things you would not want to opt in to, like doomscroll material.
I like some of the stuff I get algorithmically. I never would have searched for a soul cover of Slim Shady, but I'm glad I found it. And I'm glad I found knot tying videos. I think there is space for fancy feeds. But I think it should come closer to being a publisher. This _will_ depress throughput creation if things all have to be monitored which will change the economies and maybe that means some businesses can't exist as they do today. I'd likely pay a subscription to a LearnTok that had curated, quality material.
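The control distinction could be made concrete with a sketch like this (all names are hypothetical, purely for illustration): a filter the reader configures themselves versus a promotion step where an opaque platform model decides what to surface.

```python
def user_filter(feed, blocked_topics):
    """User-controlled: the reader has explicitly opted out of some topics.
    The platform merely applies the reader's own rule."""
    return [item for item in feed if item["topic"] not in blocked_topics]

def platform_promote(feed, engagement_model):
    """Platform-controlled: an opaque scoring model chosen by the platform
    decides what to surface. Under the argument above, this is the step
    where editorial responsibility (and liability) would attach."""
    return sorted(feed, key=engagement_model, reverse=True)
```

The first function only ever removes what the user asked to remove; the second makes an affirmative choice about what the user sees, which is the "publisher-like" act the comments above are describing.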
I'm paying for Netflix to do that as a feature. Instagram uses that to drive engagement to sell ads. Disabling personalized content on Netflix is a revenue-neutral choice. On Instagram, that would mean their ad revenue takes a huge dive. Apples aren't oranges.
Netflix does it to drive engagement as well.
1.) I do not know anyone who particularly likes Netflix's recommendation algorithm.
2.) Netflix's algorithm is not relevant to "Section 230 protections", because it does not involve any data from third parties. All of that is Netflix content.
I can get on board with it for sure.
There's a paper that studied the spread of misinformation online, back before COVID. It found that messages cascaded through science- and research-oriented networks differently than they cascaded through conspiracy communities.
Popularity is not a signal of quality. It’s a sign of being able to scratch the limbic system and the zeitgeist at the same time.
For a site like HN, popularity isn’t a good predictive signal.
But algorithmic feeds can actually be useful for discovery of related material - I want Youtube to show me more Japanese jazz and video essays about true crime based on my watch history, I wanted Twitter to show me more accounts from writers and game developers because I follow them (before the platform went full Nazi) and I like that Facebook shows me people and information from my local area. Forcing all platforms to use only alphabetical or chronological feeds because of the exploitative way some platforms use algorithms seems awfully close to the "banning math" argument people used to use about cryptography and DRM, and it would remove a lot of legitimate use from the internet.
It's all about who controls the algorithm. A sensible approach would be to decouple recommendations from platforms, to treat them like plug-ins that the user must be allowed to add or disable. You want to use YouTube's recommendation algorithm on YouTube? Great, but there needs to be an off-switch and a way to change over to another provider. This is classic anti-trust stuff, breaking up a sector into interoperable pieces.
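That plug-in idea could look something like the sketch below (every name here is hypothetical): the platform supplies candidate items, and the user selects a ranking provider, or disables ranking entirely, in which case the feed falls back to reverse-chronological order.

```python
from typing import Callable, Optional

# A ranker is any function that takes a list of items and returns a re-ordered list.
Ranker = Callable[[list], list]

def build_feed(items: list, ranker: Optional[Ranker] = None) -> list:
    """Assemble a feed with a user-chosen ranking plug-in.

    The off-switch is just ranker=None: no provider selected means a plain
    reverse-chronological feed. A platform's own algorithm would be one
    interchangeable Ranker among others, not a mandatory default.
    """
    if ranker is None:
        return sorted(items, key=lambda i: i["posted"], reverse=True)
    return ranker(items)
```

The interoperability point is that `Ranker` is a narrow interface: YouTube's recommender, a third party's, or the null option all plug into the same slot, so switching providers doesn't require leaving the platform.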
The anti-trust argument doesn't work for me. Neither YouTube nor any other single platform represents a "sector" in the way Standard Oil or Ma Bell did, and they don't "control the algorithm" in any sense beyond implementing code on their own sites. Certainly not in the way a monopoly prevents other entities from competing by controlling access to some physical resource. Other video hosting sites besides YouTube exist, other social media platforms exist, so competition exists.
And besides, what's likely to happen is that you'd end up with a few "algorithm providers" gatekeeping the entire web, which only centralizes it even more.