> They also included 2,000 prompts based on posts from the Reddit community r/AmITheAsshole, where the consensus of Redditors was that the poster was indeed in the wrong.
Sorry, anonymous people on reddit aren't a good comparison. This needs to be studied against people in real life who have a social contract of some sort, because that's what the LLM is imitating, and that's who most people would go to otherwise.
Obviously subservient people default to being yes-men because of the power structure. No one wants to question the boss too strongly.
Or how about the example of a close friend in a relationship or making a career choice that's terrible for them? It can be very hard to tell a friend something like this, even when asked directly if it is a bad choice. Potentially sacrificing the friendship might not seem worth trying to change their mind.
IME, LLMs will shoot holes in your ideas, and they do so efficiently. All you need to do is ask directly. I have little doubt they outperform most people who share some sort of friendship, relationship, or employment structure with the asker. It would be nice to see that studied, rather than comparing against Reddit commenters who already self-selected into answering "AITA".
> Sorry, anonymous people on reddit aren't a good comparison.
Yeah especially on r/AmITheAsshole. Those comments never advocate for communication, forgiveness and mending things with family.
Additionally, I'm sure many posts and replies on r/AmITheAsshole are LLM-generated in the first place.
Well, because that's never the correct choice. There's a big, big filter on who actually posts there. Easy problems with obvious solutions never make it there.
Think about it, how fucked does your relationship have to be to post on Reddit for advice?
Someone has a chart somewhere that shows responses in that subreddit getting more and more anti-conciliatory over time. I think it’s online misanthropy (measured by Reddit responses) increasing over time rather than it being objectively never the correct choice.
This wrongly assumes people are good at judging what easy problems are.
Not to mention that nowadays an untold number of posts to subreddits that invite commentary are made-up stories from accounts trying to get engagement.
Yes, it is a toxic sub, where the notion that there can be greater happiness on the other side of forgiveness than cutting ties is all but absent.
To be fair, it’s easier to concisely justify cutting someone off than to justify forgiveness. And the latter will land with some people but not others, while the former will only be rejected by people who have already worked out a theory of forgiveness themselves. As a result, the simpler pitch gets upvoted, even if the majority would have been swayed by a fuller set of arguments the other way.
It’s a good theory. My theory is, for whatever reason, jaded, narcissistic, miserable people congregate in r/AITA and try to drag other people into their misery because that’s easier than accepting responsibility and doing something to change.
Before Reddit made hiding profiles easy you'd click on a user's unreasonably scorched earth advice to the OP, and find their post history is essentially going to every story they come across and advocating for scorched earth.
What are the chances you were seeing the anti-civ bots and now reddit makes them easier to hide? (And I'm not saying regular people acting like bots, but an anti-civ campaign.)
I believe this. There is a graph somewhere of the relationship subs tending towards breaking up over time.
I don't think this necessarily means the advice is getting worse. My friends are pretty mature and stable people, and I've found they've had way more trouble from staying in relationships longer than they should have than from breaking up too early. Especially for relationships earlier in people's lives (many people I know have a story about staying in one way longer than they should have, and that seems to be the age range of the people asking for advice), erring towards breaking up seems prudent.
Not that these relationships subreddits are good (often it's obviously children trying to give advice they don't have the experience for) but I don't think that telling people to break up more is less accurate advice.
It's often that a lot of "NTA" answers are downright antisocial.
"No one owns you anything, you don't own anyone anything" mentality, without a crumb of social awareness.
“AI is nicer than the average redditor” would be a more accurate title
IMHO it's not about being nice. AITA threads show an interesting phenomenon of social consensus, I think the authors wanted to show that the LLMs they checked don't have that.
I don't think Reddit is a great place to gauge social consensus among well-adjusted people, or representative of the average adult view. The opinions I see on Reddit never match those of anyone I consider reasonable in real life. And I don't mean politics; I wouldn't know, since I don't frequent political subreddits.
It seems fairly consistently miserable in any of the common high-traffic subs, and you have to get down to really niche communities to see what I consider reasonable behavior that matches the behavior of people I know in real life.
Pretty sure the average Redditor is AI now.
How the hell is a study on stanford.edu assuming posts on Reddit are genuine? That should be enough to get you kicked out of Stanford.
Though interestingly, the observed difference in assessment suggests (though does not prove) that sampled AITA posters are not one of these models. I guess it’s possible they have a very different prompt though…
Is it the _average_ redditor? The most upvoted would be even worse.
I would say people on /r/amitheasshole are more biased towards the poster, i.e. nicer.
There's plenty of those I've read where I thought it sounded like the poster was the asshole and the top replies were NTA.
r/AmItheAsshole is biased towards breaking off relationships rather than fixing them. They also hate social obligations.
e.g. If the OP is asking "I ghosted my friend in AA who insulted me during a relapse", Reddit would say NTA in a heartbeat, while the real world would tell OP to be more forgiving.
Conversely, if the post were "the other kids at school refuse to play with my child", Reddit would say YTA, because the child must've done something to incite being cut off.
Absolutely. I wonder how many parents have been put on no-contact, SOs broken up with, and friendships ended because of the Reddit hivemind's attitude. Pretty sure it's doing a huge amount of societal damage.
I wouldn't blame reddit, it's what you get when you ask several thousand teenagers to give collective relationship advice.
“I got divorced based on advice from complete strangers on the internet, AITA?”
Yeah every single time I click on one of those posts the top comments are NTA. A couple times I tried randomly opening a few dozen posts and checking the top comments to see if I could find a single YTA and struck out.
Granted, many of the posts are heavily biased in the poster's favor. Most I've read fall into one of two buckets: either they want to gripe about some obviously bad behavior, or it's a contrived and likely fake story.
It’s gendered, by the way
Many of the posts are A/B tests of a prior post, where only the genders of the OP and the antagonist were flipped, to see how the consensus flips too.
Are you saying there isn’t an actual sycophancy problem?
We are talking about overall patterns here, not the experience of a small subset of skilled and careful users.
>Obviously subservient people default to being yes-men because of the power structure. No one wants to question the boss too strongly.
This drives me nuts as a leader. There are times where yes, please just listen, and if this is one of those times, I'll likely tell you, but goddamnit, speak up. If for no other reason I might not have thought of what you've got to say. Then again, I also understand most boss types aren't like me, thus everyone ends up conditioned to not bloody collaborate by the time they get to me. It's a bad sitch all the way around.
Indeed. I directly ask my reports to discover and surface conflicts, especially disagreements with me, and when they do I try to strongly reinforce the behavior by commending and rewarding them. Could anyone recommend additional resources on this topic?
Simon Sinek has a lot of good content around this. Step one is building trust. People won’t speak up if they don’t feel safe doing so.
What's your research background in this area?
Not only that, but subreddits like r/AmITheAsshole are full of AI slop. Both in the comments and in the posts. It's a huge karma mining operation for bots.
That can be solved by filtering out any posts made after November 2022.
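A minimal sketch of that cutoff filter, assuming Reddit-API-style dicts with a `created_utc` epoch timestamp (the field name and the exact cutoff date, ChatGPT's public launch, are my assumptions):

```python
from datetime import datetime, timezone

# Cutoff: ChatGPT's public launch (November 30, 2022). Posts after this
# date are more likely to contain LLM-generated text.
CUTOFF = datetime(2022, 11, 30, tzinfo=timezone.utc).timestamp()

def pre_llm_posts(posts):
    # Keep only posts created before the cutoff. Assumes each post is a
    # Reddit-API-style dict with a "created_utc" epoch-seconds field.
    return [p for p in posts if p["created_utc"] < CUTOFF]

posts = [
    {"id": "a1", "created_utc": datetime(2021, 6, 1, tzinfo=timezone.utc).timestamp()},
    {"id": "b2", "created_utc": datetime(2023, 2, 1, tzinfo=timezone.utc).timestamp()},
]
print([p["id"] for p in pre_llm_posts(posts)])  # ['a1']
```

Of course, this only rules out LLM-written posts, not the selection bias in who answers.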
That's not a good solution. We don't use medical textbooks from 20 years ago.
Strangers from the internet, bot or otherwise, are not your mental coach.
This is sort of funny. Given how common it is to spot bots on Reddit now, it seems likely they will completely overwhelm the site and drive away most of the actual humans.
At which point the bots, with all of their karma will be basically worthless.
Kind of extra funny/sad that Reddit’s primary source of income in the past few years appears to be selling training data to AI labs, to train the models that are powering the bots.
> At which point the bots, with all of their karma will be basically worthless.
Not really; it will still be valuable for influence campaigns. A lot of people can't tell when there's a bot on the other side. Hell, a lot of the time, I can't tell either.
The upvotes ultimately train the bots, reinforcing the content posted. Even the most passive form of interaction has been co-opted for AI.
Plus, there's the disproportionate ratio of posters to commenters to lurkers. The tendency to comment rather than keep one's thoughts to oneself is a selection bias in and of itself.
> This needs to be studied against people in real life who have a social contract of some sort... IME, LLMs will shoot holes in your ideas and it will efficiently do so.
The Krafton / Subnautica 2 lawsuit paints a very different picture. "Ignored legal advice" and "followed the LLM" was a choice there. Do you think someone whose conversations treat "conviction" and "feelings" as the arbiters of choice is going to buy into the LLM's pushback, or just push it until it gives the contrived outcome they want?
The LLM lacks will, it's more or less a debate team member and can be pushed into arguing any stance you want it to take.