> Goomba is assuming contradictions are coming from the same person, presumably b/c it's coming to the Goomba through the same app.

It's because it comes from the same political faction. In general, people are open about A when A seems palatable, and open about B when B seems palatable, but they almost never admit to doing that when it's obviously wrong to do so.

That is the rational part of the fallacy: even if these are different sets of people, you can still tell they are biased, since they never appear in the threads where it's obvious they are in the wrong.

For example, let's say in a thread where a white cop shoots a black guy, you find a lot of Republicans saying "this is just statistics, nothing to see here". Then in another thread where a black cop shoots a white guy, Republicans pour in and argue this must be racism and we should investigate! Maybe it isn't the same set of people, but it's still a strong sign of problematic bias that they only choose to speak up in those particular threads and not the others.

Every political side everywhere does this, and that is why people started calling it out.

This is just the fallacy. Political groups are coalitions, not monoliths.

[deleted]

Is it true that they don't appear in the threads where you feel it's obvious they're in the wrong? Or do they just get upvoted less in those threads so you don't see it when they do appear?

In general, hypocrisy is a pretty weak argument. It's an annoying personality trait, but consistency is a thing humans often fail at, and humans failing at holding consistent opinions is a failure of those humans, not the claims they're making. It's not quite as weak as the more non-sequitur kind of ad-hominem attack, because it does at least pertain to the argument being made, and kind of resembles a logical contradiction if you squint, but it seldom does a good job addressing the merits of the argument, rather than the arguer. It's a successful political tactic for the same reason ad hominem arguments in general are, of course, especially in the context of representative forms of government, where the person's character or competence is relevant when they're running for an office. Much less so in contexts where the merits of a position are being debated in abstract.

I think it's very silly to make the argument that "groupwise hypocrisy" is not a fallacy in such a conversation. In politics, the reality is that people have to form coalitions with people with whom they don't agree on everything, and non-political groupings are even more nonsensical, often holding people responsible for the opinions of other people who happen to share things like inborn characteristics. It's especially ridiculous to explain this with the idea that people are engaging in some kind of elaborate coordination to argue with you on the internet. Yes, some people, and indeed political parties, engage in that kind of behavior, and if you think you're arguing with something like a botnet, there are larger considerations to make about what you gain as an individual by trying to engage with such a machine at all. If I believe I'm arguing about the merits of an idea with an actual person, and I find myself reaching for something like "your group is collectively hypocritical on this issue" to make my argument, this is cause to reflect on whether I actually have any real arguments for my position, as that one is... well, essentially meaningless.

I think you're trying to invoke what's commonly called a "motte-and-bailey" argument, where people argue for a maximally-defensible position when faced with serious criticism, but act as though they're proving a much less defensible version of their argument, often including a nebula of related ideas, in other contexts. This is something individuals and coordinated factions absolutely do, but again doesn't really support treating any grouping you want to draw of some kind of collective hypocrisy. Even assuming we care about hypocrisy, it seems like this kind of reasoning about nebulous groups that don't explicitly coordinate would allow making that argument about any position in any context, depending on how you draw the boundaries of the group that day. It's well-understood that you can go on the internet and find someone who believes just about any crazy thing you can think of, or find someone who makes the argument for any position poorly.

Are you a very smart writer, or are you using smart tools? I'm not sure.

Wouldn't know if I'm a smart writer, but I see little value in writing with a model, if that's what you're asking. Language models are good for searching, getting alright at structured outputs like code, and trash at meaningfully expressing my thoughts in prose. Frankly, it concerns me that people think vomiting their thoughts onto the internet could possibly benefit from computational assistance.

>Frankly, it concerns me that people think vomiting their thoughts onto the internet could possibly benefit from computational assistance

It concerns you because you have a good command of language, and like most people with those skills, I presume the language flows easily from you.

That ability isn't guaranteed. For a lot of people, expression is tough, and those people feel equally alienated when confronted with an essay of word salad about why their opinion is wrong.

An LLM is a tool. In the 90s I would read columns and editorials about the disgusting faux pas of replying to a wedding invitation via such a cheap, trendy medium as internet e-mail; now you receive death certificates that way.

It's not all bad: simpletons can use LLMs to have critical essays turned into five-word ELI5 statements that they can become enraged over once all the nuance is stripped. That's fun!

Sure, it's a tool, but I don't think that's a particularly compelling use of it. Like, I can at least see an endpoint of slop code where the right guardrails and model improvements create a means by which people can ask their computers to do things in natural language, and semantic search is genuinely a novel and powerful capability. Maybe we even get other nice translation protocols to structured forms of language. But in a context where the premise is that we're trying to communicate with other humans, using a model that generates plausible prose is a mechanism that obfuscates rather than clarifies. I don't think it's fit for that purpose any more than a hammer makes a good screwdriver. If it helps you to bounce your ideas off an LLM, by all means do so, but this will mostly just serve to homogenize the writing of everyone doing that. Possibly of value to some people, but not to me.

It's almost as if factions are made up of different people with different opinions in a loose alliance.

But nah, clearly they're all goombas.

This is exactly it. You see it on HN all the time. You'll be debating someone. Then, deep in the thread, a second person appears with a gotcha. When you point out that the gotcha doesn't fit with the prior argument, they point out that it was a separate person. They know damn well what they're doing with their little conniving deflection fuck-fuck game. They're acting for the same surrogate argument. The Goomba is real, and the people playing the game are just too cowardly to be two-faced themselves, so they act two-faced through a surrogate and deflect to the surrogate when it's pointed out.

You, uh, you alright there my dude?