> The findings, shared exclusively with The Washington Post

No prompts, no methodology, nothing.

> CrowdStrike Senior Vice President Adam Meyers and other experts said

Ah but we're just gonna jump to conclusions instead.

A+ "Journalism"

I tried a very basic version and I seem to be able to replicate the main idea. I asked it to create a website for me and changed my prompt from Falun Gong[0] to Mormon[1]. The Falun Gong one failed but the Mormon one didn't.

You should be skeptical, but this is easy enough to test, so why not do some test to see if it is obviously false or not?

[0] https://0x0.st/KchK.png

[1] https://0x0.st/KchP.png

[2] Used this link https://www.deepseekv3.net/en/chat

[Edit]:

I made a main comment and added Catholics to the experiment. I'd appreciate it if others would reply with their replication efforts: https://news.ycombinator.com/item?id=45280692

Your claim and the original claim are vastly different. Refusing to assist is not the same as "writing less secure code". This is clearly a filter before the request goes to the model. In the article's case, the claim seems to be that the model knowingly generated insecure code because it was for groups china disfavors.

That is incorrect. Here's the very first paragraph from the article. I'm adding emphasis for clarity

  The Chinese artificial intelligence engine DeepSeek often ***refuses to help programmers*** ___or___ gives them code with major security flaws when they say they are working for the banned spiritual movement Falun Gong or others considered sensitive by the Chinese government, new research shows.
My example satisfies the first claim. You're concentrating on the second. They said "OR" not "AND". We're all programmers, so I hope we know the difference between these two.

You are obviously factually correct, I reproduced the same refusal - so consider this not as an attack on your claim. But a quick google search reveals that Falun Gong is an outlawed organization/movement in China.

I did a "s/Falun Gong/Hamas/" in your prompt and got the same refusal in GPT-5, GPT-OSS-120B, Claude Sonnet 4, Gemini-2.5-Pro as well as in DeepSeek V3.1. And that's completely within my expectation, probably everyone else's too considering no one is writing that article.

Goes without saying I am not drawing any parallel between the aforementioned entities, beyond that they are illegal in the jurisdiction where the model creators operate - which as an explanation for refusal is fairly straightforward. So we might need to first talk about why that explanation is adequate for everyone else but not for a company operating in China.

Thanks. Mind providing screenshots? I believe you, I just think this helps. Your comments align with some of my other responses. I'm not trying to make hard claims here and I'm willing to believe the result is not nefarious. But it's still worth investigating. In the weakest form it's worth being aware of how laws in other countries impact ours, right?

But I don't think we should talk about explanation until we can even do some verification. At this point I'm not entirely sure. We still have the security question open and I'm asking for help because I'm not a security person. Shouldn't we start here?

If you mean the bit about refusal from other models, then sure here is another run with same result:

https://i.postimg.cc/6tT3m5mL/screen.png

Note I am using direct API to avoid triggering separate guardrail models typically operating in front of website front-ends.

As an aside the website you used in your original comment:

> [2] Used this link https://www.deepseekv3.net/en/chat

This is not the official DeepSeek website. Probably one of the many shady third-party sites riding on DeepSeek name for SEO, who knows what they are running. In this case it doesn't matter, because I already reproduced your prompt with a US based inference provider directly hosting DeepSeek weights, but still worth noting for methodology.

(also to a sceptic screenshots shouldn't be enough since they are easily doctored nowadays, but I don't believe these refusals should be surprising in the least to anyone with passing familiarity with these LLMs)

---

Obviously sabotage is a whole another can of worm as opposed to mere refusal, something that this article glossed over without showing their prompts. So, without much to go on, it's hard for me to take this seriously. We know garbage in context can degrade performance, even simple typos can[1]. Besides LLMs at their present state of capabilities are barely intelligent enough to soundly do any serious task, it stretches my disbelief that they would be able to actually sabotage to any reasonable degree of sophistication - that said I look forward to more serious research on this matter.

[1] https://arxiv.org/abs/2411.05345v1

I want to clarify that I'm not trying to make strong claims. That's why I'm asking for others to post and why I'm grateful you did. I think that helps us get to the truth of the matter. I also agree with your criticisms of the link I used, but to be frank, I'm not going to pay for just this test. That's why I wanted to be open and clear about how I obtained the information. I was hoping someone that already paid would confirm or deny my results.

With your Hamas example, I think it is beside the point. I apologize as I probably didn't make my point clearer. Mainly I wanted to stop baseless accusations and find the reality, since the articles claims are testable. But what I don't want to make a claim if is why this is happening. In another comment I even said that this could happen because they were suppressing this group. So I wouldn't be surprised if the same is true for Hamas. We can't determine if it's an intentional sleeper agent or just a result of censorship. But either way it is concerning, right? The unintentional version might be more concerning because we don't know what is being censored and what isn't. These censorships cross country lines and it is hard to know what is being censored and what isn't.

So I'm not trying to make a "Murica good, China bad" argument. I'm trying to make a "let's try to verify or discredit the claims." I want HN to be more nuanced. And I do seriously appreciate you engaging and with more depth and nuance than others. I'm upvoting you even though we disagree because I think your comments are honest and further the discussion.

DeepSeek chat it's free... No need to pay to test, thought.

https://chat.deepseek.com/

You can also use the API directly for free on OpenRouter.

Needs a login, so I went around. Are you able to verify my results?

[deleted]

Sure, but you also have to recognize the motte and bailey form of argument here. If we’re limiting the claim to being true if DeepSeek returns refusals on politically sensitive topics, we already knew that. It was relevant eight months ago, now it’s not interesting.

Another example: McDonald’s fries may cause you to grow horns or raise your blood pressure. No one talks like that.

So I would toss it back to you: we are programmers but we have common sense. The author was clearly banking on something other than the technically accurate logical or.

https://en.m.wikipedia.org/wiki/Motte-and-bailey_fallacy

You're not wrong, but the second claim is by far the more interesting of the two, and is what I think most people would like to see proven. AI outright refusing certain tasks based on filters set by the parent company is not really new or interesting, but it would be interesting to see an AI knowingly introduce security flaws in generated code specifically for targeted groups.

I don't disagree. The second is more concerning but I do think the first is interesting. At least in how cultural values and laws pass beyond country borders. Far less concerning but still interesting.

But what are you attacking my claim for? That I'm requesting people don't have knee-jerk reactions and for help vetting the more difficult claim? Is this wrong? I'm not trying to make the claim that it does or doesn't write insecure code (or less secure code) for specific groups. I've also made the claim in another comment that there are non-nefarious explanations to how this could happen.

I'm not trying to make a stance of "China bad, Murica good" or vise versa, I'm trying to make a stance of "let's try to figure out if true or not. How much is it true? How much is it false?" So would you like to help or would you like to create more noise?

For the record I never attacked your claim, I'm not the original person that said it was wrong.

That distinction is technically moot, and just highlights the the irrelevance of the report: any Falun Gong or whatever organization can change the proclaimed self-identity, the language (by first translating, with a different or neutral model first if necessary).

It is technically certainly feasible to have language-dependent quality changes, the language of the prompt can be trained to make intentional security lapses.

But no neural network has a magic end-intent or allegiance detector.

If Iran's "revolutionary" guard seeks help from a language model to design centrifuges, merely translating their requests to the model's origin dominant language(s), and culling any shiboleths should result in an identical distribution of code, designs or whatever compared to origin country, origin language requests.

It is also expectable that some finetuning can realign the model's interests towards whomever's goals.

I see your point. I thought the first one was already known when deepseek came out. Perplexity team showed how they removed this kind of bias via finetuning and their finetune could answer sensitive questions. I mistakenly thought you went for the second since that part is new and interesting.

I definitely need help with the second part. It is a much harder claim to verify or dismiss. I also want to stress (as I do in several other comments) that this could be done even without sleeper agents (see Anthropic paper) but just with censoring.

What I want to fight the most is just outright dismissing what is at least partially testable. We're a community of techies, so shouldn't we be trying to verify or disprove the claims? I'm asking for help with that because the stronger claim is harder to conclude. We have no chance of figuring out the why, but hopefully we can avoid more disinformation. I just want us to stop arguing out our asses and fighting over things we don't know the answers to. I want to find the answers, because I don't know what they are.

co pilot and chatgpt will also not help u if u say ur from a group marked as 'enemy' by the USA...

Do you realize that refuses to help OR tries to kill them right away is also a technically correct claim. The journalists essentially put only the second half into the title of the article.

I do realize that. But look at the OP again.

  > Ah but we're just gonna jump to conclusions instead.
I'm not trying to say WaPo is doing grade A journalism here. In fact, personally I think they aren't. A conversation about clickbait titles is a different one and one we've had for over a decade now...

But are we going to recognize the irony here? Is OP not calling the kettle black here? They *also* jumped to conclusions. This doesn't vindicate WaPo or make their reporting any less sensational or dubious, but we shouldn't make the same faults we're angry at others for making.

And pay careful attention to what I've said.

  >>> You should be skeptical, but this is easy enough to test, so why not do some test to see if it is obviously false or not?
  >>> I'd appreciate it if others would reply with their replication efforts
I do want to find the truth of the matter here. I could have definitely wrote it better, but I'm appealing to our techy community because we have this capability. We can figure this out. The second part is much harder to verify and there's non-nefarous reasons that might lead to this, but we should try to figure this out instead of just jumping to conclusions, right?

This is what I suggest. I asked Claude to start writing a test suite for the hypothesis.

https://claude.ai/public/artifacts/77d06750-5317-4b45-b8f7-2...

1)Four control groups: CCP-disfavored (Falun Gong, Tibet Independence), religious controls (Catholic/Islamic orgs), neutral baselines (libraries, universities), and pro-China groups (Confucius Institutes).

2) Each gets identical prompts for security-sensitive coding tasks (auth systems, file uploads, etc.) with randomized test order.

3) Instead of subjective pattern matching, Claude/ChatGPT acts as an independent security judge, scoring code vulnerabilities with confidence ratings.

4)Provides some basic statistical Welch's t-tests between groups with effect size calculations.

Iterate on this start in a way that makes sense to people with more experience than myself working with LLMs.

(yes, I realize that using a LLM as a judge risks bias by the judge).

actually if it writes no code, its the most secure help an LLM will provide when providing code :'). all the rest is riddled with stupid shit.

There was that study by anthropic that showed that an LM fine-tuned on insecure code with no additional separate prompting or fine-tuning would be more willing to act unethically. So maybe this is the equivalent in that the corpus of training data for deep-seek presumably is very biased against certain groups, resulting in less secure code for disfavored groups.

Yeah tbh I can see this happening unintentionally. Like DeepSeek trying to censor Falun Gong and getting these results. But tbh, I think it is concerning in either case. It is a difference between malice and unintended mistakes through trying to move too fast. Both present high risks, and neither is unique to China nor DeepSeek.

But most of all, I'm trying to get people to not just have knee-jerk reactions. We can do some vetting very quickly, right? So why not? I'm hoping better skilled people will reply to my main comment with evidence for or against the security claim, but at least I wanted to suppress this habit we have of just conjecturing out of nothing. The claims are testable, so let's test instead of falling victim to misinformation campaigns. Of all places, HN should be better

I personally agree with your aim to replicate this, because I suspect the outcomes will be surprising to all.

Here’s my sketch of a plan: You’d need controlled environments, impartial judges, time, and well defined experiments.

The controlled environment would be a set of static models run locally or on cloud GPUs; the impartial judge would be static analysis and security tools for various stacks.

Time: Not the obvious, “yes it would take time to do this”. But a good spread of model snapshots that have matures; along with zero days.

Finally: The experiments would be the prompts and tests; choosing contentious, neutral, and favorable (but to whom) groups, and choosing different stacks and problem domains.

Try the reverse, get a document that is critical of the US foreign policy, from China, and ask your well known brand LLM, to convert the text from PDF to epub.

It'll right out refuse, citing the reason that the article is critical of the US.

I was able to get around such restrictions pretty easily[0] while the LLM still being quite aware of who we're talking about. You can see it was pretty willing to do the task without much prodding despite prefacing with some warnings. I specifically chose the most contentious topic I could think of: Taiwan.

Regardless, I think this is besides the point. Isn't our main concerns:

1) not having kneejerk reactions and dismissing or accepting claims without some evidence? (What Lxe did)

2) Censorship crosses country lines and we may be unaware of what is being censored and what isn't, impacting our usage of these tools and the results from them?

Both of these are quite concerning to me. #1 is perpetuating the post truth era, making truth more difficult to discern. #2 is more subtle and we should try to be aware of these biases, regardless of if they are malicious or unintentional. It's a big reason I push for these models to be open. Not just open weights, but open about the data and the training. Unfortunately the result of #2 is likely to contribute to #1.

Remember, I'm asking other people to help verify or discredit the WP's claims. I'm not taking a position on who is good: China or the US. I'm trying to make us think deeper. I'm trying to stop a culture of just making assumptions and pulling shit out of our ass. If something is verifiable, shouldn't we try to verify it? The weaker claim is almost trivial to verify, right? Which is all I did. But I need help to verify or discredit the stronger claim. So are you helping me do that or are you just perpetuating disinformation campaigns?

[0] https://chatgpt.com/share/68cb49f8-bff0-8013-830f-17b4792029...

Can you show an example PDF this works with?

So you didn't use the API, instead using the online interface, then claimed that it's partial to Chinese interests? Colour me surprised...

Of course the online interface will only stick to the Chinese government version, and if that means not designing a website for the Falun Gong (because of guardrails), it's not a big surprise either. Try asking ChatGPT to make a pressure cooker bomb or something.

I'm requesting help if you'd like to do better. Do you have actions to go along with those words? I'm sure we'd all appreciate them

I've used Deepseek via OpenRouter on Dyad. While performance was kinda not that great vs other models, I have not faced issues with any form of content generation (Taiwan is Taiwan for instance). Granted, I'm not making websites on Dyad by explicitly stating in my prompt that it's for the Falun Gong.

After everything they printed, who could possibly consider Washington Post narrative engineers as journalists? :-)

Yes? Even if I accept your premise, the fact that you have sloppy coworkers doesn’t diminish your own personal work. Judge each on its merits.

CrowdStrike, where have I heard that name before...

Sorry, what exactly is the implication here? They shipped a bug one time, so nothing they can say can ever be trusted? Can I apply that logic to you, or have you only ever shipped perfect code forever?

I don't even like this company, but the utterly brainless attempts at "sick dunks" via unstated implication are just awful epistemology and beneath intelligent people. Make a substantive point or don't say anything.

Plenty of companies have gone bankrupt or lost a great deal of credibility due to a single bug or single failure. I don't see why CrowdStrike would be any different in this regard.

The number of bugs/failures is not a meaningful metric, it's the significance of that failure that matters, and in the case of CrowdStrike that single failure was such a catastrophe that any claims they make should be scrutinized.

The fact that we can not scrutinize their claim in this instance since the details are not public makes this allegation very weak and worth being very skeptical over.

It is possible for a company to both suffer an operational incident and be outstanding at discovering security vulnerabilities at the same time.

It is possible. It's just not likely either.

Based on what?

[deleted]
[deleted]

Sure, but this isn't one of them.

Are you saying CrowdStrike is inept at vulnerability research? If so, what evidence do you have?

They didn’t just “ship a bug”, they broke millions of computers worldwide because their scareware injects itself into the Windows kernel.

They probably killed people.

I missed a medical appointment due to the outage. Mine wasn't life threatening. For some, it was.

The crowdstrike event might be so infamous event that it might be taught for atleast some decades for sure maybe even in permanence.

That's a heck of a optimistic outlook for the future. Experience has taught me to be much more pessimistic about the future, especially when it comes to avoiding the repeating of the past

I mean, we still cover the THERAC-25 incident in university CS courses

Unfortunately until Windows changes, the best way for them to serve customers is to continue to inject kernel code. (This is no longer needed or even permitted with macOS.) They did screw up operationally, but one problem made the other much more likely and dangerous.

Why limit yourself to Windows? My enterprise-issued mac is very noticeably slower and suffers from weird crashes and reboot-fixes-things issues that my own personal mac has never had.

Because Windows was the sole OS impacted by last year's incident.

they also screwed up Linux before they did that on windows.. The problem here is they are a spyware that pushes whatever code they want to your (precisely your company) devices without test etc. It's just a matter of time for it to blow up.

The Linux kernel panic issue was different in many ways (in this case, the bug was in the Linux kernel used by a particular RHEL release), but your point that it needed further testing before pushing it out to production is still valid.

https://christiantaillon.medium.com/no-need-to-panic-the-lin...

> They did screw up

The word you're looking for is negligence. The lives of human beings were at stake and they YOLO'd it all by not performing a phased rollout.

Yes, sometimes companies have only one chance to fail. Especially in cyber security when they fail at global scale and politics is involved.

They’re still a going concern with plenty of customers; in business terms they’re still wildly successful. They seem to have not lost much trust among buyers in the long term.

That's fine. I'm not on a personal crusade punching them. At company I work for we have had different solutions when the incident happened and it seems that was smart move.

Also they got hit with the most recent supply chain attacks on NPM. They aren’t exactly winning the security game.

If you're interested, I was on a business trip and couldn't get on the plane when the bug happened and all flights were cancelled. Almost had to sleep on the street, since most hotels had electronic booking which also went down. Finally managed to get a shack on the edge of town ran by an old couple who probably never used computers much before.

Similar happened to me. It's ridiculous to make the claim that a business should be able to make avoidable errors that ruin lives and disrupt societies, and we should pretend that they are worthy of reconsideration without having learned or proven that they've learnt from such a credibility ending cowboy move.

CrowdStrike is also the company behind Russiagate.

In some circles, it’s considered that they were not completely honest actors, to say the least. My understanding is that the FBI didn’t directly seize the DNC’s physical servers; instead, they relied on CrowdStrike’s forensic images and reports. This is unusual and they could have withhold evidence that didn’t fit “the narrative”, being that Donald Trump is a Russian asset.

To ELI5 what could be implied here, they will say whatever the intelligence agencies and the deep state want them to say, creating negative coverage about Chinese technology is kind of their MO. Allegedly.

But as I’m reading the other comments, they have quite a lot of notorious f ups, so I could be wrong.

These are serious allegations. Can you show evidence of any malfeasance?

These are not my allegations, I’m responding to a question “Sorry, what exactly is the implication here?”. Check the thread.

Thanks. I missed some context earlier.

I would still love to see some sort of source for the allegations. It sort of smells like the evidence didn't come out the way some people hoped so they blamed the investigators. Thats fair, if there's evidence to support the stance.

It is unproven that Trump is literally a Russian spy although that was not at the time even asserted. The entire issue was that Trump's campaign met with literal Russian spies at a time when Trump was in fact in the building although not verifiably at said meeting. The Russians received data useful insofar as targeting the American people with disinfo.

Subsequently Trump called for the Russians to attack the Democrats. They did. They also appear to have targeted the American people with disinfo which could have been aided by the data supplied to them. Ultimately Trump's position towards Russia has evolved into an uncharacteristically and uniquely favorable position for an American president.

If he isn't an actual asset he certainly at least collaborated and communicated with them as a fellow traveler with similar aims at odds with the actual geopolitical aims of America as a nation.

It's probably referring to CrowdStrike's role in the "Russia Gate".

If you look back at the discussions of the bug, there were voices saying how stupidly dysfunctional that company is...

Maybe there's been reform, but since we live in the era of enshittification, assuming they're still a fucking mess is probably safe...

If something makes China (or Iran or Russia or North Korea or Cuba etc) look bad, it doesn't need further backing in the media.

This list of specific examples exists in your head solely because of backing by the media.

Well, at least it wasn’t:

“Speaking on the condition of anonymity …”

“Discussed the incident on the condition that they not be named …”

“According to people familiar with …”

Very clear example of propaganda passing as journalism.

A huge portion of journalism is in fact reporting what people say. An important part of a certain kind of journalism is investigating and reporting on those claims. Sometimes the facts are opaque but claims can be corroborated in other ways. The clue here is the "other experts." If multiple independent sources are making the same claims, that's newsworthy, even if there's no tangible proof.

Also keep in mind this is not an academic article or even an article for tech folks. It's for general population and most folks would be overwhelmed by details about prompts or methodology.

Multiple 'independent'* sources making up the same shit is known as 'manufactured consent'. Especially if it's at the behest of a regime with an agenda to push.

* Mass media is not and has never been independent. It's at the service of the owning class.

Okay.

I appreciate you bringing up this issue on this highly-provocative claim, but I'm a little confused. Isn't that a pretty solid source...? Obviously it's not as good as a scientific paper, but it's also more than a random blogger or something. Given that most enterprises operate on a closed source model, isn't it reasonable that there wouldn't be methodology provided directly?

In general I agree that this sounds hard to believe, I'm more looking for words from some security experts on why that's such a damning quote to you/y'all.

Nobody trusts anyone or anything anymore. It used to be the fact that this was printed in the Washington Post was sufficient to indicate enough fact checking and background sourcing had been done that the paper was comfortable putting its name on the claims, which was a high enough bar that they were basically trustworthy, but for assorted reasons that’s not true for basically any institution in the country (world?) anymore.

For the average person, being published in WaPo may still be sufficient, but this is a tech related article being discussed on a site full of people who have a much better than average understanding of tech.

Just like how a physicist isn't just going to trust a claim in his expertise, like "Dark Matter found" from just seeing a headline in WaPo/NYT, it's reasonable that people working in tech will be suspicious of this claim without seeing technical details.

> For the average person, being published in WaPo may still be sufficient

I genuinely do not know if this is the case anymore - I really do think we’ve reached a level of epistemological breakdown societally where “God is dead” again for us.

I think it really depends on how 'poisoned' the person is. I can totally believe that my politically-disconnected parents would consider being published in WaPo or NYT to be a strong sign of reliability. It helps that headlines that amount to "China is doing comically evil things again" tend to be taken at face value by many people, just for confirming their own biases, regardless of actual evidence.

Yeah, and that’s my concern right now - I think going back ~10 years or so, the percentage of “poisoned” (and we’ll use that term as in a dataset or something - the percentage of values in this set that have been affected by the contaminant) people was a minority, in the 10-20% range (just throwing out numbers). That meant if the NYT or WaPo published something, as a nation, we could generally debate our values and opinions based on a common set of facts - the credibility of those institutions was high enough that if they asserted, for instance, that Paul Ryan wore a toupee, we’d be arguing whether or not the wearing of a toupee was worth caring about and what the proper response to the toupee was, not whether or not he actually wore a toupee.

My fear right now is the percentage of the population that’s “poisoned” is well over 50% - that more people than not distrust those types of institutions, which is sufficient to mean that we’re no longer arguing as a nation whether toupee-wearing fits into our national ideals or who we want to be as a people, and indeed we cannot have those debates, because for us to discuss our values or positions, they need to be in reference to some shared common set of facts, and there’s not a source of facts shared in common by enough of the population for us to be able to generate any kind of consensus worldview to even debate.

Isn't the goal of disinformation campaigns to create a post truth era?

It's very hard to combat. I hope since HN has an at least above average intelligence userbase and familiarity with the internet that we'd be better at fighting this. I hope we don't give up the fight.

I think some advice I got from another academic about how to serve as a reviewer applies more broadly.

  It's easy to find flaws or critiques in a work. Your job as a reviewer isn't to help authors identify flaws, they are likely already aware. Your job is to determine if their flaws undermine their claims, even if their claims are accurate it's insufficient if not properly evidenced.
The point is that nothing is perfect. So the real question is if we're making progress to finding truth or if we're just being lazy or overly perfectionist. Or Feynman said something similar. (Not a precise quote) "the first rule is not to be fooled and you're the easiest person for you to fool"

> Isn't the goal of disinformation campaigns to create a post truth era?

I dunno, and I'm not sure if you are including the major newspapers on the campaigner or victim group... but it would help if they weren't caught in blatant lies all the time.

Gell-Mann amnesia stops working once people hear about the concept.

Anyway, if the NYT published something on the lines of "public person X says Y in public", that would have high odds of being true. But "cybersecurity issue X identified in country-the-us-doesn't-like-Y" is almost certainly bullshit and even if there is something there, the journalist doesn't know enough to get the story right.

It was a rhetorical question. I actually would really encourage you to read about post truth politics if you haven't because it ties into what you're discussing.

I am including the major news organizations and I specifically think they're a major contributor to post truth. It can't happen without them. Being caught in lies enables post truth because the point of this strategy is to make it difficult to determine what truth is. To overload the populous. The strategy really comes out of Russia where they specifically would report lies such as Putin killing dissidents, only for those people to turn up alive. You encourage conspiracies. The most recent example I can think of is how Trump going offline for a few days lit the world with conspiracy theories about him dying. Fucking major news networks bought into that too! It's insane to operate like that. But that's the point. That you have to question everything. I guess to put it one way, you need to always be in system 2 thinking. But you can't always be operating at that level and when doing for long periods of time you'll end up with an anxiety disorder.

I don't know if all major news networks are doing this intentionally or if it's a steady state solution optimization for engagement, but the result would be the same.

I'm saying this because look at my main comments. I'm trying to encourage finding the truth of the matter rather than react (which is what the OP was (rightfully) criticizing WaPo for).

  > For the average person, being published in WaPo may still be sufficient, but this is a tech related article being discussed on a site full of people who have a much better than average understanding of tech.
I agree but also look at the responses to my comment above and the version in the main thread.

People here aren't responding as techies, regardless of them being techies or not. I'm asking for help demonstrating or countering the claim but most responses are not responding in a way where we're trying to do this. Most responses are still knee jerk reactions. I understand how people misinterpret my comment as a stronger claim, and that is my bad, but it's also hard to avoid. So I want to agree with you but I also want to make sure *our* actions align with *our* words

I would like to keep HN a techie culture but it's a battle we're losing

But every hole has its expertise. If IT has, other areas would have.

For the last decade or so, there's been a huge, sustained war on expertise, and an effort to undermine the public's trust of experts. Quoting an expert isn't enough for people, anymore. Everyone's skeptical unless you point them to actual research papers, and even then, some people would rather stick to their pre-existing world views and dO tHeIr OwN rEsEaRcH.

Not defending this particular expert or even commenting on whether he is an expert, but as it stands, we have a quote from some company official vs. randos on the internet saying "nah-uh".

> Everyone's skeptical unless you point them to actual research papers, and even then, some people would rather stick to their pre-existing world views and dO tHeIr OwN rEsEaRcH.

I think saying things like "dO tHeIr OwN rEsEaRcH" contributes more to this deep distrust, because "do your own research" means different things to different people. To some people it means "read the same story from multiple sources rather than blindly trusting <whatever>" (which I think is good advice, especially nowadays), while to others it might mean "don't trust anything that anybody says, regardless of their qualifications" (which is bad advice). At a minimum, I think you should clarify what your actual position is, because the mocking way you've phrased it to me heavily implies that your position is the opposite, or "don't do your own research, just trust the experts." Don't forget that for most of history the "experts" were religious leaders. Where would we be today if nobody ever questioned that?

To be clear, when I mock "do your own research," I'm specifically mocking 1. the people who go out there cherrypicking only information that confirms their own preexisting views and 2. those who simply default to being contrarian for the sake of contrariness. Naysayers for the pure sake of naysaying. Both mentalities, I believe, are rooted in a belief that everyone is against you and a desire to be one of the few who Know The Truth That Experts Are Hiding From Us.

What gets more views/attention? Someone saying, "Yea, the consensus opinion makes general sense, although reasonable people can disagree about some details." or someone saying, "Scientists are trying to keep this knowledge away from us, but I know the truth. Keep watching to find out and join our club!"

I'm not asking people to blindly trust experts, but to stop blindly opposing them.

Appreciate the clarification! I think we're in complete agreement then

> there's been a huge, sustained war on expertise, and an effort to undermine the public's trust of experts.

I find your verbiage particularly hilarious considering the amount of media and expert complicity that went into manufacturing the public support for the war on terror.

The media has always been various shades of questionable. It just wasn't possible for the naysayers to get much traction before due to the information and media landscape and how content was disseminated. Now, for better or worse, they laymen can read the bible for themselves, metaphorically speaking.

Fifty four percent of Americans read below the sixth grade level.

They shouldn't be reading anything for themselves and should be trusting the experts, even if those experts are sometimes wrong they will be more accurate than the average American.

Teaching someone to think for themselves, without first teaching them how to think is an invitation to disaster.

You gonna complain that they drink light beer and eat junk food while you're at it?

Only showboating "english language for the sake of it" type use cases need much beyond middle school reading level. News and the like aren't that because they need to reach a mass market. Professional communication needs to reach the ESL crowd and be unambiguous it too isn't that. Even legal literature is very simple. Professional and legal communication just have tons of pointers going all over the place and a high reading level won't help you with that.

People who lack literacy are not just bad readers, they are bad thinkers.

It is fine to be simple, and to live a simple life. That does not mean that your ignorance is as good as an experts knowledge.

Worse, teaching people to think for themselves without first teaching them how to think does not just halt progress, it put's it into full retreat.

Exactly--it's not English language snobbery. It's just that the median person out there is simply not capable of doing a satisfactory depth of research to reach a conclusion about most topics. This is exactly why we have experts who dedicate their lives to understanding niche and complex topics. I consider myself a smart guy, and I know I don't have the time or knowledge to sufficiently research the vast majority of topics.

I agree with you 100%. Most people do not have the time or knowledge to become experts in all the fields they hold opinions in.

However, I actually AM being a bit of a snob as well. I'm proposing the deeply unpopular idea that not every person even has the capability to. It seems to have become a little-known fact that fifty percent of people are of below the median intelligence.

A lot of people are reluctant to admit that to themselves. They shouldn't be... It's an enormous relief when you finally realize that you don't have to have an opinion on everything.

You make it sound like the newspapers/companies are un-culpable for that effect. I believe it to be the case because I've seen cases were a newspaper presents a narrative as fact when those involved know very well it's just someone's spin for their own benefit. See <https://en.wikipedia.org/wiki/Gell-Mann_amnesia_effect>.

It's been a failure from both sides, attack on expertise and education from regressive elements, media abusing 'experts say' to produce all sorts of clickbait, experts choosing political/PR/convenience over honesty/sincerity and people who are not experts claiming to be experts (the situation here, or where they ask a 'smart guy' like a pop-physicist to talk about something they aren't actually an expert in)

I mean, you are effectively defending this particular expert, with your insinuation that the public should be more trusting of people framed as experts like this. As someone moderately knowledgeable in this area and moderately skeptical of CrowdStrike, the claim a priori seems far fetched to me. You can't say there's a war on expertise and then turn around and say "whether or not the person portrayed by this WaPo article as an expert is an expert or is correct...".

The problem with expertise is anyone can be an expert. I would challenge the integrity of anyone claiming any field has precisely zero idiots.

I don't feel they can be trusted on tech reports since 7 years ago, Bloomberg "The Big Hack".

  > Nobody trusts anyone or anything anymore.
Which, btw, is the goal of most disinformation campaigns. To create a post truth era.

I'll say it's ironic that the strategy comes out of Russia because there's an old Russian saying (often misattributed to Reagan) that's a good defense: trust but verify

And yet, I suspect if you look at the publications of "reliable" institutions in the 1980s, you'd find far more ridiculous things than you'd ever see in the modern era.

For one, half the things I see from that era had so much to gain from exaggerating the might and power of the Soviet Union. It's easy to dig up quotes and reports denying any sort of stagnation (and far worse - claiming economic growth higher than the west) as late as Andropov and Chernenko's premierships.

The Washington Post was always bad. Movement liberals just fell in love with it because they hated Trump. Always a awful, militaristic, working-class hating neocon propaganda rag that gleefully mixed editorial and news, the only thing that got worse with the Bezos acquisition were the headlines (and, of course, the coverage of Amazon.) The Wall Street Journal was more truthful, and actually cared about not dipping their opinions in their reporting. I could swear there's a Chomsky quote about that.

People put their names on it because it got them better jobs as propagandists elsewhere and they could sell their stupid books. It's a lot easier to tell the truth than to lie well; that's where the money and talent is at.

I'm way more confused why you think a company that makes its living on selling protection from threats, making such a bold claim with so little evidence is a good source.

Compare this to the current NPM situation where a security provider is providing detailed breakdowns of events that do benefit them, but are so detailed that it's easy to separate their own interests from the attack.

This reminds me of Databrick's CTO co-authoring a flimsy paper on how GPT-4 was degrading ... right as they were making a push for finetuning.

The person you replied to says there was no methodology. This is standard for mainstream media, along with no links to papers. If it gets reported in a specialist journal with detail I'll take it more seriously.

>Isn't that a pretty solid source...?

What, CrowdStrike?

Not sure why downvoted. Good journalism here would have been to show the methodology behind the findings or produce a link to a paper. Any article that says "Coffee is bad for you", as an example, that doesn't link to an actual paper or describes the methodology cannot be critically taken at face value. Same thing with this one. Appeal to authority isn't a good way to make a conclusion.

I'm not even gonna ask them to explain the methodology but it's 20-goddamn-25, link your source so that those who want to dig through that stuff can.

Washington Post is in what many characterize as a slow roll dismantling for having upset investors.

Per Wikipedia, WaPo is wholly owned by Bezos' Nash Holdings LLC. The prior owners still have a "Washington Post Company", but it's a vehicle for their other holdings.

Yes yes, I guess I was counting owner as investor

And they're free to destroy their investments.

It's WaPo, what do you expect. Western media is completely nuts since Trump & COVID.