> Something I don't think is well understood on HN is how driven by ideals many folks at Anthropic are

After 20 years of everyone in this industry saying "we want to make the world a better place" and doing the opposite, the problem here is not really related to people's "understanding".

And before the default answer kicks in: this is not cynicism. Plenty of folks here on HN and elsewhere legitimately believe that it's possible to do good with tech. But a billion dollar behemoth with great PR isn't that.

Exactly. At this level you don't just put out a statement of your personal opinion. This is run through PR and coordinated with the investors; otherwise the CEO finds himself on the street by tomorrow. Whatever their motives are, they are aligned with the VCs, because if they are not, the next day there is another CEO. As the parent stated, this is not cynicism. I see this as simply factual: it is the laws of money.

I am suspicious the whole thing is a PR stunt to build public trust.

In none of their statements do they say they won't do the things:

> we cannot in good conscience accede to their request.

That's very specifically worded to not say "under no circumstances will we do this".

> Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now

Is not saying they won't eventually be included.

They've left themselves a backtrack, and given the care with which this statement has been crafted, that's surely deliberate.

This. This is a public misdirection. They already signed a new deal. It may be to their disliking but nothing in the statement prevents them from moving forward.

That is speculation. You might be correct but this statement could simply be a strong signal to the administration to back down. A hail Mary.

Isn't that what we're all doing in this thread? We could certainly take the document at face value but as a parent commenter said, almost every company starts off with "don't be evil" then goes and does evil things.

Is anthropic different? Maybe. But personally I don't see any indication to give them the benefit of the doubt.

> ... to back down.

Or else what?

> They've left themselves a backtrack, and given the care with which this statement has been crafted, that's surely deliberate.

What's worse, someone in their PR department will read this thread and be disappointed that the spin didn't work.

I mean that’s just adulthood.

There are outcomes where the US government seizes the company. Not super likely, not impossible.

It would be naive to write a statement that a future event will never happen, under any circumstances. People who make that mistake get lambasted for hypocrisy when unforeseen circumstances arise.

I see recognition that making absolute statements about the future is best left to zealots and prophets. Which to me speaks of maturity, not duplicity.

> There are outcomes where the US government seizes the company. Not super likely, not impossible.

Are there historical examples in the US specifically where we've nationalized a business?

Because we've certainly invaded countries and assassinated leaders over exactly the same.

ETA: I could have answered my own question with two minutes of research. Yes, we have: https://thenextsystem.org/history-of-nationalization-in-the-...

I'm not sure why you are getting downvoted.

It is indeed a naive, or more likely a dishonest thing to do.

Anyone can promise anything. When there's little to no accountability and public memory/opinion doesn't last a week (or is easily manipulated anyway), then promises mean literally nothing. Much like how, in politics, temporary means permanent.

Or HackerNews itself, with them implementing a little Big Brother. It will, of course, absolutely and without a doubt only "nudge" people, and it will absolutely, under no circumstances, pinky promise, never get any worse or do anything else but that.

When there are millions of fools, those who actually recognize that they are being fooled are rarely significant in numbers. They're drowned out by the fools, until said fools "wake up" and cry "if only we had known!".

Well ... you could have known, but in your mindlessness you didn't listen and think.

"It must be true, because they say so. D'uh. What are you, dumb?"

This. I don't get why you are getting downvoted. The statement literally says:

  Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now:

The last word is very important: "now".

I'm not saying whether or not they're planning to back down, but this sentence doesn't imply that. The "now" is clearly in reference to the fact that they haven't been included in the past.

Being a tech forum centered around VC funding means we have a TON of tech bros (derogatory) here, who believe in nothing beyond getting their own piles of money for doing literally anything they can be paid to do. If you offered these guys $20 to murder a grandmother they'd ask if they have to cover the cost of the murder weapon or if that's provided.

I get it to a degree: people gotta eat, and especially right now the market is awful, and, not to mention, most hyperscaler businesses have been psychologically obliterating people for a decade or more at this point. Why not graduate to doing it with weapons of war too? But, personally, I sleep better at night knowing nothing I've made is helping guide missiles into school buses, but that's just me.

[dead]

I share this sentiment.

In general, I don't know if it's a coincidence, but here on HN, for example, I've noticed an increasing number of comments and posts emphasizing the narrative of how "well-intended" Anthropic is.

Feel free to judge them by their actions rather than intentions. This situation being an example.

[deleted]

I'd love to see the financial model that offsets losing your single biggest customer and substantial chunk of your annual revenue with some vague notion of public trust.

This is so short-sighted. We are so early into this AI revolution, and this administration is obviously in a tailspin, with the only folk left in charge being the least capable ones we have seen in a decade.

Imagine what the conversation would be like if Mattis, a highly decorated and respected leader, were still the SecDef. Instead we are seeing bully tactics from a failed cable-news pundit who has neither earned nor deserved any respect from the military he represents.

We are two elections and a major health issue away from a complete change of course.

But short sightedness is the name of the quarterly reporting game, so who knows.

> We are so early into this AI revolution…

I keep hoping it’s almost over.

Not trying to be a Luddite. I put multiple questions to AI tools yesterday, and let Claude/Zed do some boilerplate code/pattern rewriting.

I’ve worked in software for 35 years. I’ve seen many new “disruptive” movements come and go (open source, objects, functional, services, containers, aspects, blockchains, etc). I chose to participate in some and not in others. And whether I made the wrong choices or not, I always felt like I could get a clear enough picture of where the bandwagon was going that I could jump in, or hold back, or something in between. My choices weren’t always the same as others’, so it’s not like it was obvious to everyone. But the signal felt more deterministic.

With LLM/agents, I find I feel the most unease and uncertainty with how much to lean in, and in what ways to lean in, than I ever have before. A sort of enthusiasm paralysis that is new.

Perhaps it’s just my age.

Didn't we go through this same kind of uncertainty with PCs, the internet, and smartphones? It's early and we're all noodling around.

I'm seriously worried there won't be more elections. Not hyperbole at all.

> I'm seriously worried there won't be more elections. Not hyperbole at all.

Why? That's an unrealistic fear, driven by the insanely overwrought political rhetoric of 2026. Think about it: elections will be the absolute last thing to go.

If you want something to worry about, worry about this:

> And the stakes of politics are almost always incredibly high. I think they happen to be higher now. And I do think a lot of what is happening in terms of the structure of the system itself is dangerous. I think that the hour is late in many ways. My view is that a lot of people who embrace alarm don’t embrace what I think obviously follows from that alarm, which is the willingness to make strategic and political decisions you find personally discomfiting, even though they are obviously more likely to help you win.

> Taking political positions that’ll make it more likely to win Senate seats in Kansas and Ohio and Missouri. Trying to open your coalition to people you didn’t want it open to before. Running pro-life Democrats.

> And one of my biggest frustrations with many people whose politics I otherwise share is the unwillingness to match the seriousness of your politics to the seriousness of your alarm. I see a Democratic Party that often just wants to do nothing differently, even though it is failing — failing in the most obvious and consequential ways it can possibly fail. (https://www.nytimes.com/2025/09/18/opinion/interesting-times...)

It's not an unrealistic fear. Trump has been making noises about "taking over elections." Abolishing elections wholesale is very unlikely, sure, but a sham election rigged by a corrupt government? That's standard fare for authoritarians. And there's evidence of voting anomalies in swing states in the 2024 election.

https://www.theguardian.com/us-news/2026/feb/27/trump-voting...

https://electiontruthalliance.org/

Yeah, Russia still has "elections" for all the good that does them.

Trump _says_ lots. Most of it doesn't come true.

FYI, even though you have a new account, you were banned from your first comment and all your comments automatically show up as hidden-by-default to most users.

It's not who votes that counts, but who counts the votes.

(Attributed to Stalin, but it likely comes from an earlier despot in history.)

Authoritarian nations continue to have elections, turnout is near 100%, and Dear Leader wins with 90% of the vote.

I don't think it's crazy to worry about that, but elections are run by the states, there are over 100,000 polling places nationally, and people are pissed. On Jan 3, the terms of the entire current House of Representatives end; Democratic governors will still hold elections, and if there haven't been elections in GOP-led states, those states are out of representation. There are so many hurdles in the way of the fascists canceling or heavily interfering in elections, and they're all just so stupid.

WaPo headline “Administration plans to declare emergency to federalize election rules.” https://www.washingtonpost.com/politics/2026/02/26/trump-ele...

Yeah, they can plan whatever they want. No such authority exists, and it must really be emphasized that they're all so stupid.

Stupid and effective are not mutually exclusive.

I do agree with you that no such authority exists, but this administration seems to get away with a lot of things they have no authority to do.

[deleted]
[deleted]

If you think they're pissed now, just wait to see how they react to election interference.

I recently read up on how the House of Representatives renews itself and quite frankly it's one of the most beautiful processes I've seen, completely removing the influence of the prior congress.

Putin crushes every election he has. Of course there would be more elections.

Mattis: the same highly decorated and respected leader who was on the board of directors at Theranos... (edit: added Mattis)

Their whole strategy is that the lack of a legal moat protecting their product is an existential threat to human life. They are the only moral AI and their competitors must be sanctioned and outlawed. At which point they can transition from AI as commodity to “value” based pricing.

It’s not going to work, but I can’t blame Amodei and friends for trying to make themselves trillionaires.

This is why we should be skeptical of companies that want to tie themselves to the military industrial complex in the first place.

$200M is >2% of ARR at the last numbers we got from them, and losing it would set them back... checks notes... literally only a few days of ARR growth.

I'd love to see any evidence that this single biggest customer is provably and irreversibly lost on all levels of scrutiny as a result of this attempt at building public trust.

The rest of the world moves to using you?

It absolutely is a PR stunt. And the media is cheering.

It's absurd.

It's simple: If you do not like working with the military, cancel your contract with the military and pay the penalties.

They are explicitly not doing that.

This effectively is cancelling, isn't it?

You're implying cancelling quietly would be better. But the department would just use a different supplier. This seems like the action someone would take if they cared about the issue.

> If you do not like working with the military, ...

Eh? But they do like to work with the military. How else are you going to "defend the United States and other democracies, and to defeat our autocratic adversaries"?

They want to work with the military, with just two additional guardrails.

[dead]

> it is simply the laws of money

The First Law of Money: Money buys the Law.

To quote Brennan Lee Mulligan, "Laws are threats made by the dominant socioeconomic ethnic group in a given nation."

Certainly pre-democracy, other than the ethnic group bit.

The full[1] quote is:

> “Laws are a threat made by the dominant socioeconomic ethnic group in a given nation. It’s just the promise of violence that’s enacted, and the police are basically an occupying army, you know what I mean?”

...Which is funny, but technically speaking, it's (more or less) a paraphrase of Max Weber's very serious political-science definition of a state: “a monopoly over the legitimate use of violence in a defined territory”.

[1] Minus the last line, which I will allow others to discover for themselves

That's maybe the second law. The first one is: money is always finite.

Look at how Elon Musk behaved. Do you think the VCs gladly approved what he did with Twitter? They might want to keep chasing quarterly results, but sometimes, like with Zuckerberg, they can't. Not enough money. Similar examples: Google's funding rounds, or how often the more financially backed politician loses to a competitor. Or, if you will, Vladimir Putin's idea that he can buy whatever results he wants, and that guy is a very wealthy person. There are always limits, putting the money law in second place. We might argue that often the existing money is enough... but in more geopolitical, continuum-curving cases there are other powerful forces.

The Twitter acquisition wasn't funded by venture capital, so your question about VC approval doesn't apply.

If you're using VC as a general term for "investor" (inaccurately), then the answer to your question is that the major investors, such as Larry Ellison and the Saudi monarchy, wanted political control of Twitter, which meant that they did (apparently) approve what Musk did with it.

You're missing the point. It matters little where exactly the money to pay for the acquisition of Twitter came from. What matters is that nobody expected Twitter to lose employees and users in such numbers. So whoever gave the money was still limited in ensuring the results were fully in line with their wishes. Because money is always finite.

FWIW, I don’t actually know whether the board of Anthropic has the actual power to replace its CEO, or whether Dario has retained some form of personal super-control shares, Zuckerberg-style.

At some level of growth, the dynamics between competent founders and shareholders flip. Even if the board could afford to replace a CEO, it might not be worth it.

I'd counter that at this level of capital, if the CEO doesn't align well with the capital, then super-control shares will be overpowered by super-lawyers and, if needed, some super-donations. OpenAI was a public-interest company...

Not at all. Especially at that level of capital. It’s the equity equivalent of “if you owe a bank a million dollars, you’re in trouble. If you owe a bank a billion dollars, the bank is in trouble.”

Capital is extremely fungible, and typically extremely overleveraged. Lawyers, on the other hand, are extremely overprotective. They won’t generally risk the destruction of capital, even in slam-dunk cases. Vide WeWork.

This is fundamentally incorrect.

Anthropic has an odd voting structure. While the CEO Dario Amodei holds no super-voting shares, there are special shares controlled by a separate council of trustees who aren't answerable to investors and who have the power to replace the Board. So in practice it comes down to personal relationships.

Surely you mean the laws of shareholder capitalism. There are many things you can do with money, and only some of them are legally backed by rules that ensure absolute shareholder power.

> everyone in this industry

So in the last 20 years nothing good has come out of the software industry (if that is the industry you mean)?

I find it somewhat ironic, because this type of generalization is, for me, the same issue that some of the people saying "they want to make a better place" have: a failure to accept that reality is complex.

There were huge benefits for society from the software industry in the last 20 years. There were huge downsides as well. Around 2000, lots of people were saying "Microsoft will lock us in forever". 20 years later, the fear "moved" to other things. Imagining that companies can last forever seems misguided. IBM, Intel, Nokia and others were once great and the only ones, but ultimately got copied and pushed from the spotlight.

Everyone in this industry making a certain bullshit claim. I did qualify my statement; don’t cut my words to make a strawman.

Additionally, I state at the end that I do believe it’s possible.

So do you know everyone in the industry who made such a claim? Sure, maybe you meant to restrict it further to "everyone I have personally noticed saying/writing that" (or something along those lines), but even then, do you know all the stuff they did after saying it? (The statement also included "doing the opposite", which I find quite strong.)

If I see "everyone" I would expect it to actually mean "everyone under the constraints", the word "everyone" has a certain meaning and is very powerful, why use for situations where other words like "many", "most" might be more appropriate?

> So do you know everyone in the industry that made that such a claim

Of course, I wouldn't have said so otherwise.

Here's another one: every pedant in this website never adds anything useful to any conversation.

I don't even think the two things are contradictory. People who put too much value in their ideals tend to overlook the consequences of those ideals in real life, and do wrong without deviating an inch from them.

But is that really the problem in big tech today? To me it looks like sooner or later they cave from their ideals (or leadership changes) and that the reason every time is that they want to make even more money.

I think that's still too rosy a view; it's clear with a lot of big tech that they never had the ideals in the first place. They use claims of principle for marketing purposes and then discard them when it's no longer convenient.

Or, perhaps even more likely, the ideals inevitably get corrupted by access to unthinkable economic power/leverage, as happened with more or less all the other giants that started with strongly idealistic leadership, and the leadership may actually delude itself into thinking it's still on the right track, as a sort of defense mechanism.

Back when they published the article on the Claude-operated mass-scale data breach last year, the conclusions were delivered in a bafflingly casual tone, as if it were a weather report: yeah, the world has become a lot more dangerous now (on its own), so you may want to start using Claude for cyber-defense, and we are doing our best to help you protect your business. I rolled my eyes at that so hard they popped out of their sockets. Weren't you... the guys... who made it that way and enabled that very attack? Very convenient to sell weapons to both sides, isn't it? Not at all like a mafia business. Very responsible and ideal-driven.

Consider also the part that goes unsaid in the address: Amodei is strongly against the use of Claude for mass surveillance of Americans, but he says nothing about mass surveillance of anybody else (in fact, he is proactively giving foreign intelligence a green light in his address), and he deliberately avoids any discussion of the fact that his relationship with the Pentagon is mediated through the contract with Palantir they signed something like 1.5 years ago. Palantir is a company whose business is literally mass surveillance, by the way! I, too, am so ideal-driven that I willingly make deals with the devil! But now that he's successfully captured the popular sentiment, people are going to consider him the moral champion without bothering to look at these and other glaring contradictions.

Ideals have always been represented in literature as a virtue and a problem for humans. I find real life is no different.

Sure, sooner or later. I don't want to even guess where the new AI companies are on the path that leads to that destination, but right now it looks like Anthropic is not at that stage. Heck, even though a lot of people find Sam Altman slimy, even OpenAI isn't yet at that stage.

I believe this is the classic behaviour of every shareholder-driven business. You can build on ideals from the start, but once you acquire some position, money-making is on the menu. E.g. deliberately worsening the user experience for better revenue.

The possibility of turning on the heated seats in a car you own for a small monthly fee is absurd, yet very real. I'm looking forward to the enshittification of current AI tools.

Yeah it's not that the people involved have no ideals, it's that the company structure as a whole doesn't, and over time that structure will eventually outlive, corrupt, and/or overpower the ideals of the founders or other principled individuals at the company.

I can’t think of a single thing Meta does that isn’t driven by pure greed.

Yes, though Meta is a bad example as they started off with the values of Zuckerberg, and still have them.

Exactly right. But I think that makes it a good example, actually. Company DNA is a thing. Bill Gates isn't running Microsoft anymore. Still...

What would be a more appropriate example?

Apple, Tesla, Oculus.

The first two are definitely "heroes who lived long enough to become villains"; Oculus is more of an "I reckon", given how it was seen right up until getting bought by Facebook.

Adobe?

But in the stock market, it is almost impossible for companies like Anthropic, or any successful startup, not to become villains (profit first, no matter what). Anthropic especially needs to burn huge amounts of money, so they need a lot of funding. The only way to keep the founders' idealism is probably to copy Zuckerberg: split the stock into voting and non-voting shares and trade only the non-voting ones.

I'm not denying 95% of that, only saying that Zuckerberg didn't have any idealism to lose in the first place.

I actually forgot that his first site was Facemash, whose single purpose was to rate the "hotness" of each individual girl at his university.

[dead]

Anthropic is not a public company.

LOL, Palmer Luckey is a right-wing war mongering psychopath.

All of Meta's VR stuff should rationally be cut loose and refunded if it were all about greed. That stuff only survives because Zuck is a nerd who wants it to happen (but it's not going to.)

Oh sure. I don't want to say everybody is driven by ideals and not greed, but even people with strong ideals and good intentions can do a lot of bad by being blinded by those same ideals.

I think most people are aware that, irrespective of a founder's vision, company morals usually don't survive the MBA-inisation phase of a company's growth.

Depends. Many still reflect the founder's vision, even if that vision has evolved over time.

Can you provide an example of that for an American venture backed corporation older than a decade?

Not the person you're replying to, and I may be wrong about this, but Amazon?

Jeff's original vision was "relentless customer focus" and ...

actually on second thought I'm seeing the argument 'Amazon stopped caring about customers and is in full enshittification mode at this point'.

But maybe Amazon circa ~2010/2015, or Google around 2010 was still pretty close to the original vision of customer service/organizing the world's information.

Or Apple? They're still making nice computers, although not sure they count as VC backed.

Stripe perhaps? Hashicorp?

Well, Google's vision was to catalog all the world's data.

Apple wanted to make personal computing stable, and they were absolutely VC-backed.

I suppose the original question is vague enough that it could always stretch to count anything as "the founder's vision", even if the vision changes. But then there's nothing really left to say: the company is just whatever the person who started the organization makes of it, and even that you could debate.

True. Which is all the more reason to call bullshit on claims of "doing good" or "having ideals" by anyone building a company that can eventually be run by MBAs.

The impact of MBAs might be decreasing..

Exactly. I'd love to believe that at Anthropic, idealism trumps money. But Google was once idealistic too. OpenAI was too. It's really hard to resist the pull of money. Especially if you're a for-profit corporation, but OpenAI wasn't even that at first.

> not related to people's "understanding".

Except for the understanding that it's foolish to believe anything that sounds too good to be true. Yes, believing that people who want to make money and achieve positions of power also want to make the world a better place is absolutely foolish. Ridiculously foolish.

Reminds me of Effective Altruism and the collective results of people claiming to believe in that virtue.

I don't think it's cynical to acknowledge the pattern that publicly owned companies will eventually cave to the desires of their shareholders.

I understand Anthropic is not public, but I assume there's an IPO coming.

I don't think it's cynical to believe that a company can make the world a worse place, or that Anthropic as a company will make many horrible choices.

I do think it's cynical to believe that people, and groups of people, can't be motivated by more than money.

This is a component for sure, but also think of why Anthropic was born. It exists because of disagreements with OpenAI on the values of AI safety and principles.

At some point I've wondered whether "fiduciary duty", when pushed to the highest corporate levels, always conflicts with "make the world a better place".

i.e. Fiduciary Duty Considered Harmful

[deleted]

Cynicism is the newspeak substitute for sincerity; no need to worry about being called a cynic in this post-truth world of snowflakes.

And that's okay. So we judge them one decision at a time. So far, Anthropic is good in my book.

> Plenty of folks here on HN and elsewhere legitimately believe that it's possible to do good with tech. But a billion dollar behemoth with great PR isn't that.

To expand on that a bit, many of us (myself included) fully believe founders set out with lofty and good goals when organizations are small. Scale is power, and power corrupts. It's as simple as that. It's an exceptionally rare quality to resist that corruption, and everyone has a breaking point. We understand humans because we are humans, and we understand that large organizations, especially corporations, are fundamentally incapable of acting morally (in fact corporations are inherently amoral).