I don't trust Anthropic's statement too much. In the past they have done things like
- https://the-decoder.com/anthropics-head-of-safeguards-resear...
- https://the-decoder.com/anthropics-ceo-admits-compromising-w... (see also https://news.ycombinator.com/item?id=44651971, https://futurism.com/leaked-messages-ceo-anthropic-dictators)
- https://the-decoder.com/anthropic-ceo-dario-amodei-backs-pre...
The relevant quotes from those articles are:
> He recalls meeting President Trump at an AI and energy summit in Pennsylvania, "where he and I had a good conversation about US leadership in AI,"
> "Unfortunately, I think 'No bad person should ever benefit from our success' is a pretty difficult principle to run a business on... This is a real downside and I'm not thrilled about it."
> "Throughout my time here, I've repeatedly seen how hard it is to truly let our values govern our actions. I've seen this within myself, within the organization, where we constantly face pressures to set aside what matters most, and throughout broader society too." (from a researcher at Anthropic)
I don't think any of this is particularly damning. Even if you don't like the president, I don't think it's bad to say that you had a good conversation with him. I believe the CEO of NVIDIA has said something similar. The Saudis invest in many public US companies; does that make those companies less trustworthy? What about taking private capital from institutions such as State Street and BlackRock? The last quote seems like more of a reflection than an allegation. It reads to me as a desire to do better.
I'm all for not trusting companies, but Anthropic seems to be one of the few that's trying to do good. I think we've seen a lot worse from many of their competitors.
The problem is this:
> The Saudis invest in many public US companies; does that make those companies less trustworthy?
It does. If Anthropic takes money from the Middle East, that might be the reason they cannot work for the Pentagon: the Pentagon works together with the Israeli forces, and Middle East investors might not like that. So Anthropic has to decide whether to take a lot of money from the Middle East or to work for the Pentagon.
Of course the problem goes much deeper than Anthropic alone. I don't understand why taking money from dictatorships doesn't count as money laundering in our society. It is, essentially, dirty money, generated by slavery and the forceful suppression of people. We should forbid all companies from taking this kind of dirty money. But because we don't do that at the moment, companies that refuse this dirty money are at a disadvantage against companies that take it. And because companies are all about money, in the end they are essentially forced to act against their good intentions just to survive.
We as a society have to stop this. We must make sure that companies that don't take dirty money survive the competition. My idea would be to extend the rules against money laundering to all countries that are dictatorships. But there may be other ways to level the playing field between companies, so that we as a society can help them make the right decision.
X/xAI has received billions in investment from the royal families of Saudi Arabia, UAE, and Qatar.
Who hasn't taken money from the Middle East?
So do you check the ownership of every public company you might interact with?
Maybe not, and maybe you shouldn't. But I feel the real story here isn't what Anthropic is saying. While Anthropic seems to be bending over backwards to give the Defense Department exactly what it needs, defining two of the most reasonable red lines, ones that most Americans would agree with and that are already likely illegal to cross, Pete Hegseth in return is threatening the continued existence of their company.
So let's see what happens tonight at 5:01 PM, but Anthropic isn't really the story here.
I read the articles. As far as the factual reporting goes, I will tentatively take them at face value. But their editorializing is frankly weak by my standards; it would not survive scrutiny in a freshman philosophy class.
Ethics is complicated. I’m not saying this means it can’t be reasoned about and discussed. It can! But the sources you’ve cited have shown themselves to be rather shallow.
I encourage everyone to write out your ethical model and put yourself in their shoes and think about how you would weigh the factors.
There is no free lunch. For many practical decisions with high stakes, many reasonable decisions from one POV could be argued against from another. It is the synthesis that matters the most. Among those articles, I don’t see great minds doing their best work. (The constraints of their medium and funding model are a big problem I think.)
Read the take on predictive policing in Brian Christian's "The Alignment Problem" if you want a specific example of what I mean. There are actual mathematical impossibilities at play when it comes to common-sense ethical reasoning.
Common sense ethical reasoning has never been very good at new or complicated situations. “Common sense” at its worst is often a rhetorical technique used to shut down careful thinking. At its best, it can drive us to pay attention to our conscience and to synthesize.
I suggest finding better discussions and/or allocating the time yourself to think through it. My preferred sources for AI and ethics discussions are highly curated. I don't "trust" any of them absolutely.* They are all grist for the mill.
I get better grist from LessWrong than HN 99% of the time. I discuss here to make sure I have a sense of what more “mainstream” people are discussing. HN lags the quality of LW — and will probably never catch up — but it does move in that direction usually over time. I’m not criticizing individuals here; I’m commenting on culture.
Please don't confuse what I'm saying with pure subjectivity. One could conduct scientific experiments about the quality of a forum's discussions in many senses. Which places draw upon better information? Which synthesize it more carefully? Which drill down into detail? Which participants have allocated more time to thinking clearly? Which strive to make predictions? Which prioritize hot takes? Which prioritize mutual understanding?
It isn’t even close.
Opinions and the Overton window are moving pretty rapidly, compared to even one year ago.
* I’ve written several comments about viewing trust as a triple (who, what, why). This isn’t my idea: I stole it.
I understand you are criticizing their editorializing, but can't tell if you agree with the conclusions or not. Care to editorialize yourself?
When someone says something that I think is poorly framed, I often reframe it and speak to that instead. (Lots of people do this, even if they don’t realize it. I’m aware that I do, for better and worse, and I still prefer it; I think it is more authentic. I think some of the best ways we can enrich other people’s lives is by sharing different ways of processing the world. Lots of people get locked into pretty uninteresting narratives.)
So reframe I did. (I don’t think those articles you cited are worth any more attention than I’ve already given them.)
My most blunt editorializing would be this: most people would be better grounded if they read AI alignment and safety books by Stuart Russell, Nick Bostrom, Brian Christian, Eliezer Yudkowsky, and Nate Soares. If you’ve read others that you recommend, please let me know. I’ve read many that I don’t usually recommend.
As far as long-form articles go, I recommend Paul Christiano and Zvi Mowshowitz, as well as anyone with the fortitude to make predictions while sharing their models (like the AI 2027 crew).
I recommend browsing “Best of Year Y” (or whatever they are called) articles on the AI Alignment Forum and LessWrong. They are my go-tos for smart & informed writing on AI. For posts that have more than say 100 votes, the quality bar is tremendously higher than almost anywhere else I’ve seen, including mainstream sources with great reputations.
In conclusion, I would rather point to interesting people to read and places to engage.