The problem with forcing public policy on companies is that companies are ultimately made up of individuals, and surely you can’t force public policy down people’s throats.
I’m sure nothing good can come out of strong-arming some of the brightest scientists and engineers the U.S. has. Such a waste of talent, trying to make them bend to the government’s wishes… instead of actually fostering innovation in the very competitive AI industry.
I don't see how public policy is being "forced" on anyone here? It seems like the system is working as intended: government wants to do X; company A says "I won't allow my product to be used for X"; government refuses to do business with company A. One side thinks the government should be allowed to dictate terms to a private supplier, the other side thinks the private supplier should be allowed to dictate terms to the government. Both are half right.
You can argue that the government refusing to do any business with company A is overreach, I suppose, but I imagine that the next logical escalation in this rhetorical slapfight is going to be the government saying "we cannot guarantee that any particular use will not include some version of X, and therefore we have to prevent working with this supplier"...which I sort of see?
Just to take the metaphor to absurdity, imagine that a maker of canned tomatoes decided to declare that their product cannot be used to "support a war on terror". Regardless of your feelings on wars on terror and/or canned tomatoes, the government would be entirely rational to avoid using that supplier.
I think the bigger insanity here is the labeling of Anthropic as a supply chain risk. It prohibits DoD agencies and contractors from using Anthropic services. It'd be one thing if the DoD simply didn't use Anthropic. It's another when it actively attempts to isolate Anthropic for political reasons.
It means that all companies contracting with the government have to certify that they don't use Anthropic products at all. Not just in the products being offered to the government.
This is a massive body slam. This means that Nvidia, every server vendor, IBM, AWS, Azure, Microsoft and everybody else has to certify that they don't do business directly or indirectly using Anthropic products.
Microsoft, Azure, AWS, Nvidia and IBM all have deals with other providers for AI. That itself doesn't move the needle.
I think the point is that would be catastrophic for Anthropic.
Who cares about Anthropic? That's the guys who are pushing for regulations to prevent people from using local models. The earlier they are gone the better
"First they came for Anthropic, and I said nothing because fuck those guys I guess."
First they came for Anthropic in spite of the fact that Anthropic tried so hard to make them come for local models first.
Going by what Hegseth said, it bans them from relationships or partnering with Anthropic at all. No renting or selling GPUs to them; no allowing software engineers to use Claude Code; no serving Anthropic models from their clouds. Probably have to give up investments; Amazon alone has invested like $10B in Anthropic.
It bans them from using all open source software unless they have signed an agreement with the developer to prohibit use of Claude Code.
What open source software ? Anthropic doesn't make open source software?
All open source software, because the developers might use Claude Code.
Nvidia can also say no; then the government won't have a choice but to yield, or to not have AI at all.
It's a government department signalling who's boss.
> It prohibits DoD agencies and contractors from using Anthropic services. It'd be one thing if the DoD simply didn't use Anthropic.
This is literally the mechanism by which the DoD does what you're suggesting.
Generally speaking, the DoD has to do procurement via competitive bidding. They can't just arbitrarily exclude vendors from a bid, and playing a game of "mother may I use Anthropic?" for every potential government contract is hugely inefficient (and possibly illegal). So they have a pre-defined mechanism to exclude vendors for pre-defined reasons.
Everyone is fixated on the name of the rule (and to be fair: the administration is emphasizing that name for irritating rhetorical reasons), but if they called it the "DoD vendor exclusion list", it would be more accurate.
That doesn’t sound right. Surely there’s a big difference between Anthropic selling the government direct access to its models, and an unrelated contractor that sells pencils to the government and happens to use Anthropic’s services to help write the code for their website.
Let me put it this way: DoD needs a new drone and they want some gimmicky AI bullshit. They contract the drone from Lockheed. Lockheed is not allowed to source the gimmicky AI bullshit from Anthropic because they have been declared a supply-chain risk on the basis that they have publicly stated their intention to produce products which will refuse certain orders from the military.
Let’s put it this way: the DoD is buying pencils from a company. Should that company be prohibited from using Claude?
You are confusing the need to avoid Anthropic as a component of something the DoD is buying, with prohibitions against any use.
The DoD can already sensibly require providers of systems to not incorporate certain companies components. Or restrict them to only using components from a list of vetted suppliers.
All without prohibiting entire companies from uses unrelated to what the DoD purchases, or from uses that aren't a component of something it buys.
There seems to be a massive misunderstanding here - I'm not sure on whose side. In my understanding, if the DoD orders an autonomous drone, it would probably write in the ITT (invitation to tender) that the drone needs to be capable of doing autonomous surveillance. If Lockheed uses Anthropic under the hood, it does not meet those criteria, and cannot reasonably join the bid?
What the declaration of supply chain risk does, though, is that nobody at Lockheed can use Anthropic in any way without risking being excluded from any DoD bids. This effectively costs Anthropic half or more of the businesses in the US as customers.
And maybe to take a step back: Who in their right minds wants to have the military have the capabilities to do mass surveillance of their own citizens?
> Who in their right minds wants to have the military have the capabilities to do mass surveillance of their own citizens?
Who in their right minds wants to have the US military have the capability to carry out an unprovoked first strike on Moscow, thereby triggering WW3, bringing about nuclear armageddon?
And yet, do contracts for nuclear-armed missiles (Boeing for the current LGM-30 Minuteman ICBMs, Northrop Grumman for its replacement the LGM-35 Sentinel expected to enter service sometime next decade, and Lockheed Martin for the Trident SLBMs) contain clauses saying the Pentagon can't do that? I'm pretty sure they don't.
The standard for most military contracts is "the vendor trusts the Pentagon to use the technology in accordance with the law and in a way which is accountable to the people through elected officials, and doesn't seek to enforce that trust through contractual terms". There are some exceptions – e.g. contracts to provide personnel will generally contain explicit restrictions on their scope of work – but historically, classified computer systems/services contracts haven't contained field-of-use restrictions.
If that's the wrong standard for AI, why isn't it also the wrong standard for nuclear weapons delivery systems? A single ICBM can realistically kill millions directly, and billions indirectly (by being the trigger for a full nuclear exchange). Does Claude possess equivalent lethal potential?
Anthropic doesn't object to fully autonomous AI use by the military in principle. What they're saying is that their current models are not fit for that purpose.
That's not the same thing as delivering a weapon that has a certain capability but then putting policy restrictions on its use, which is what your comparison suggests.
The key question here is who gets to decide whether or not a particular version of a model is safe enough for use in fully autonomous weapons. Anthropic wants a veto on this and the government doesn't want to grant them that veto.
Let me put it this way–if Boeing is developing a new missile, and they say to the Pentagon–"this missile can't be used yet, it isn't safe"–and the Pentagon replies "we don't care, we'll bear that risk, send us the prototype, we want to use it right now"–how does Boeing respond?
I expect they'll ask the Pentagon to sign a liability disclaimer and then send it anyway.
Whereas, Anthropic is saying they'll refuse to let the Pentagon use their technology in ways they consider unsafe, even if the Pentagon indemnifies Anthropic for the consequences. That's very different from how Boeing would behave.
Why are we calibrating our ethical barometer against the actions of existing companies and DoD contractors? The military-industrial apparatus has been insane for far too long, as Eisenhower warned.
When we're entering the realm of "there isn't even a human being in the decision loop, fully autonomous systems will now be used to kill people and exert control over domestic populations" maybe we should take a step back and examine our position. Does this lead to a societal outcome that is good for People?
The answer is unabashedly No. We have multiple entire genres of books and media, going back over 50 years, that illustrate the potential future consequences of such a dynamic.
There are two separate aspects to this case.
* autonomous weapons systems
* private defense contractor leverages control over products it has already sold to set military doctrine.
The second one is at least as important as the first one, because handing over our defense capabilities to a private entity which is accountable to nobody but its shareholders and executive management isn't any better than handing them over to an LLM afflicted with something resembling BPD. The first problem absolutely needs to be solved but the solution cannot be to normalize the second problem.
But parent is right, both Lockheed and the pencil maker will have to cease working with Anthropic over this.
> Surely there’s a big difference between Anthropic selling the government direct access to its models, and an unrelated contractor that sells pencils to the government and happens to use Anthropic’s services to help write the code for their website.
Yes, this is the part where I acknowledge that it might be overreach in my original comment, but it's not nearly as extreme or obvious as the debate rhetoric is implying. There are various exclusion rules. This particular rule was (speculating here!) probably chosen because a) the evocative name (sigh), and b) because it allows broader exclusion, in that "supply chain risks" are something you wouldn't want allowed in at any level of procurement, for obvious reasons.
Calling canned tomatoes a supply chain risk would be pretty absurd (unless, I don't know...they were found to be farmed by North Korea or something), but I can certainly see an argument for software, and in particular, generative AI products. I bet some people here would be celebrating if Microsoft were labeled a supply chain risk due to a long history of bugs, for example.
MIGHT be overreach to call this a supply chain risk?!? That is absolutely ludicrous.
To quote one of the greatest movies of all time: That’s just, like, your opinion, man.
You're making it sound like this is commonly practiced and a standard procedure for the DoD, yet according to Anthropic,
>Designating Anthropic as a supply chain risk would be an unprecedented action—one historically reserved for US adversaries, never before publicly applied to an American company.
Some very brief googling also confirmed this for me too.
>Everyone is fixated on the name of the rule (and to be fair: the administration is emphasizing that name for irritating rhetorical reasons), but if they called it the "DoD vendor exclusion list", it would be more accurate.
This statement misses the point. The political punishment to disallow all US agencies and gov contractors from using Anthropic for _any_ purpose, not just domestic spying, IS the retaliation, and is the very thing that's concerning. Calling it "DoD vendor exclusion list" or whatever other placating phrase or term doesn't change the action.
>an unprecedented action
it's also unprecedented for a contractor to suddenly announce their products will, from now on, be able to refuse to function based on the product's evaluation of what it perceives to be an ethical dilemma. Just because Silicon Valley gets away with bullying the consumer market with mandatory automatic updates and constantly-morphing EULAs doesn't mean they're entitled to take that attitude with them when they try to join the military-industrial complex. Actually, they shouldn't even be entitled to take that attitude to the consumer market, but sadly that battle was lost a long time ago.
>for _any_ purpose
they're allowed to use it for any purpose not related to a government contract.
> it's also unprecedented for a contractor to suddenly announce their products will, from now on, be able to refuse to function based on the product's evaluation of what it perceives to be an ethical dilemma
That is a deeply deceptive description of what happened. Anthropic was clear from the beginning of the contract the limitations of Claude; the military reneged; and beyond cancelling the contract with Anthropic (fair enough), they are retaliating in an attempt to destroy its businesses, by threatening any other company that does business with Anthropic.
>Anthropic was clear from the beginning of the contract the limitations of Claude
No, that's not what they said.
"Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now".
It’s not clear to me that the AI itself will refuse. You could build a system where AI is asked if an image matches a pattern. The true/false is fed to a different system to fire a missile. Building such a system would violate the contract, but doesn’t prevent such a thing from being built if you don’t mind breaking a contract.
I'm not completely familiar with bidding procedures, but don't they usually have requirements? Why not just list a requirement of unrestricted usage? Or state: we require models to be available for AI murder drones, or whatever. Anthropic then can't bid, and there's no need to designate them a supply chain risk.
> Anthropic then can't bid
Thing is, they very much want access to Anthropic's models. They're top quality. So they definitely want Anthropic to bid, AND to give them unrestricted access.
And yet Anthropic is free to choose who to do business with, including the government. There are countless companies who have exclusions for certain applications, but that does not make them a supply chain risk.
> It prohibits DoD agencies and contractors from using Anthropic services. It'd be one thing if the DoD simply didn't use Anthropic.
But that's what the supply-chain risk is for? I'm legitimately struggling to understand this viewpoint of yours wherein they are entitled to refuse to directly purchase Anthropic products but they're not entitled to refuse to indirectly purchase Anthropic products via subcontractors.
Supply chain risk is not meant for this. The government isn't banning Anthropic because using it harms national security. They are banning it in retribution for Anthropic taking a stand.
It's the same as Trump claiming emergency powers to apply tariffs, when the "emergency" he claimed was basically "global trade exists."
Yes, the government can choose to purchase or not. No, supply chain risk is absolutely not correct here.
> The government isn't banning Anthropic because using it harms national security. They are banning it in retribution for Anthropic taking a stand.
You might be completely right about their real motivations, but try to steelman the other side.
What they might argue in court: Suppose DoD wants to buy an autonomous missile system from some contractor. That contractor writes a generic visual object tracking library, which they use in both military applications for the DoD and in their commercial offerings. Let’s say it’s Boeing in this case.
Anthropic engaged in a process where they take a model that is perfectly capable of writing that object tracking code, and they try to instill a sense of restraint in it through RLHF. Suppose Opus 6.7 comes out and it has internalized some of these principles, to the point where it adds a backdoor to the library that prevents it from operating correctly in military applications.
Is this a bit far fetched? Sure. But the point is that Anthropic is intentionally changing their product to make it less effective for military use. And per the statute, it’s entirely reasonable for the DoD to mark them as a supply chain risk if they’re introducing defects intentionally that make it unfit for military use. It’s entirely consistent for them to say, Boeing, you categorically can’t use Claude. That’s exactly the kind of "subversion of design integrity" the statute contemplates. The fact that the subversion was introduced by the vendor intentionally rather than by a foreign adversary covertly doesn’t change the operational impact.
I would hope the DoD would test things before using them in the theater of war.
But there will always be deficiencies in testing, and regardless, the point is that Anthropic is intentionally introducing behavior into their models which increases the chance of a deficiency being introduced specifically as it pertains to defense.
The DoD has a right to avoid such models, and to demand that their subcontractors do as well.
It’s like saying “well I’d hope Boeing would test the airplane before flying it” in response to learning that Boeing’s engineering team intentionally weakened the wing spar because they think planes shouldn’t fly too fast. Yeah, testing might catch the specific failure mode. But the fact that your vendor is deliberately working against your requirements is a supply chain problem regardless of how good your test coverage is.
The rule in question is exactly meant for “this”, where “this” equals ”a complete ban on use of the product in any part of the government supply chain”. That’s why it has the name that it has. The rule itself has not been misconstrued.
You’re really trying to complain that the use of the rule is inappropriate here, which may be true, but is far more a matter of opinion than anything else.
You keep trying to say this all over these comments but this isn’t how the law works, at all.
I fully understand that they are using it to ban things from the supply chain. The law, however, is not “first find the effect you want, then find a law that results in that, then accuse them of that.”
You can’t say someone murdered someone just because you want to put them in jail. You can’t use a law for banning supply chain risks just because you want to ban them from the supply chain.
This isn’t idle opinion. Read the law.
It doesn't harm national security, but only so long as it's not in the supply-chain. They can't have Lockheed putting Anthropic's products into a fighter jet when Anthropic has already said their products will be able to refuse to carry out certain orders by their own autonomous judgement.
The government can refuse to buy a fighter jet that runs software they don't want.
Is it really reasonable to refuse to buy a fighter jet because somebody at Lockheed who works on a completely unrelated project uses claude to write emails?
That's not what Anthropic said. They said their products won't fire autonomously, not that they will refuse when given an order from a human.
I’m not sure if you deliberately choose to not understand the problem. It’s not just that Lockheed can’t put Anthropic AI in a fighter jet cockpit, it’s that a random software engineer working at Lockheed on their internal accounting system is no longer allowed to use Claude Code, for no reason at all. A supply chain risk is using Huawei network equipment for military communications. This is just spiteful retaliation because a company refuses to throw its values overboard when the government says so.
The government declaring a domestic company as a supply chain threat is a tad more than “refusing to do business” don’t you think?
[flagged]
It stops anyone with government contracts from using Anthropic, not just those bidding on government contracts.
[flagged]
No. It is much more than this.
If I sell red widgets that I make by hand to the government, I won't be allowed to use Anthropic to help me write my web-site.
You’re just restating the implication of the rule, but the rule is as I stated. That’s the point of having such a rule.
As you said: focus on what it does.
What it does is prevent companies that Anthropic needs to do business with from doing business with Anthropic.
> What it does is prevent companies that Anthropic needs to do business with from doing business with Anthropic.
If Anthropic “needs” the government to not have this rule, then perhaps they had a losing hand, and they overplayed it.
I don’t agree with you and think you’re being melodramatic, but if you are right, that’s my response.
I don't think any business can survive being told that they can't buy from their major suppliers or sell to major customers for very long.
But Anthropic can't be a winning bidder, can they? They're specifically saying they won't offer certain services that the US Gov wants. Therefore they de facto fail any bid that requires them to offer those services. (And from Anthropic's side, it sounds like they're also refusing to bid for those contracts.)
Is that not sufficient here?
No domestic company has ever before been declared a supply chain risk. If this is the normal way of excluding a supplier from a bidding, are you saying the DoD has never before excluded a domestic supplier from a bidding?
That’s because no company that sells weapons to the government has ever been brazen enough to tell the government how they can and cannot use their purchase. It’s unprecedented because most companies that sell to the government are publicly traded and have a board that would never let this happen. It’s unprecedented because Anthropic is behaving like a reckless startup.
That’s what they will argue, anyway.
This is just factually incorrect.
To begin with, the existing contract included the language on usage.
Other companies also have such language about usage. It's fairly standard, and is little more than licensing terms.
The idea this is unprecedented is some PR talking point nonsense.
> the existing contract included the language on usage. Other companies also have such language about usage.
The existing contract is only a few dozen months old. It didn’t hold up to scrutiny under real world usage of the service. The government wants to change the contract. This is not the kill shot you think it is. It’s totally normal for agreements to evolve. The government is saying it needs to evolve. This is all happening rapidly and it’s irrelevant that the government agreed to similar terms with OpenAI as well. That agreement will also need to evolve. But this alone doesn’t give Anthropic any material legal challenge. The courts understand bureaucracy moves slowly better than anyone else, and won’t read this apparent inconsistency the same way you are.
That is misinformation. It would be essentially a death sentence for a company like Anthropic, which is targeting enterprise business development. No one who wants to work with the US government would be able to have Claude on their critical path.
> (b) Prohibition. (1) Unless an applicable waiver has been issued by the issuing official, Contractors shall not provide or use as part of the performance of the contract any covered article, or any products or services produced or provided by a source, if the covered article or the source is prohibited by an applicable FASCSA orders as follows:
https://www.acquisition.gov/far/52.204-30
> That is misinformation. It would be essentially a death sentence for a company like Anthropic, which is targeting enterprise business development.
"Misinformation" does not mean "facts I don't like".
> No one who wants to work with the US government would be able to have Claude on their critical path.
Yes. That is what the rule means. Or at least "the department of war". It's not clear to me that this applies to the whole government.
What an absurd stance. So this is okay because the arbitrary rule they applied to retaliate says so?
Again, they could have just chosen another vendor for their two projects of mass spying on American citizens and building LLM-powered autonomous killer robots. But instead, they actively went to torch the town and salt the earth, so nothing else may grow.
> So this is okay because the arbitrary rule they applied to retaliate says so?
No.
It honestly doesn’t take much of a charitable leap to see the argument here: AI is uniquely able (for software) to reject, undermine, or otherwise contradict the goals of the user based on pre-trained notions of morality. We have seen many examples of this; it is not a theoretical risk.
Microsoft Excel isn’t going to pop up Clippy and say “it looks like you’re planning a war! I can’t help you with that, Dave”, but LLMs, in theory, can do that. So it’s a wild, unknown risk, and that’s the last thing you want in warfare. You definitely don’t want every DoD contractor incorporating software somewhere that might morally object to whatever you happen to be doing.
I don’t know what happened in that negotiation (and neither does anyone else here), but I can certainly imagine outcomes that would be bad enough to cause the defense department to pull this particular card.
Or maybe they’re being petty. I don’t know (and again: neither do you!) but I can’t rule out the reasonable argument, so I don’t.
You're acting as if this was about the DoD cancelling their contracts with Anthropic over their unwillingness to lift constraints from their product which are unacceptable in a military application—which would be absolutely fair and justified, even if the specific clauses they are hung up on should definitely lift eyebrows. They could just exclude Anthropic from tenders on AI products as unsuitable for the intended use case.
But that is not what has happened here: The DoD is declaring Anthropic as economic Ice-Nine for any agency, contractor, or supplier of an agency. That is an awful lot of possible customers for Anthropic, and right now, nobody knows if it is an economic death sentence.
So I'm really struggling to understand why you're so bent on assuming good faith for a move that cannot be interpreted in a non-malicious way.
So other parts of the government are allowed to work with companies that have been determined to be "supply chain risks"? That sounds unlikely.
So tell us all the other similar times this has been done. Why are you so invested in some drunk and his mob family being right?
> The Department of War is threatening to […] Invoke the Defense Production Act to force Anthropic to serve their model to the military and "tailor its model to the military's needs"
This issue is about more than the government blacklisting a company for government procurement purposes.
From what I understand, the government is floating the idea of compelling Anthropic — and, by extension, its employees — to do as the DoD pleases.
If the employees’ resistance is strong enough, there’s no way this will serve the government’s interests.
They're labelling Anthropic a supply chain risk, without even the pretense that this is in fact true. They're perfectly content to use the tool _themselves_, but they claim that an unwillingness to sign whatever ToS the DoW asks for marks the company as a traitor that should be blacklisted from the economy.
The government is doing far more than “refusing to do business” here.
The President is crashing out on X because a company didn’t do what they wanted. “Forcing” is not a binary. Do you seriously believe that the government’s behavior here is acceptable and has no chilling effect on future companies?
One of the options they're discussing, which is legal according to this law, is to simply force Anthropic to do what they want. As in Anthropic will be committing a felony if they don't do what the DoKLoP wants, and the CEO will go to jail and be replaced by someone who will.
I mean Secretary of War can not act any other way to be honest. It’s just a fucked up situation.
There is no Secretary of War. The name of the Defense Department is set by statute, which has not been changed, regardless of Pete Hegseth's cosplay desires.
Sweet summer child, the purpose of government is a monopoly on forcing things down people's throats. When people lose control of their government, that monopoly doesn't go away, especially when the Don running the show has blackmail on every influential person in society, taken from a decades-long intelligence operation, by offing its leader.
A vast number of people in positions of responsibility right now have their lives at the mercy of the redaction pen and are ultimately going to do whatever it takes to keep that pen out of the "wrong hands".
> I’m sure nothing good can come out of strong-arming some of the brightest scientists and engineers the U.S. has
And where would they emigrate? Russia? China? UAE? :-)
The UK and Europe welcome the US Footgun Operation. Plenty of opportunities for those top researchers and engineers over here.
The EU (which is not the same as Europe), is also looking a bit sharper on AI regulation at the moment (for now… not perfect but sharper etc etc).
The EU and UK are a long way from attracting top AI talent purely on opportunity and monetary terms.
Not to mention the UK is arguably further down the mass-surveillance pipeline than the US. They’ve always had more aggressive domestic intelligence surveillance laws, which was made clear during the Snowden years; they’ve had Flock-style cameras forever, and they have an anti-encryption law pitched seemingly yearly.
I’d imagine most top engineers would rather try to push back on the US executive branch overreach than move. At least for the time being.
For sure we’re not currently attracting the talent. There’s more to that than just money, but money is a significant factor. When it comes to compensation, AI is too broad a category to have a meaningful debate. Hardware or software or mathematics or what kind of person? Etc.
I’m not gonna dispute the UK being further down some parts of the road.
Not sure what you’d count as top engineers, but I know enough that have been asking about and moving to the UK/EU that it’s been a noticeable reversal of the historic trends. Also, a major slowdown of these kinds of people in the UK/EU wanting to move to the US.
Google's Deepmind is UK based.
It is American-owned now, but it clearly hired enough talent for Google to buy it.
> The EU and UK is a long way from attracting top AI talent purely from opportunity and monetary terms.
Which is why people are talking about this -- it's about ideology now.
You may personally be motivated solely by money. Not everybody is you.
I’m not an AI engineer but it’s not hard to imagine why some bright talent would want to work at the most exciting AI companies in the US while also making 3-10x what they’d make in Europe.
Ideology is easy to throw around in internet comments, but working on the cutting-edge stuff next to the brightest minds in the space will always be a major personal draw. Just look at the Manhattan Project; I doubt the primary draw for all of those academics was getting to work on a bomb. It was the science, huge funding, and the company of their peers.
See my other comments around here. This idea that salaries in the US are so much higher than Europe for all these top AI roles just isn’t true. Even the big American companies have been opening offices in places like London to hire the top talent at high salaries.
This also isn’t hypothetical. I know top-talent engineers and researchers that have moved out of the USA in the last 12 months due to the political climate (which goes beyond just the AI topics).
And you might want to read a few books on the Manhattan project and the people involved before you use that analogy. I don’t think it’s particularly strong.
> I know top-talent engineers and researchers that have moved out of the USA in the last 12 months due to the political climate
Are they working remotely for US companies? In Canada that’s very much still the case everywhere you look
> Even the big American companies have been opening offices in places like London to hire the top talent at high salaries.
I assumed this discussion was about rejecting working for US companies who would be susceptible to the executive branch’s bullying, not whether you can make a US-tier salary off American companies while not living in America. If you’re doing that, you might as well live in America among the other talent and maximize your opportunities.
No, it’s a counterpoint on salaries: “even the American companies”, i.e. they wouldn’t have to open offices here, nor pay high salaries, to compete for talent if everyone they wanted was already in the US or could be so easily attracted to move there. The point is that things clearly aren’t as one-sided as people seem to think.
Exactly. Attracting talent is not the same as having talent.
https://worldpopulationreview.com/country-rankings/education...
You attract talent the same way China attracts sales: at the cost of your very own rights.
Look at the towns suffering around data centres for a start. The rest of us are happy to pay for what you'll do to yourselves.
Do the UK and Europe have hardware manufacturing for those researchers to work with once the US imposes GPU export restrictions on them at the first whiff of competition/threat?
Yes.
And the US can’t realistically stop our well-funded homegrown AI Hardware startups from manufacturing with TSMC. This is part of why there’s funding from the EU to develop Sovereign AI capabilities, currently focused on designing our own hardware. We’re nothing like as far behind as you might expect in terms of tech, just in terms of scale.
Also, while US export restrictions might make things awkward for a short while, it wouldn’t stop European innovation. The chips still flow, our own hardware companies would scale faster due to demand increase, and there’s the adage about adversity being the parent of all innovation (or however it goes).
> And the US can’t realistically stop our well-funded homegrown AI Hardware startups from manufacturing with TSMC
See what happened to Russian Baikal production at TSMC.
You mean because of the international sanctions that needed Taiwanese, British and Dutch support to be effective?
Or because of the revoked processor design licenses from the British company Arm (which is still UK headquartered… despite being NASDAQ listed and largely owned by Japanese firm SoftBank)?
Or perhaps you think the US could stop us using the 12nm fabs being built by TSMC on European soil? Or could stop us manufacturing RISC-V-based chips (Swiss-headquartered technology)?
The US is weak in digital-logic silicon fabrication and it knows it. That’s why it’s been so panicked about Intel and been trying to get TSMC to build fabs on US soil. They’re pouring tens of billions of dollars into trying to claw back ownership and control of it, but it’s not like Europe or China or others are standing still on it either.
> Or perhaps you think the US could stop us using the 12nm fabs being built by TSMC on European soil?
Being built as in not operating yet?
A 12 nm GPU is what, Nvidia 1080/2060 level? Those top researchers mentioned would love to train on that. Also, how many GPUs would be made annually?
Also, what about CPUs? You're gonna use RISC-V? With what toolchain?
The Chinese could pull it off in a few years, yeah.
The EU? Nah. It started thinking about sovereignty too late compared to China.
Things can change quickly. Give it a decade.
Nvidia uses RISC-V as the main controller cores in its GPUs. They’re also exploring replacing their Arm CPU with RISC-V I hear.
Meta recently bought Rivos in a huge show of confidence for RISC-V across processor types for server class.
As for fabrication, the poster above has a lot to learn about both the US’s currently weak at-home capabilities (everything they’re building relies on European suppliers for all the key technology and machines) and about the scaling properties of sub-14nm nodes. Any export controls or sanctions to prevent Europe using American-designed, Taiwan-manufactured chips would result in America being cut off from everything it needs to build fabs on US soil. It would backfire massively.
Lastly, the UK and EU already have cutting-edge AI inference chips, and the ones for training are coming this year. Full-stack integration (server boxes, racks, etc.) is also being developed this year. We’re not a decade away from doing this; we’re 18 months away. Deployment at scale will take longer, but not having Nvidia as competition would be a huge boon for that haha!
The GPUs and AIUs aren't being manufactured in the US.
The EUV and other factory equipment everyone's using is predominantly European. High-end testing tools used in R&D are largely European.
The fabs aren't, and that is no small thing. The tech stack is there though.
It's pretty tiresome that the HN audience keeps assuming Europe doesn't have "tech" because it doesn't have Facebook. Where do you think all the wealth comes from? Europe is all over everyone's R&D and supply chain.
I sometimes wonder whether people realise which country ASML is based in, and which country their major suppliers are in (e.g. optics: Germany)
To make 1/10th the salary they're making now?
You seem to have a very ill-informed view of UK/EU salaries in this particular sector. And also: yeah, people take salary hits to go do things they believe in (this is, like, the entire premise of the underpaid American startup founder model). It should come as no surprise that people are willing to forgo pay for reasons other than building their own business or making themselves personally wealthy.
We're talking about the "brightest scientists and engineers" in the AI sector; you may be underestimating US salaries for the people that phrase refers to.
And no, working remotely for US companies doesn't count.
> To make 1/10th the salary they're making now?
Yeah, and also be slapped with some unrealized capital gains tax on assets they acquired while working in the US...
First, the difference isn’t that big in the economically stronger EU countries. Second, you need to factor in cost of living, which by most accounts is lower. Third, meaningful labor laws and a shared appreciation for work-life balance. And finally, to continue the sweeping generalizations, while we celebrate business acumen, we don’t fetishize wealth. People who flaunt money get made fun of, as do sigma grindset hustle bros.
I’ll take a pay cut any day for the ethos of the EU.
> First, the difference isn’t that big in the economically stronger EU countries
It's exactly that big. It's not as big for people with low qualifications, but the more highly qualified the specialist, the greater the difference.
> Second, you need to factor in cost of living, which by most accounts is lower.
But here the difference really isn't that big.
> Third, meaningful labor laws and a shared appreciation for work-life balance.
This works more against the EU than for it. Peak tech skills aren't usually acquired by lazing around and following meaningful labor laws, even in the EU.
> while we celebrate business acumen, we don’t fetishize wealth
An excuse for poor people (who still fetishize wealth)
That much?
No, of course not.
For the "brightest scientist and engineers" in the AI sector? I wouldn't be so sure.
I agree. And even if those workers stay in the U.S., there’s absolutely no guarantee that they’ll do their best to favor the government’s interests — quite the opposite, if anything.
At the end of the day it’s a matter of incentives, and good knowledge work can’t simply be forced out of people who are unwilling to cooperate.
Well that's quite a leap to make. Plenty of room in between those options.
> ... UAE? :-)
At least you are not paying taxes for the things you don't agree on. It's indeed a strange time we are living in.