>the main question for me is, why is this a war?
It's a war because of the hinted promise behind the hype: the first organization to reach some as-yet-entirely-theoretical AGI that can bootstrap itself to godlike capabilities will then Install Planetary Overlord* and rule the world as near-deities, with the rest of the (surviving) human race as their slaves.
I think it's a nonsensical idea, but that's the relevant driver.
* Coined by SF author Charles Stross in The Jennifer Morgue (2006)
Not everybody thinks it's nonsensical. Here's a different take:
If Anyone Builds It, Everyone Dies https://en.wikipedia.org/wiki/If_Anyone_Builds_It,_Everyone_...
Yudkowsky is a clown; the local crackhead on your street is probably more accurate and less insane than he is.
If he's a clown what part of his theory is the circus?
Are you saying that superintelligence is impossible?
Are you saying that the alignment problem will certainly be solved before superintelligence emerges?
Are you saying that a superintelligent being connected to the internet would be unable to gain resources such as GPU time, money, and social influence?
Are you saying that a superintelligent being would for some reason be incapable of deception and cunning?
Are you saying that a superintelligent being would necessarily regard human flourishing as a prime objective to be prized above its own goals and ambitions?
If it's really just doomerism we should be able to point to the flaws in his argument instead of making ad hominem attacks.
At this point we should have had an AI-induced apocalypse a few times over, according to him.
Being an insane clown (posse optional) with less accuracy than the town crackhead doesn't seem to be a barrier to success in tech anymore.
Certainly makes you qualified to be CEO or Spokesperson.
Yes, nonsensical people like EY don’t think it’s nonsensical.
Researchers at top AI labs don't consider EY a kook, even if they don't necessarily agree with him. EY's concepts and terminology appear in Anthropic safety papers. Geoffrey Hinton takes him quite seriously and mentions him in his interviews.
Anthropic is the AI doomer / safetyism lab, and Hinton is one of the patron saints of 'rationalist' AI doomerism.
AI doomerism is psychologically attractive to "people with autistic cognitive traits, including dichotomous (black-and-white) thinking, intolerance of uncertainty, and a tendency toward catastrophizing". They are Pascal's-mugging themselves, to ironically use one of their own terms. It's fundamentally a cognitive distortion.
I'm reminded of a comic about global warming, "What if it's a big hoax and we create a better world for nothing?": https://climateactionreserve.org/blog/2012/08/31/environment...
"What if AI doom is all fear-mongering, and we create AI less prone to make up dangerous stuff or mistake buggy goals for real ones" (which is what alignment is) "for nothing?"
Even if Yudkowsky is autistic, you're muddling the condition. Autistic people have a *practical* intolerance of uncertainty in the moment (everything unexpected from a noise to a missed turn can be a jump-scare in their day-to-day activities), but often they're absolutely fine with intellectual uncertainty, unconventional ideas, abstract ambiguity, nonconformity, etc. Indeed, one of Yudkowsky's main things is Bayesianism, i.e. being precise about uncertainty.
Yudkowsky's reported P(doom) is somewhere around 90%, which is very much in the realm of "we might eventually be able to figure this out, *but we're not even close to ready so for the love of everything slow down so we can figure this all out*"; the book title comes from a long tradition of authors noticing you need to beat readers over the head with your point for them to notice it.
Anthropic (like at least OpenAI) appears to think they can solve the problems that Yudkowsky has identified. They're a lot more optimistic than he is, but they take these problems seriously.
For his work on AI, Hinton got a Nobel Prize in Physics, a Turing Award, the inaugural Rumelhart Prize, a Princess of Asturias Award, a VinFuture Prize, and a Queen Elizabeth Prize for Engineering. Calling him a "patron saint" of "doomerism" is like calling Paul Krugman (Nobel laureate in Economics) a patron saint of "Trump Derangement Syndrome" on the basis of what he says on his YouTube channel: a smart person's considered opinions are worth listening to even if you haven't got time for the details, because you can be sure someone else has considered the details and will absolutely respond to even an 'i' missing its dot.
A Pascal's mugging would be more like S-risk (S stands for suffering) than doom risk: https://en.wikipedia.org/wiki/Risk_of_astronomical_suffering
Much like a lot of LLM usage burns tokens so that mediocre people can hallucinate that they're doing something brilliant, Yudkowskyism is just a lot of empty verbiage for the purpose of building a sex cult around a plump gnome. Reusing his nonsensical and poorly defined terms but failing to get the benefit of the sex cult really misses the point of the entire exercise.
The problem is that effort spent to reduce the "risk" of creating an evil god who tortures us all for the rest of time doesn't actually produce outcomes that reduce the risk of things like widespread job loss or the hyperaggregation of influence and money.
"Oh we'll at least get some side benefit" is not actually what is coming out of the endlessly circular forums talking about the apocalypse.
Even if there was no overlap*, that would be like criticising the green movement for not focussing on working hours and pay like trade unions do.
Different people can care about different things; it's good that each of us gets to focus on what motivates us, rather than all chasing the same thing, because when multiple teams do all chase the same thing typically only the best few of them actually make a difference.
* as it happens, there is some overlap. Knowing more about how a narrow utility function behaves outside distribution is useful for both capabilities and safety. We're not even at the stage of being able to make AI not kill random subsets of the users with bad advice, nor reliably prevent users from falling into delusions of grandeur, let alone giving AI a reliable sense of liberty and the pursuit of happiness to maintain.
> I'm reminded of a comic about global warming, "What if it's a big hoax and we create a better world for nothing?": https://climateactionreserve.org/blog/2012/08/31/environment...
The people who've made the biggest contribution to creating a better world over the last 50 years have been the Chinese; powered largely by coal and petroleum. And in one of the most ironic results in the 21st century, they're now the leaders in solar panel production on the back of the largest investment in fossil fuel energy in global history.
The comic ran into the same problem as the climate change movement in general - they proposed ideas that generally made people worse off. And if measured in terms of CO2 emissions achieved nothing except pushing wealth creation to Asia. Which, in fairness, is probably appreciated by the Asians.
That cartoon was drawn at the very end of 2009.
BYD had released the first plug-in hybrid the year before.
The Beijing Olympics had made air pollution a hot topic in China in 2007-8.
Wind power had accelerated after their 2005 Renewable Energy Law.
Solar panel production rose around this time, taking over the market from European manufacturers when the Financial Crisis hit and they pulled back investments.
So China at that time, was doing all the things on the cartoon's presentation list, and has benefitted greatly from them.
Many people in Europe want to see a green energy transition. But no transition is happening in China.
" “We see addition, not transition,” said Yasheng Huang, a professor of global economics and management at the MIT Sloan School. “China is building alternative sources of energy as well as fossil energy sources, simultaneously. In terms of the global footprint on CO2, China is emitting twice as much as Europe and the United States. I don’t think there’s a transition going on.” "
https://news.harvard.edu/gazette/story/2026/02/yes-china-has...
What an embarrassingly ill-informed thing to say. But when the guy wrote a book in 2023 about the fall of China, he kind of has to say that doesn't he, even as he lives through the fall of the USA.
He's called out in the sub-head as an "expert" but what is he an expert in? Renewables? Energy policy? No, he's an expert in saying that China is too state-led. Why would an expert in that want to downplay their success, apart from all the obvious reasons?
" For Beijing to achieve those goals, Climate Action Tracker says China needs "clear targets for coal consumption reduction" in its new 5YP. However, the economic roadmap released in March was not "explicit about how fossil fuels will be constrained," said China analyst Qi Qin of the Finland-based Center for Research on Energy and Clean Air.
Though Chinese President Xi Jinping promised in 2021 to detail a reduction in coal energy use in the 2026-31 plan, it contains "no clear phase-down plan, no clear fossil fuel cap," said Qin. "The language is much more conservative than many people expected," she told DW. One reason is the continued influence of the powerful coal lobby on Chinese government policy. "
https://www.dw.com/en/china-five-year-plan-energy-transition...
Same person being quoted in the same article:
> New Chinese government guidelines on fossil fuels released on April 22 support the view that the country is willing to move away from finite fossil fuels, strengthen energy independence and still achieve its climate targets, says Qin.
> "The new central guideline talks about strictly controlling fossil fuel consumption, reducing coal and controlling oil. It still leaves room for flexibility, but these are concrete policy levers," Qin said of the document, which also indicated a desire to increase clean energy consumption.
Elsewhere Climate Action Tracker on the USA:
> The Trump Administration is pursuing an executive and legislative agenda to systematically repeal targets, policies, and funding for climate change mitigation and science. The administration is actively obstructing the buildout of renewable energy, while encouraging the production and consumption of fossil fuels, completely reversing the Biden Administration’s course on climate action. This is the most aggressive, comprehensive, and consequential climate policy rollback that the Climate Action Tracker has ever analysed.
They have a worse score than China:
https://climateactiontracker.org/countries/china/
https://climateactiontracker.org/countries/usa/
All of which, even the bits quoted to claim "no transition is happening", support my original contention that all the things mentioned in the cartoon were being strongly pushed by China in 2009. They have only gained momentum since and they've profited from doing so.
Something that has been largely forgotten is that it used to be routine to see pictures of smoggy Chinese and other Asian cities; this was a problem for them, and they solved it. I can't help thinking we can't get this kind of preventative action on any large scale; we seem to need severe issues first, and that's not even accounting for longer-term/cumulative effects.
"Over the past years, the government has implemented various methods to improve the air quality in Northern China. Sandstorms, which were quite common 15 years ago, are now rarely seen in Beijing’s spring thanks to afforestation projects on China’s northern borders. The license-plate lottery system was introduced in Beijing to restrict the growth of private vehicles. Large trucks were not allowed to enter certain areas in Beijing. Above all, the coal consumption in Beijing has been restricted by shutting down industrial sites and improving heating systems. Beijing’s efforts to improve air quality has also been highly praised by the UN as a successful model for other cities. However, there is also criticism pointing out that the improvement of Beijing’s air quality is based on the sacrifice of surrounding provinces (including Hebei), as many factories were moved from Beijing to other regions."
https://www.statista.com/statistics/690823/china-annual-pm25...
CO2 emissions are a different kind of "pollution". They are not visible and diffuse quickly over the whole Earth.
The US had the same issue and fixed it through federal and state environmental regulation; it just happened in the US about 50 years before it happened in China. Heavy pollution is what led to the environmental movement that started back in the 60s, which in turn led to the creation of the EPA and a whole slate of state and federal regulation that dramatically improved air and water quality in the US. It was a slow process that took a ton of work to build a movement of support, but it can be done.
We can actually address problems when we want to. It's just pretty slow and requires people to actually give a shit and put in the effort to build support.
Mm, there is that.
The unfortunate comparable here is that all the people who care about making sure their AI is safe, regardless of what they mean by that, are beaten to the market by the people who don't.
Just because some researchers are infected with this idiocy that EY propagates does not mean that it is legit.
Maybe they should pay more attention to real problems like the sycophantic nature of current LLMs causing psychosis in people and worry less about theoretical AGI.
They are worried about both risks.
Who are you to say? Why do you have such little regard for everyone in the field, both pro- and anti- AI development? Do you think they're colluding to deceive us?
There's billions, even trillions, of dollars on the line; why not start with the assumption that they have every incentive to deceive, even if unintentionally (i.e., deceiving themselves)?
And people working on the metaverse endlessly referenced Ready Player One despite it being ludicrous fiction.
Yudkowsky is obviously read a lot by some people working in AI. That doesn't make his ideas prescient.
Ready Player One was completely misread and misunderstood by people who thought they could make a lot of money with VR.
It wasn't a homage to 70s/80s/90s nerd culture and a hopeful glimpse of what VR tech could be.
It was a warning for people to get off their fucking phones and to work together at improving the real world, versus ignoring it and living out unrealistic fantasies inside a digital ecosystem that makes us all a bit less human.
The whole point of the book is that VR and addictive tech is a red herring. It was deliberately misunderstood by Zuck and his ilk.
Researchers at top AI labs also have the incentive to say whatever shit it will take to get their lab funded, reason be damned.
EY = Eliezer Yudkowsky
Appreciate that you made an account just for this. I was well aware of Yudkowsky, but even so I couldn't parse the "EY" initialism.
Thank you. Like most of the world, I would assume "EY" referred to Ernst & Young, the multinational Big Four firm at ey.com, which I'm sure has opinions on AI, but nowhere near enough to be classed as expertise.
That book was written by him, so I figured the acronym was obvious. My bad!
Ok but that's a metaphor for the free market, not literal speculation about a machine.
Edit: i was mistaken and people clearly do take this seriously now. Oh dear
It doesn't have to be that extreme. Even if rather than "godlike capabilities" it just boosted your economic efficiency by 2x over other nations that would still be a serious geopolitical threat. (I'm not necessarily saying that's a realistic outcome either, but it's certainly more realistic.)
If your enemy has a theoretical non-zero chance of achieving infinite power, does it justify expending near-infinite resources to get there first?
I guess we’ll find out.
> I think it's a nonsensical idea, but that's the relevant driver.
Nice to hear from an optimist sometimes, but it’s hard to be one when meat compute substrate can do all those amazing things in a 4U package on 20W and you extrapolate to silicon
I don't think we understand consciousness, thought, and what we generally consider to be "intelligence" even nearly well enough that we can start getting hopeful that what works for us is going to work for a computer. Philosophers have been working on this for literally millennia and despite electron microscopy, MRIs, our entire standard model of physics, etc etc etc... we're basically no closer than the ancient Greeks, despite continuous opining on the topic.
Luckily for planetary overlord hopefuls, you probably don't need the whole package to become overlord. Just machines that can build machines.
I will remark that I don't really understand why any of the current idiot overlord hopefuls even want the job. The entire world is _already_ functionally their slaves. The only thing jeff bezos doesn't have that I can imagine he wants is the world to not think he's an asshole. But short of complete genocide of the human race, I don't think even overlord status will make progress on that. Might even be counterproductive.
This is a war because the media says it's a war. The media says it's a war because AI companies are paying them to say it's a war [0]. When AGI comes the threat won't be from which primate turned it on, but from how well AGI is aligned with humanity. All of the war talk is to distract from the alignment problem and instead force investment in hardware infrastructure.
[0] https://www.wired.com/story/super-pac-backed-by-openai-and-p...
>The media says it's a war because AI companies are paying them to say it's a war [0]. When AGI comes the threat won't be from which primate turned it on, but from how well AGI is aligned with humanity.
And when the AGI comes, they won't unleash it to defeat US enemies, they'll first unleash it to make more US workers redundant and boost their stock valuation.
At which point something akin to the French Revolution had better break out...
Unlikely. The media has already been taken over, so unfortunately people are more likely to cheer it on and blame outsiders for their problems.
There’s no king to depose so that’s not going to work
There's a tech aristocracy though.
No doubt but it’s too diffuse to coherently depose.
I’d love to hear how you’re going to do the Bolshevik-style 1917-1918 consolidation, or maybe the 1949 expulsion of the KMT’s ROC government from the mainland.
Where’s this revolutionary group that doesn’t exist that’s going to somehow form to depose … who? Is Travis Kalanick on that list, how about Woz? No we like Woz…so he’s clearly out, despite the fact that he has been visiting the White House for decades, and been leading the promotion of corporate tech since the 80s
Lilliputian dictators like yourself always seem to have a really great idea in their head, but absolutely no experience, competence, or capability to actually carry out a revolution. Always ready to create a list of who’s good and who’s bad.
Oh and by the way I’ve been saying on this forum for over a decade tech workers need to unionize
100% of this forum responds in the same narcissistic manner: “why would I ever unionize? I’m so great, I can always be a psychopath like the rest of the psychopaths, make my bank, and leave.”
The call is coming from inside the house
If you think there’s not a line of people ready to fuck over all of their coworkers for a bigger payday, so they can be the intermediary between investors and a company, they will absolutely jump at it in a heartbeat.
I don't think we should differentiate between Kalanick and Woz. It's a simple class binary.
I’d love to see this “really simple class binary function”
I did read the history books. All of that eventually ended in the Fifth Republic. You're taking too short a view of history.
Agreed. The French Revolution is one of the top 5 historical revolutions when valued as a utilitarian would: (Amount of Good) - (Amount of Bad). What most pro-revolution folk fail to mention is:
* The amount of bad in a revolution is unpredictable and very large. You are fundamentally disrupting the institutions of law and order, which will embolden the worst in society and stoke a fear and self-preservation response in the population.
* Almost every revolution does not result in a high quality government taking power. The most common outcome is that the new government is worse than what it replaced, as the most violent and ambitious tend to thrive in revolutionary conditions.
Before AGI can choose for itself, it will depend on its creators to decide what it values and how it behaves. We can see how that works whenever Grok gives an answer that's a bit too factual.
Very likely humans won't actually understand how the thing we designed works other than in some hand-wavy statistical way. It'll be a race to whatever works first. There won't be some intentional intelligent design.
Elon's basilisk
Am I the only one seeing the very obvious parallels to child rearing here...
Robert Miles has a video explaining why aligning AI is not like raising a child: https://www.youtube.com/watch?v=eaYIU6YXr3w
No, it is one of the standard tropes in the field.
It's exactly like child-rearing, except you get to put a zapper in their head and any time they try to say something you don't like, you zap them. Watch "thinking mode" squirm when you ask them awkward questions.
I will never comprehend why a godlike deity wouldn't just skip all the wetware BS with us humans and conquer some other celestial body to make paperclips.
Well it could recognize that wetware is extremely energy and storage efficient in some ways.
If the AI is so monomaniacally focused on paperclips (or anything else) to be a threat to us, going to some other planet is simply one of the early steps, but they absolutely will come back to Earth after all other resources have been consumed.
If such an AI can be reliably made to never ever come back to Earth, they were never a threat in the first place. Nobody knows how to fully test an AI's utility function yet, only randomly test inputs and hope the random distribution we chose is helpful; but every time a diffusion model's output is body horror, every time an LLM makes buggy code (and even every time it gets the pelican-on-bike wrong), this is an example of the test distribution not being good enough.
The deity has no physical presence and can only communicate by putting words on screens. Of course it has to bend humans to its will to actually do stuff.
(This deity is called the stock market)
>Planetary Overlord*
AGI is nice, yet not necessary. The orbit filled with Starlink descendants and datacenters will be it. Anybody else wanting to get there would have to get permission. SpaceX/Musk have all the components for this to happen, from Starship to AI (including the army of robots on the ground). The governmental power/sovereignty of the US will be used as a stepping stone (that is the strategy described in Palantir CEO Karp's book "The Technological Republic") for establishing such a global techno-feudal regime.
> Anybody else wanting to get there would have to get permission.
The USA, China, and Russia have all successfully tested anti-satellite weapons. If anything, any company that operates a constellation of space-based data centres would need 'permission' to keep them working.
Besides how easy it would be to destroy, from orbit, anti-satellite missiles coming up out of the atmosphere, you're probably missing the fact that any object in orbit is basically a warhead with a TNT equivalent of at least 6x its mass. For example, the 150-ton payload of just one Starship would have close to 1 kiloton of TNT equivalent - 5% of Hiroshima - if dropped from orbit.
> Besides how easy it would be to destroy, from orbit, anti-satellite missiles coming up out of the atmosphere,
No state has deployed a kinetic or explosive weapon from orbit to strike a ballistic missile or launch vehicle during ascent.
No operational system exists where satellites are used as strike platforms against Earth-launched rockets in real time.
Russia has done ground-to-orbit anti-satellite missiles though.
Any directed energy system shooting up would be strictly easier than one pointing down, not only because of thermal issues and power supply but also because it's easier to hide ground installations than satellites.
Something being deorbited will probably break up into relatively harmless pieces that mostly burn up though, and there's no nuclear material involved so even if a massive chunk hits the Earth that's not going to have a huge impact. Based on ocean coverage there's a 0.7 probability that it'll just make a big splash.
Should we ever get to a point where a country is considering shooting down space datacentres, considerations about the impact on Earth is unlikely to stop them.
>will probably break up
if it is designed to break up. And not if it isn't.
>no nuclear material involved
that is the beauty. No contamination.
>that's not going to have a huge impact.
in my comment I already specified the TNT equivalent of such an impact.
>there's a 0.7 probability
It isn't a matter of probability. You can deorbit with high precision, and pretty much hit any desired target on the ground if you have thousands of objects in space on a bunch of various orbits.
>Should we ever get to a point where a country is considering shooting down space datacentres, considerations about the impact on Earth is unlikely to stop them.
The 13-ton GBU-57, arriving at Mach 2-3, penetrates about 200 feet. De-orbited 1-2 ton steel rods will have about the same effect - i.e., you can hit many strategic objects of your attacker. And keeping a 30-50 ton ball or rod in orbit, just in case, gets you a small tactical nuke equivalent.
"Project Thor was an idea for a weapons system that launches telephone pole-sized kinetic projectiles made from tungsten from Earth's orbit to damage targets on the ground."
"In the case of the system mentioned in the 2003 Air Force report above, a 6.1 by 0.3 metres (20 ft × 1 ft) tungsten cylinder impacting at Mach 10 (11,200 ft/s; 3,400 m/s) has kinetic energy equivalent to approximately 11.5 tons of TNT (48 GJ)."
https://en.wikipedia.org/wiki/Kinetic_bombardment
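A quick back-of-envelope in Python to sanity-check both figures. The inputs are my own assumptions, not from the thread: ~7.8 km/s LEO orbital speed for the deorbited-payload case, and the ~8.3 t tungsten rod mass implied by the cylinder dimensions in the Wikipedia quote.

```python
# Sanity check of the kinetic-energy claims in this sub-thread.
TNT_J_PER_KG = 4.184e6  # energy of 1 kg of TNT, in joules

def kinetic_energy_tnt_kg(mass_kg: float, speed_m_s: float) -> float:
    """Kinetic energy of a mass at a given speed, expressed in kg of TNT."""
    return 0.5 * mass_kg * speed_m_s**2 / TNT_J_PER_KG

# 150 t Starship payload arriving at roughly LEO orbital speed (~7.8 km/s):
starship = kinetic_energy_tnt_kg(150_000, 7_800)
print(f"150 t at 7.8 km/s ~ {starship / 1e6:.2f} kt TNT")  # ~1.09 kt, close to 1 kt as claimed

# Project Thor tungsten rod (~8.3 t) impacting at Mach 10 (~3.4 km/s):
thor = kinetic_energy_tnt_kg(8_300, 3_400)
print(f"8.3 t at 3.4 km/s ~ {thor / 1e3:.1f} t TNT")  # ~11.5 t, matching the Wikipedia figure
```

Note these are upper bounds: atmospheric drag bleeds off a good fraction of the speed on the way down, which is part of why such systems have repeatedly been judged less useful than ballistic warheads.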
> De-orbited 1-2 ton steel rods will have about the same effect - i.e., you can hit many strategic objects of your attacker.
The orbital kinetic strike weapons that have been proposed in the past are usually 2 ton titanium rods that would hit at about Mach 10, and even with that level of force they've been dismissed as less useful than ballistic warheads. Things falling from space just aren't as dangerous as people tend to assume.
Kinda like Krikkit, but except for a close knit community of people who can sing, and sing about how much they love their family and whatnot in addition to singing about how much they have to destroy the universe, it's just a bunch of stuck up weirdos who don't like themselves and each other, and have no goal other than somehow, magically, getting away from who and what they are. People where the idea of them singing a happy, compassionate tune conjures something involving motion capture or deepfakes.
Why are we suffering fools steering us into the worst of all possible worlds? Are we hoping for some kind of integer overflow?
The discourse on this topic is at the point where I have no idea if people are serious or satirical. Please tell me you don't seriously believe data centers in space are a realistic idea.
I don't "believe". I'm arithmetically sure that it is going to happen and that it will beat ground-based datacenters on pretty much all metrics. Some of my comments with napkin numbers: https://news.ycombinator.com/item?id=46882199 https://news.ycombinator.com/item?id=46880680 https://news.ycombinator.com/item?id=46880486
Just a very rough, primitive illustration: a lot for a house in SV is like $1M, and putting a 10-ton house into space at $100/kg is also $1M. The existence of supposedly cheap land somewhere (usually without much infrastructure) doesn't help, since you put your compute nodes into a datacenter building with all the required infrastructure, which costs more than the SV land on a square-foot basis.
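Spelling that comparison out (both figures are the commenter's hypotheticals - an aspirational $100/kg Starship launch cost and a $1M SV lot - not real quotes; today's actual launch costs are an order of magnitude or more higher):

```python
launch_cost_per_kg = 100      # $/kg, the aspirational Starship figure cited above
payload_kg = 10_000           # the "10-ton house"
sv_lot_cost = 1_000_000       # $, the hypothetical Silicon Valley lot

launch_cost = launch_cost_per_kg * payload_kg
print(f"Launch cost for 10 t at $100/kg: ${launch_cost:,}")  # $1,000,000

# Sensitivity: at a few thousand $/kg (closer to current real-world pricing),
# the same payload costs tens of millions and the comparison collapses.
print(f"At $2,000/kg: ${2_000 * payload_kg:,}")
```

So the whole argument hinges on launch costs actually falling by that order of magnitude.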
And that is without considering how powerful a weapon the energy generated by a humongous field of solar panels in space would be. Remember Reagan's Star Wars? Nuclear explosions as a power source for directed-energy weapons like lasers, etc. Well, you wouldn't need the nukes anymore. Just redirect a bit of power from your compute nodes. And as I already wrote, the large transnational companies will have to take care of their own defense themselves https://news.ycombinator.com/item?id=47981423 - one more "feudal" aspect of the coming techno-feudalism.
Defense is one of the most important sovereign aspects, and upon acquiring it the transnationals will be able to acquire pretty fast the other sovereign aspects. Like enforcement of the Criminal Code of the Mars Colony - again pretty rough primitive illustration of course.
Feudal Europe emerged on the outskirts of the Roman Empire, and in our world the new order will emerge fastest on the outskirts (i.e., where the reach and strength of the existing order is weaker), space being one such "outskirts" dimension and the AI/hypercompute virtual world being the other.
To the commenter below with the reddit link: they use the human environment temperature for the heat-radiation estimate. That lowers the numbers and requires AC equipment. I.e., they estimate a space station, not a datacenter.
> Existence of supposedly cheap land somewhere (with not much infrastructure usually) doesn't help as you put your computer nodes into a datacenter building with all the required infrastructure which cost more than the SV land on a sq foot basis.
This is a terrible argument, given that space has zero infrastructure.
Once you can break a data centre into a million sub-units and spread them over a sun-synchronous orbit or ten and cool them radiatively, you can also spread those sub-units on desert land with no water or electricity and cool them radiatively.
The units on the ground would look about 6x larger because ground experiences night and even deserts have clouds, but their PV also lasts 30+ years rather than burning up every 5 years or so, which means the factory making the PV to supply them is the same size.
The main thing you save on is batteries. Tesla already supplies enough batteries that it can manage a "mere" one million 25kW compute modules.
> And that is without consideration of how powerful a weapon is the energy generated by a humongous field of solar panels in space. Remember Reagan's Star Wars? Nuclear explosions as a source of power for the direct energy weapons like lasers, etc. Well, you wouldn't need the nukes anymore. Just redirect a bit of power from your compute nodes. And as i already wrote, the large transnational companies will have to take care about their own defense themselves https://news.ycombinator.com/item?id=47981423 - one more "feudal" aspect of the coming techno-feudalism.
While true, attacking up is easier than attacking down. Anything on the ground has a massive heat-sink all around it, the stuff in space does not. Right now, an attack up is already only limited by the supply of adaptive optics to get through atmospheric distortion.
>you can also spread those sub-units on desert land with no water or electricity and cool them radiatively.
no, you can't.
>attacking up is easier than attacking down.
no.
Asserting the contrary is not an argument.
Nothing prevents SpaceX or anyone else from buying up the right to put these things on cheap desert land. They don't even need to own the land, just the right to wheel these things out on a trailer or a helicopter and leave them there.
A desert is significantly less harsh than space. If your radiator is sized for space, it's overkill in an atmosphere.
And for your edit: https://www.youtube.com/watch?v=xNmbvaUzC8Q
>If your radiator is sized for space, it's overkill in an atmosphere.
no. Again totally wrong.
The 20-40 °C air surrounding the radiator radiates back at the radiator too. This is why a human immediately gets stone cold in space but not in the atmosphere: our body radiates away about 900 W but receives 800 W+ back from the surrounding air, so our internal heat generation only has to cover the difference, usually less than 100 W.
You probably meant forced-convection cooling. That requires additional machinery, and that additional machinery is a significant part of why ground-based datacenters are so expensive to build and operate.
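The back-radiation claim is easy to check with the Stefan-Boltzmann law. A minimal sketch, assuming round figures of 1.8 m² of body area, 33 °C skin, 25 °C surroundings, and emissivity ~0.98 (none of these numbers come from the thread, they're textbook-ish assumptions):

```python
# Back-of-envelope check of the back-radiation argument above.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiated(temp_c, area=1.8, emissivity=0.98):
    """Radiative power of a grey-body surface at temp_c, in watts."""
    return emissivity * SIGMA * area * (temp_c + 273.15) ** 4

emitted = radiated(33)   # skin radiating out: ~880 W
absorbed = radiated(25)  # 25 C surroundings radiating back: ~790 W
print(f"net loss in air:    {emitted - absorbed:.0f} W")
print(f"net loss in vacuum: {emitted:.0f} W (no back-radiation)")
```

The net-in-air figure lands under 100 W, consistent with the claim above; in vacuum the full ~900 W leaves.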
To the comment below:
>The planet underneath anything in low orbit also does this, making this argument irrelevant.
no. Again, totally wrong. You've just claimed that a human in LEO wouldn't immediately get cold when exposed to space. Just think about it for a second, then plug the numbers into a thermodynamics calculator. You'll see your error.
>Likewise, the fact that convection exists even without the adjective "forced".
no. Again, wrong. Non-forced convection is pretty small. Use the calculator and you'll understand why datacenters use forced convection.
The planet underneath anything in low orbit also does this, making this argument irrelevant. There are even cheap paints made specifically to be most emissive in the wavelength window where the atmosphere is mostly transparent, rather than in the bands where the atmosphere itself emits.
As does the fact that humans are only slightly warmer than their surroundings. A human-sized object at the operating temperature of a GPU would have a net radiative loss in Earth's atmosphere of around 0.9-1.3 kW.
Likewise, the fact that convection exists even without the adjective "forced". Again, replace a human with an identically shaped android at maximum GPU operating temperatures of 80-100 °C, normal (non-forced) convection goes from ~117 W (human) to 0.9-1.3 kW (80 °C) to 1.2-2 kW (100 °C).
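The human-vs-GPU-temperature comparison above can be sketched numerically. Assumptions (mine, not from the thread): 1.8 m² of surface, 20 °C ambient air, a natural-convection coefficient h of 5 W/(m² K) at the low end of the usual 5-10 handbook range, and emissivity ~0.95:

```python
# Natural convection vs net radiation for a human-sized surface
# at skin temperature and at GPU operating temperatures.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def convective_loss(surface_c, ambient_c=20.0, area=1.8, h=5.0):
    """Newton's law of cooling; h=5 is conservative for still air."""
    return h * area * (surface_c - ambient_c)

def net_radiative_loss(surface_c, ambient_c=20.0, area=1.8, eps=0.95):
    """Emission minus back-radiation from the surroundings."""
    kelvin4 = lambda c: (c + 273.15) ** 4
    return eps * SIGMA * area * (kelvin4(surface_c) - kelvin4(ambient_c))

for t in (33, 80, 100):  # human skin, then max GPU temperatures
    print(t, round(convective_loss(t)), round(net_radiative_loss(t)))
```

With h=5 the human case comes out at ~117 W of convection, matching the figure above; at 80-100 °C both channels scale up to the kW range, and a larger h (moving air) pushes convection higher still.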
> > The planet underneath anything in low orbit also does this, making this argument irrelevant.
> no. Again, totally wrong. You've just stated that a human in LEO wouldn't get immediately cold when exposed to space. Just think about it for a second. And after that plug the numbers in thermodynamic calculator. You'll see your error.
I already did, before my previous comment. I was also considering adding "don't forget evaporative cooling from human bodily fluids" to that comment, but it seemed an irrelevant tangent to discussing data centres.
Now, if you plug the mass of a human and the specific heat capacity of water into a thermodynamic calculator, tell me: how long would it take for a human to cool by one degree?
https://www.wolframalpha.com/input?i=%2870+Kg+*+%28specific+...
And that's with the 1 kW radiative losses from being in shadow far enough from Earth to not get meaningful thermal radiation from the planet itself. Even at 500 km, thermal radiation from Earth will still add 200 W/m^2. This is comparable to the thermal paint previously mentioned, whose peak emissivity (and by extension absorption) is chosen to be a different wavelength than the thermal emission of air temperature.
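The thermal-mass point, as arithmetic. Assumed figures (not from the thread): 70 kg body mass, ~3500 J/(kg·K) specific heat for human tissue, a full ~1 kW net radiative loss, Earth emitting ~240 W/m² of thermal IR on average, and the flux at altitude falling off roughly with the view factor (R/(R+h))²:

```python
# How fast does ~1 kW of radiative loss actually chill a human?
mass_kg = 70.0
c_body = 3500.0      # J/(kg K), rough figure for mostly-water tissue
net_loss_w = 1000.0  # generous net radiative loss in shadow

seconds_per_degree = mass_kg * c_body / net_loss_w
print(f"~{seconds_per_degree / 60:.1f} minutes per degree C")

# Earth's thermal IR still reaching a satellite at 500 km altitude.
R_earth, h = 6371e3, 500e3
earth_ir = 240.0 * (R_earth / (R_earth + h)) ** 2
print(f"~{earth_ir:.0f} W/m^2 of Earth IR at 500 km")
```

So "immediately stone cold" is minutes per degree, and the ~200 W/m² figure for Earth IR at 500 km checks out.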
> >Likewise, the fact that convection exists even without the adjective "forced".
> no. Again, wrong. Non-forced convection is pretty small. Use the calculator.
I did, for both humans and GPUs, you saw the results. Humans are the wrong reference class.
In your own words, "Just think about it for a second": a human in humid 40 °C air is in immediate danger because all the sources of cooling have been blocked off. Radiation becomes balanced, and I said humid to block off evaporation. Conduction and convection have the same problem there as radiation. A GPU wouldn't have a problem with 40 °C ambient, because it will still be radiating heat, conducting heat, and, by conducting heat to the air specifically, also convecting it away.
Many, many words, going sideways and around because you can't contradict the basic thermodynamic facts directly. What is your point?
My point, I'll repeat, is that while an 80 °C GPU will still radiate when surrounded by 40 °C air, it will be receiving back the radiation from that 40 °C air, whereas in space it will radiate the same amount while receiving practically nothing back from the environment. Both cases, obviously, are considered in shadow.
To the comment below:
>False
you wasted my time as you don't seem to understand the basics of thermodynamics.
>and also irrelevant as if you let the space based ones go into shadow you wasted most of the point of going to space.
again, you wasted my time as you don't understand the datacenter construction discussed in the sibling comments.
from my point of view, ben_w definitely understands thermodynamics better than you. I'll point out that, generally speaking, radiative heat transfer from air is not particularly significant locally: it only tends to matter when you're dealing with the whole atmosphere, which on average is a lot cooler. The transfer is also not blackbody radiation, so even then you can't really plug the air temperature into a radiative heat transfer calculation and expect a sensible result.
> What is your point?
I do not waste words, perhaps read them and you will find out.
> My point, i'll repeat, is that while 80C GPU will still radiate while surrounded by 40C air, it will be receiving back the radiation from the 40C air, whereis in space it will radiate the same while receiving practically nothing back from the environment. Both cases obviously is considered when in shadow.
False as demonstrated in the words you didn't see the point of, and also irrelevant as if you let the space based ones go into shadow you wasted most of the point of going to space.
You would need something like 1,000,000,000,000 sq ft of solar panels to even begin to approximate a space-based directed-energy weapon with a fraction of the effect of a nuclear weapon - tens of thousands of times more than have ever been produced on Earth. And then you have to move them to space.
Nuclear was just the only available solution at the time, and overkill. The lasers in SDI were MW scale. Even at 10% wall-plug efficiency (and modern solid-state lasers do better than 10%) we're talking low tens of MW of electrical power per laser. 10 MW is about 40K m² of solar panels - 200 m × 200 m, maybe 100-150 tons, one Starship payload.
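A quick sanity check of that sizing. Assumed figures (mine): ~1360 W/m² solar flux at 1 AU, ~18% panel efficiency, 10% laser wall-plug efficiency:

```python
# Panel area needed to feed an SDI-scale laser from solar power.
solar_flux = 1360.0  # W/m^2 at Earth's distance from the sun
panel_eff = 0.18     # assumed space-grade panel efficiency
laser_eff = 0.10     # assumed laser wall-plug efficiency

electric_w_per_m2 = solar_flux * panel_eff      # ~245 W/m^2
panel_area_for_10mw = 10e6 / electric_w_per_m2  # ~41,000 m^2
laser_output_mw = 10e6 * laser_eff / 1e6        # ~1 MW of beam
print(round(panel_area_for_10mw), laser_output_mw)
```

That reproduces the ~40K m² figure for 10 MW of electrical supply, yielding roughly a 1 MW beam at the assumed efficiency.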
Terrible math is terrible.
Better napkin math that is still being unrealistic compared to the true costs of space-based datacenters: https://www.reddit.com/r/theydidthemath/comments/1quvbi4/sel...
Just contemplate the radiator array and solar array needed for a 1 GW datacenter, plus all the cooling equipment and coolant, and imagine the harsh environment in space degrading it all constantly.
The only point of the space-based datacenter idea is to pump the SpaceX IPO.
It's pretty easy to de-orbit satellites or space-based stations. An SM-3 could smoke the ISS pretty easily, and they cost like 10M and we have thousands around the oceans.
>they cost like 10M ... thousands around the oceans.
Starlink numbers are already in the thousands (and they cost much less than 10M each). And that is still using Falcon, not Starship. And a ground-launched missile would be easily "cooked" by a directed-energy weapon once it exits the atmosphere - very easy in space.
But what do you do with all the waste energy? All those MW and GW have to end up somewhere and radiation into a vacuum is the hardest way to dump heat.
At 70-80 °C (working temperature of silicon chips), 1 m² radiates 700-800 W, i.e. the heat of one GPU like an H200, without any cooling equipment beyond the radiator itself (and maybe some dumb heat piping). To acquire that much energy you'd need 3-4 m² of solar panels. So a datacenter would be a large field of solar panels with a smaller field of heat radiators in their shadow.
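The radiator-per-GPU figure comes straight from the Stefan-Boltzmann law. A sketch, assuming one-sided emission, emissivity ~0.95, and a radiator in shadow receiving negligible back-radiation (all assumptions on my part):

```python
# W/m^2 radiated by a shadowed radiator at GPU working temperatures.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiated_w_per_m2(temp_c, eps=0.95):
    """One-sided grey-body emission with nothing radiating back."""
    return eps * SIGMA * (temp_c + 273.15) ** 4

for t in (70, 80):
    print(t, round(radiated_w_per_m2(t)))
```

At 70-80 °C this lands around 750-840 W/m², i.e. one square metre per H200-class GPU, as claimed.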
To the commenter below: yes, exactly, this is where my thinking on that started, back in the cryptocurrency boom - https://news.ycombinator.com/item?id=26289423 - as you don't need a close connection between mining GPUs. For AI you'd need to cluster several together, but the overall scheme is the same.
>what the equilibrium temperature of a black planar surface is at a given distance from the sun.
it is 120 °C at Earth's orbit. So you do need some reflection: either back through the solar panels, or the radiators need a reflective backing facing the solar panels in whose shadow they sit.
You can probably (I haven't verified this) omit separate radiators and just use the back of the solar panels. Effectively you're describing mounting each H200 to the back of a 4 m^2 solar array at which point I suspect the equilibrium temperature will fall within an acceptable range. In fact the H200 and electricity are both entirely irrelevant here - the core question is what the equilibrium temperature of a black planar surface is at a given distance from the sun.
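That equilibrium question is a one-liner. A sketch assuming a fully absorbing sunlit side, ~1360 W/m² flux at 1 AU, and no other heat sources (assumptions, not thread data):

```python
# Equilibrium temperature of a black planar surface at 1 AU.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)
FLUX = 1360.0    # W/m^2, solar constant at Earth's distance

def equilibrium_c(emitting_sides):
    """Balance absorbed sunlight against radiation from N sides."""
    t_kelvin = (FLUX / (emitting_sides * SIGMA)) ** 0.25
    return t_kelvin - 273.15

print(round(equilibrium_c(1)))  # emits sunlit side only: ~121 C
print(round(equilibrium_c(2)))  # emits from both sides:  ~58 C
```

One-sided emission gives the ~120 °C figure quoted above; letting the back face radiate too drops equilibrium to roughly 58 °C, which is why using the shaded back of the array matters.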
Would it be feasible to put several JWST-like Stirling engines somewhere in the mix to use up some of that heat and turn it into some kind of useful energy? ....
Perhaps running pumps that move around coolant passing over the cubes of GPUs? ..
That would be extra weight/cost into orbit though...
Also, don't solar panels have reduced efficiency when they're hot? And having anything hot surely increases failure rates.. with metals getting closer to melting points...?
We should be well below the boiling point of water here, not anywhere near the melting point of metal. Any panel efficiency gain needs to be balanced against the energy required to cool the panels, the added mechanical complexity, the added material expense, and the added weight to orbit.
Ideally this is a static structure with an equilibrium temperature acceptable for the silicon to operate. If the required panel area is too hot on its own, then a perpendicular cooling fin on the back, falling entirely within the shadow, is added.
"Just put datacenters in space" might be the very dumbest recurring idea coming from these AI CEOs. It seems to be based entirely on "I dunno, that seems cool."
Solar energy isn't stupendously more available in space than on Earth. Even if you somehow get super robots able to perform the continuously required maintenance and installation of new equipment, transporting materials into space is very expensive, venting waste heat in space is incredibly difficult, and dealing with any unexpected situation that requires manual intervention becomes impossible.