> facts are facts, and AI is going to change programming forever

Show me these "facts"

If you can't see this by working with Claude Code for a few weeks, I'm not going to make a bigger effort than writing a blog post to convince you. This is not a mission of mine. I just want to reach the people who are open enough to challenge their ideas and willing to see first-hand what is happening. Also, if you tried and failed, it means that either AI is not good enough for your domain, or you are not able to extract the value. In the end, this does not matter: a growing percentage of programmers is using AI successfully every day, and as the technology progresses this will spread to more and more diverse programming fields and tasks. If you disagree and are happy to avoid LLMs, well, that's fine too.

okay, but again: if you say in your blog that those are "facts", then... show us the facts?

You can't just hand-wavily say "a bigger percentage of programmers is using AI with success every day" and not give a link to a study that shows it's true

as a matter of fact, we know that a lot of companies have fired people while claiming they are no longer needed in the age of AI... only to re-hire offshore workers for much cheaper

for now, there hasn't been a documented sudden increase in code velocity or robustness; a few anecdotal cases, sure

I use it myself, and I admit it saves some time developing basic stuff and gathering a few ideas, but so far nothing revolutionary. So let's take it at face value:

- a tech which helps slightly with some tasks (basically "in-painting code" once you've defined the "boundary constraints" sufficiently well)

- a tech which might cause massive disruption of people's livelihoods (and safety) if used incorrectly, which might FAR OUTWEIGH the small benefits and be a good enough reason for people to fight against AI

- a tech which emits CO2, increases inequalities, depends on the quasi-slave labor of annotators in third-world countries, etc

so you can talk all day long about not dismissing AI, but you also have to take it with everything that comes with it

1. If you can't convince yourself, after downloading Claude Code or Codex and playing with them for 1 week, that programming is completely revolutionized, there is nothing I can do: you have it at your fingertips, and still you want me to gather the facts for you.

2. Air conditioning usage in the US alone is around 4 times the energy / CO2 usage of all the world's data centers (not just AI) combined. AI is about 10% of data center usage, so AC alone is 40 times that.

I enjoyed your blog post, but I was curious about the claim in point 2 above. I asked Claude, and it seems the claim is false:

# Fact-Checking This Climate Impact Claim

Let me break down this claim with actual data:

## The Numbers

*US Air Conditioning:*

- US A/C uses approximately *220-240 TWh/year* (2020 EIA data)
- This represents about 6% of total US electricity consumption

*Global Data Centers:*

- Estimated *240-340 TWh/year globally* (IEA 2022 reports)
- Some estimates go to 460 TWh including cryptocurrency

*AI's Share:*

- AI represents roughly *10-15%* of data center energy (IEA estimates this is growing rapidly)

## Verdict: *The claim is FALSE*

The math doesn't support a 4:1 ratio. US A/C and global data centers use *roughly comparable* amounts of energy—somewhere between 1:1 and 1:1.5, not 4:1.

The "40 times AI" conclusion would only work if the 4x premise were true.

## Important Caveats

1. *Measurement uncertainty*: Data center energy use is notoriously difficult to measure accurately
2. *Rapid growth*: AI energy use is growing much faster than A/C
3. *Geographic variation*: This compares one country's A/C to global data centers (apples to oranges)

## Reliable Sources

- US EIA (Energy Information Administration) for A/C data
- IEA (International Energy Agency) for data center estimates
- Lawrence Berkeley National Laboratory studies

The quote significantly overstates the disparity, though both are indeed major energy consumers.

I tried Claude on a project where I'd got stuck trying to use some macOS media APIs in a Rust app.

It just went in circles between something that wouldn't compile, and a "solution" that compiled but didn't work despite the output insisting it worked. Anything it said that wasn't already in the (admittedly crap) Apple documentation was just hallucination.

Not exactly what I'd describe as "revolutionary".

So you don't actually have anything to support your argument other than "trust me bro". Oh, how the mighty have fallen.

A useful skill in both software engineering and life is figuring out, based on prior reputation and performance, who you should trust.

It is a useful skill. But regardless of the topic at hand, there is also

"You either die a hero or you live long enough to see yourself become the villain."

People change all the time, and things need to be reevaluated from time to time.

So another skill is to disengage from our heroes when their values start to misalign.

That sounds more like software pseudo-engineering to me.

A bit like we should trust RFK on how "vaccines don't work" thanks to his wide experience?

The idea here is not to say that antirez has no knowledge of coding or software engineering. The idea is that if he says "hey, we have the facts", and then, when people ask "okay, show us the facts", he says "just download Claude Code and play with it for an hour and you'll have the facts", we don't trust that. That's not science.

That's a great example in support of my argument here, because RFK Jr clearly has no relevant experience at all - so "figuring out, based on prior reputation and performance, who you should trust" should lead you to not listen to a word he says.

Well, guess what: a lot of people will "trust him" because he is a "figure of power" (he's a cabinet secretary in the current administration). That's exactly why arguments from authority are bad... and why we should rely on science and studies.

1. "if you can't convince yourself by playing anecdotically" is NOT "facts"

2. the fact that the US is incredibly bad at energy spending on AC does not somehow justify adding another, mostly unnecessary, polluting source, even if it's currently smaller. ACs have existed for decades. AI has been exploding for only a few years, so we could well see it go way, way past AC usage

there is also the idea of "accelerationism". Why do we need all this tech? What good does it do to have 10 more silly AI slop videos and disinformation campaigns during elections? Just so that antirez can be a little bit faster at writing his code... that's not what the world is about.

Our world should be about humans, connecting together (more slowly, not "faster"), about having meaningful work, and caring about planetary resources

The exact opposite of what capitalistic accelerationism / AI is trying to sell us

If you can solve "measure programming productivity with data" you'll have cracked one of the hardest problems in our industry.

> Why do we need all this tech?

Slightly odd question to be asking here on Hacker News!

Sure, but I wasn't the one claiming to have "facts" on AI...

> Slightly odd question to be asking here on Hacker News!

It's absolutely not? The first line of questioning when you work in a domain SHOULD BE "why am I doing this" and "what is the impact of my work on others"

Yeah, I think I quoted you out of context there. I'm very much in agreement about asking "what is the impact of my work on others".

> If you can solve "measure programming productivity with data" you'll have cracked one of the hardest problems in our industry.

That doesn't mean that we have to accept claims that LLMs drastically increase productivity without good evidence (or in the presence of evidence to the contrary). If anything, it means the opposite.

At this point the best evidence we have is a large volume of extremely experienced programmers - like antirez - saying "this stuff is amazing for coding productivity".

My own personal experience supports that too.

If you're determined to say "I refuse to accept appeal to authority here, I demand a solution to the measuring productivity problem first" then you're probably in for a long wait.

> At this point the best evidence we have is a large volume of extremely experienced programmers - like antirez - saying "this stuff is amazing for coding productivity".

The problem is that we know that developers' - including experienced developers' - subjective impressions of whether LLMs increase their productivity at all are unreliable and biased towards overestimation. Similarly, we know that previous claims of massive productivity gains were false (no reputable study showed even a 50% improvement, let alone the 2x, 5x, 10x, etc. that some were claiming; indicators of actual projects shipped were flat; etc.). People have been making the same claims for years at this point, and every time we were actually able to check, it turned out they were wrong. Further, while we can't check the productivity claims (yet) because that takes time, we can check other claims (e.g. the assertion that a model produces code that no longer needs to be reviewed by a human), and those claims do turn out to be false.

> If you're determined to say "I refuse to accept appeal to authority here, I demand a solution to the measuring productivity problem first" then you're probably in for a long wait.

Maybe, but my point still stands. In the absence of actual measurement and evidence, claims of massive productivity gains do not win by default.

There are also plenty of extremely experienced programmers saying "this stuff is useless for programming".

If a bunch of people say "it's impossible to go to the moon, nobody has done it" and Buzz Aldrin says "I have been to the moon, here are the photos/video/NASA archives to prove it", who do you believe?

The equivalent of "we've been to the moon" in the case of LLMs would be:

"Hey Claude, generate a full Linux kernel from scratch for me, go on the web to find protocol definitions, it should handle Wifi, USB, Bluetooth, and have WebGL-backed window server"

And then have it run for a couple of hours/days and deliver, without touching it.

We are *far* from this

OK then, new analogy.

If a bunch of people say "there are no cafes in this town that serve brunch on a Sunday" and then Buzz Aldrin says "I just had a great brunch in the cafe over there, here's a photo", who would you listen to?

Well sure, but... that's anecdotal evidence. It's not formal proof, with studies, etc.

Also in the age of AI this argument would be flawed precisely because that "photo" from Buzz Aldrin could be AI-generated, but that's beside the point

Be honest: how many things do you do in your day-to-day SW tasks that have been formally proven and have studies supporting them?

That's just... not the point of that discussion?

1. Most of CS has been formally proven (that's why it's called computer science)

2. Here we were discussing someone who claims to have "facts" and then just says "just play with it, you will understand"...


Check "confirmation bias": of course the few that speak loudly are those who:

- want to sell you AI

- have a popular blog mostly speaking on AI (same as #1)

- the ones for whom this productivity enhancement applies

but there are also 1000s of other great coders for whom:

- the gains are negligible (useful, but "doesn't fundamentally change the game")

- we already see the limits of LLMs (nice "code in-painting", but can't be trusted for many reasons)

- besides that, we also see the impact on other people / coders, and we don't want that in our society

Many issues have been pointed out in the comments, in particular the fact that most of what antirez talks about is how "LLMs make it easy to fill in code for stuff he already knows how to do"

And indeed, in this case, "LLM code in-painting" (i.e. let the user define the constraints, then act as a "code filler") works relatively nicely... BECAUSE the user knows how it should work and directs the LLM to do what he needs

But this is just, say, a 2x/3x acceleration of coding tasks for coders who are already good; it is neither 100x, nor is it reachable for beginner coders.

Because what we see is that LLMs (for good reasons!!) *can't be trusted*, so you bear the burden of checking their code every time

So 100x productivity IS NOT POSSIBLE, simply because it would take too long (and frankly be too boring) for a human to check 100x the output of a normal engineer (unless you spend 1000 hours upfront encoding your whole domain in a theorem-proving language like Lean and then check the implementation against it... which would be so costly that the "100x gains" would already have disappeared)
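To make that concrete, here is a minimal Amdahl's-law-style sketch (with made-up illustrative parameters, not measured data): if a fixed fraction of the total effort is human review that the LLM cannot accelerate, the overall speedup is capped at the inverse of that fraction, no matter how fast generation gets.

```python
# Amdahl's-law-style cap on overall coding speedup when review stays human.
# The review_fraction values below are illustrative assumptions.

def overall_speedup(review_fraction: float, gen_speedup: float) -> float:
    """Total speedup when only the non-review share of the work is accelerated.

    review_fraction: share of total effort spent reviewing/verifying output
    gen_speedup:     factor by which the LLM accelerates code generation
    """
    return 1.0 / (review_fraction + (1.0 - review_fraction) / gen_speedup)

for review in (0.5, 0.3, 0.1):
    for gen in (10, 100, 10**9):
        print(f"review={review:.0%}  gen x{gen:<10}  "
              f"overall x{overall_speedup(review, gen):.1f}")

# Even with effectively infinite generation speed, the ceiling is
# 1 / review_fraction: 2x at 50% review, ~3.3x at 30%, 10x at 10%.
# A 100x overall gain would need review to be under 1% of the effort.
```

Under this toy model, the 2x/3x range above corresponds to review eating roughly a third to a half of the total effort.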

Why would you turn down a 2-3x productivity boost?

Nobody is saying we want to "turn it down" (although this would be a pros/cons discussion if the boost is "only" 2x and the cons include "this tech leads to authoritarian regimes everywhere")

What we are discussing here is whether this is a true step-change for coding, or merely a "coding improvement tool"

This is obviously a collision between our human culture and the machine culture, and on the surface its intent is evil, as many have guessed already. But what it also does is it separates the two sides cleanly, as they want to pursue different and wildly incompatible futures. Some want to herd sheep, others want to unite with tech, and the two can't live under one sky. The AI wedge is a necessity in this sense.

Just dismiss what he says and move on, he's already made it clear he's not trying to convince you.

How does widespread access to AI tools increase inequalities?

It's pretty clear that if AI delivers on its promise it'll decimate the income of all but the top 1% of developers

Labor becomes worth less; capital and equity ownership earn the same or more

I don't think that's a foregone conclusion yet.

I continue to hope that we see the opposite effect: the drop of cost in software development drives massively increased demand for both software and our services.

I wrote about that here: https://simonwillison.net/2026/Jan/8/llm-predictions-for-202...

I keep flip-flopping between being optimistic and pessimistic on this, but yeah we just need to wait and see

Because as long as it is done in a capitalistic economy, it will exclude the many from work while driving profits to a few

Why do you care enough to write a blog post? Like, if it's such a big advantage, why not stay quiet and exploit it? Why not write anti-AI blog posts to gain even more of an advantage?

One of the big red flags I see around the pro-AI side is this constant desire to promote the technology. At least the anti-AI side is reactionary.

It seems quite profitable nowadays to position yourself as an [insert currently overhyped technology] GURU to generate clicks/views. Just look at the number of comments in this thread.

"Like if it's such a big advantage, why not stay quiet and exploit it?"

Maybe he's a generous person.

Replace "Claude Code" or "AI" with "Jesus". It all sounds very familiar.

I am waiting for people to commit their prompt/agent setups instead of the code before calling this a paradigm change. So far it is "just" a machine generating code, and generating code doesn't solve all software problems (but yeah, they're getting pretty good at generating code)

If you want an example, I just open-sourced a project which includes the prompts and CLAUDE.md: https://github.com/minimaxir/miditui/tree/main/agent_notes
