1. "if you can't convince yourself by playing around anecdotally" is NOT "facts"

2. The fact that the US is incredibly bad at energy spending on AC doesn't somehow justify adding another, mostly unnecessary, polluting source, even if it's slightly smaller. ACs have existed for decades; AI has been exploding for only a few years, so we can definitely see it go way, way past AC usage

also the idea is "accelerationism". Why do we need all this tech? What good does it do to have 10 more silly slop AI videos and disinformation campaigns during elections? Just so that antirez can be a little bit faster at writing his code... that's not what the world is about.

Our world should be about humans connecting with each other (more slowly, not "faster"), about having meaningful work, and about caring for planetary resources

The exact opposite of what capitalistic accelerationism / AI is trying to sell us

If you can solve "measure programming productivity with data" you'll have cracked one of the hardest problems in our industry.

> Why do we need all this tech?

Slightly odd question to be asking here on Hacker News!

Sure, but I wasn't the one claiming to have "facts" on AI...

> Slightly odd question to be asking here on Hacker News!

It's absolutely not? The first questions you ask when you work in a domain SHOULD BE "why am I doing this?" and "what is the impact of my work on others?"

Yeah, I think I quoted you out of context there. I'm very much in agreement about asking "what is the impact of my work on others".

> If you can solve "measure programming productivity with data" you'll have cracked one of the hardest problems in our industry.

That doesn't mean that we have to accept claims that LLMs drastically increase productivity without good evidence (or in the presence of evidence to the contrary). If anything, it means the opposite.

At this point the best evidence we have is a large number of extremely experienced programmers - like antirez - saying "this stuff is amazing for coding productivity".

My own personal experience supports that too.

If you're determined to say "I refuse to accept appeal to authority here, I demand a solution to the measuring productivity problem first" then you're probably in for a long wait.

> At this point the best evidence we have is a large number of extremely experienced programmers - like antirez - saying "this stuff is amazing for coding productivity".

The problem is that we know developers' subjective impressions of whether LLMs increase their productivity at all - including experienced developers' impressions - are unreliable and biased toward overestimation. Similarly, we know that previous claims of massive productivity gains were false: no reputable study showed even a 50% improvement, let alone the 2x, 5x, or 10x that some were claiming, and indicators of actual projects shipped were flat. People have been making the same claims for years at this point, and every time we were actually able to check, it turned out they were wrong. Further, while we can't check the productivity claims (yet) because that takes time, we can check other claims (e.g. the assertion that a model produces code that no longer needs to be reviewed by a human), and those claims do turn out to be false.

> If you're determined to say "I refuse to accept appeal to authority here, I demand a solution to the measuring productivity problem first" then you're probably in for a long wait.

Maybe, but my point still stands. In the absence of actual measurement and evidence, claims of massive productivity gains do not win by default.

There are also plenty of extremely experienced programmers saying "this stuff is useless for programming".

If a bunch of people say "it's impossible to go to the moon, nobody has done it" and Buzz Aldrin says "I have been to the moon, here are the photos/video/NASA archives to prove it", who do you believe?

The equivalent of "we've been to the moon" in the case of LLMs would be:

"Hey Claude, generate a full Linux kernel from scratch for me, go on the web to find protocol definitions, it should handle Wifi, USB, Bluetooth, and have a WebGL-backed window server"

And then have it run for a couple of hours/days to deliver, without touching it.

We are *far* from this

OK then, new analogy.

If a bunch of people say "there are no cafes in this town that serve brunch on a Sunday" and then Buzz Aldrin says "I just had a great brunch in the cafe over there, here's a photo", who would you listen to?

Well sure, but... that's anecdotal evidence. It's not a formal proof, with studies, etc.

Also in the age of AI this argument would be flawed precisely because that "photo" from Buzz Aldrin could be AI-generated, but that's beside the point

Be honest: how many things do you do in your day-to-day SW tasks that have been formally proven and have studies supporting them?

That's just... not the point of that discussion?

1. Most of CS has been formally proven (that's why it's called computer science)

2. Here we were discussing someone who claims to have "facts" and then just says "just play with it, you will understand"...

Check "confirmation bias": of course the few who speak loudly are those who:

- want to sell you AI

- have a popular blog mostly about AI (same as #1)

- actually experience this productivity enhancement

but there are also thousands of other great coders for whom:

- the gains are negligible (useful, but they "don't fundamentally change the game")

- we already see the limits of LLMs (nice "code in-painting", but they can't be trusted, for many reasons)

- besides that, we also see the impact on other people / coders, and we don't want that in our society

Many issues have been pointed out in the comments, in particular the fact that most of what antirez talks about is how "LLMs make it easy to fill in code for stuff he already knows how to do"

And indeed, in this case, "LLM code in-painting" (i.e. letting the user define the constraints, then having the LLM act as a "code filler") works relatively nicely... BECAUSE the user knows how it should work and directs the LLM to do what he needs

But this is just, say, a 2x/3x acceleration of coding tasks for already-good coders; it is neither 100x, nor is it reachable for beginner coders.

Because what we see is that LLMs (for good reasons!!) *can't be trusted*, so you carry the burden of checking their code every time

So 100x productivity IS NOT POSSIBLE, simply because it would take too long (and frankly be too boring) for a human to check the output of 100x a normal engineer. (Unless you spend 1000 hours upfront encoding your whole domain in a theorem-proving language like Lean and then verify the implementation against it... which would be so costly that the "100x gains" would already have disappeared.)
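To give a sense of what that Lean route involves: even a toy property requires stating an explicit theorem and supplying a proof. A minimal sketch in Lean 4 (the theorem name here is made up for illustration; `List.reverse_reverse` is an existing core-library lemma):

```lean
-- Toy spec: reversing a list twice yields the original list.
-- Proved by appealing to the core-library lemma List.reverse_reverse.
theorem double_reverse (l : List Nat) : l.reverse.reverse = l :=
  List.reverse_reverse l
```

Scaling this kind of specification and proof effort to a whole real-world domain is exactly the upfront cost being described above.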

Why would you turn down a 2-3x productivity boost?

Nobody is saying we want to "turn down" anything (although, if the boost is "only" 2x, this would become a pros/cons discussion, and the cons could include "this tech leads to authoritarian regimes everywhere")

What we are discussing here is whether this is a true step change for coding, or merely a "coding improvement tool"

This is obviously a collision between our human culture and the machine culture, and on the surface its intent is evil, as many have guessed already. But what it also does is it separates the two sides cleanly, as they want to pursue different and wildly incompatible futures. Some want to herd sheep, others want to unite with tech, and the two can't live under one sky. The AI wedge is a necessity in this sense.