AI can be an amazing productivity multiplier for people who know what they're doing.

This result reminded me of the C compiler case that Anthropic posted recently. Sure, agents wrote the code for hours, but there was a human there giving them directions, scoping the problem, finding the test suites needed for the agentic loops to actually work, etc. In general, making sure the output actually works and that it's a story worth sharing with others.

The "AI replaces humans in X" narrative is primarily a tool for driving attention and funding. It works great for creating impressions and building brand value but also does a disservice to the actual researchers, engineers and humans in general, who do the hard work of problem formulation, validation and at the end, solving the problem using another tool in their toolbox.

>AI can be an amazing productivity multiplier for people who know what they're doing.

>[...]

>The "AI replaces humans in X" narrative is primarily a tool for driving attention and funding.

You're sort of acting like it's all or nothing. What about the humans who used to be that "force multiplier" on a team with the person guiding the research?

If a piece of software required a team of ten people, and instead it's built with one engineer overseeing an AI, that's still 90% job loss.

For a more current example: do you think all the displaced Uber/Lyft drivers aren't going to think "AI took my job" just because there's a team of people in a building somewhere handling the occasional low-confidence Waymo intervention, as opposed to the cars being 100% autonomous?

There's 90% job loss only if you assume this is a zero-sum situation where humans and agents compete over a fixed amount of work.

I'm curious why you think I'm acting like it's all or nothing. What I was trying to communicate is the exact opposite: that it's not all or nothing. Maybe it's the way I articulate things; I'm genuinely interested in what makes it sound like this.

The optimistic case is that instead of a team of 10 people working on one project, you could have those 10 people using AI assistants to work on 10 independent projects.

That, of course, assumes that there are 9 other projects that are both known (or knowable) and worth doing. And in the case of Uber/Lyft drivers, there's a skillset mismatch between the "deprecated" jobs and their replacements.

Well those Uber drivers are usually pretty quick to note that Uber is not their job, just a side hustle. It's too bad I won't know what they think by then since we won't be interacting any more.

Where I work, we're now building things that were completely out of reach before. The 90% job loss prediction would only hold true if we were near the ceiling of what software can do, but we're probably very, very far from it.

A website that cost hundreds of thousands of dollars in 2000 could be replaced by a WordPress blog built in an afternoon by a teenager in 2015. Did that kill web development? No, it just expanded what was worth building.

> The "AI replaces humans in X" narrative is primarily a tool for driving attention and funding.

It's also a legitimate concern. We happen to be in a place where humans are needed for that "last critical 10%," or the first critical 10% of problem formulation, and so humans are still crucial to the overall system, at least for most complex tasks.

But there's no logical reason that needs to be the case. Once it's not, humans will be replaced.

The reason there is a marketing opportunity is because, to your point, there is a legitimate concern. Marketing builds and amplifies the concern to create awareness.

When the existing systems turn into something trivial to manage with the new tooling, humans build more complex systems or add more layers on top of them.

Actually, the results were far worse and way less impressive than what the media said.

The C compiler results, or the physics results this post is about?

The C compiler.

Of course the results were much worse than what was communicated in the media; it was content marketing, not an attempt to build a better C compiler.

His point is going to be some copium like: since the C compiler is not as optimized as gcc, it was not impressive.

You probably don’t know what you’re talking about.

Why wasn't the C compiler it made impressive to you?

I found this to be the least impressive bit about it: https://github.com/anthropics/claudes-c-compiler/issues/1

It didn’t work without gcc and it was significantly worse than gcc with gcc optimizations disabled.

I'm not sure you can call something an optimizing C compiler if it doesn't optimize or enforce C semantics (well, it compiles C but also a lot of things that aren't syntactically valid C). It seemed to generate a lot of code (wow!) that wasn't well-integrated and didn't do what it promised to, and the human didn't have the requisite expertise to understand that. I'm not a theoretical physicist but I will hold to my skepticism here, for similar reasons.

Sure, I won't argue with this, although it did manage to deliver the marketing value they were looking for. In the end, their goal was not to replace gcc but to make people talk about AI and Anthropic.

What I said in my original comment is that AI delivers when it's used by experts. In this case, the person driving was definitely not a C compiler expert; what would happen if a real expert were doing this?

Deliver what exactly? False hope and lies?

https://github.com/anthropics/claudes-c-compiler/issues/228


> for people who know what they're doing.

I worry we're not producing as many of those as we used to.

We will be producing even fewer of them. I fear for future graduates, hell, even for school children, who are now uncontrollably using ChatGPT for their homework. Next-level brainrot.

Right. If it hadn't been Nicholas Carlini driving Claude, with his decades of experience, there wouldn't be a Claude C compiler. It still required his expertise and knowledge to get there.
