> They are experienced open source developers, working on their own projects

I just started working on a 3-month old codebase written by someone else, in a framework and architecture I had never used before

Within a couple of hours, with the help of Claude Code, I had already created a really nice system to replicate data from staging to local development. It was something I had built before in other projects, and I knew that doing it manually would have taken me a full day or two, especially without experience in the architecture

That immediately sped up my development even more, as now I had better data to test things locally

Then a couple of hours later, I had already pushed my first PR, with all the code following the proper coding style and practices of the existing project and the framework. That PR would have taken me at least a couple of days, and up to 2 weeks, to fully write out and test manually

So sure, AI won’t speed everyone or everything up. But at least in this one case, it gave me a huge boost

As I keep going, I expect things to slow down a bit, as the complexity of the project grows. However, it’s also given me the chance to get an amazing jumpstart

I have had experiences similar to yours, but this is not the kind of work the study is talking about:

“When open source developers working in codebases that they are deeply familiar with use AI tools to complete a task, they take longer to complete that task”

I have anecdotally found this to be true as well, that an LLM greatly accelerates my ramp up time in a new codebase, but then actually leads me astray once I am familiar with the project.

> I have anecdotally found this to be true as well, that an LLM greatly accelerates my ramp up time in a new codebase, but then actually leads me astray once I am familiar with the project.

If you are unfamiliar with the project, how do you determine that it wasn't leading you astray in the first place? Do you ever revisit what you had done with AI previously to make sure that, once you know your way around, it was doing it the right way?

In some cases, I have not revisited, as I was happy to simply make a small modification for my use only. In others, I have taken the time to ensure the changes are suitable for upstreaming. In my experience, which I have not methodically recorded in any way, the LLM’s changes at this early stage have been pretty good. This is also partly because the changes I am making at the early stage are generally small, usually not requiring adding new functionality but simply hooking up existing functionality to a new input or output.

What’s most useful about the LLM in the early stages is not the actual code it writes, but the reasoning it shares, which helps me learn about the structure of the project. I don’t take the code blindly; I am more interested in the reasoning than in the code itself. I have found this to be reliably useful.

no, they just claim that AI coding tools are magic and drink their kool-aid

> I have anecdotally found this to be true as well, that an LLM greatly accelerates my ramp up time in a new codebase, but then actually leads me astray once I am familiar with the project.

How does using AI impact the amount of time it takes you to become sufficiently familiar with the project to recognize when you are being led astray?

One of the worries I have with the fast ramp-up is that a lot of that ramp-up time isn't just grunt work to be optimized away; it's active learning, and bypassing too much of it can leave you with an incomplete understanding of the problem domain that slows you down perpetually.

Sometimes, there are real efficiencies to be gained; other times those perceived efficiencies are actually incurring heavy technical debt, and I suspect that overuse of AI is usually the latter.

Not just new code-bases. I recently used an LLM to accelerate my learning of Rust.

Coming from other programming languages, I had a lot of questions that would be tough to nail down with a Google search or by combing through docs and tutorials. In retrospect, it's super fast at finding answers to things that _don't exist_ explicitly, or are implied through the lack of documentation, or exist at the intersection of wildly different resources:

- Can I get compile-time type information of Enum values?

- Can I specialize a generic function/type based on Enum values?

- How can I use macros to reflect on struct fields?

- Can I use an enum without its enclosing namespace, as I can in C++?

- Does Rust have a 'with' clause?

- How do I avoid declaring lifetimes on my types?

- What is an idiomatic way to implement the Strategy pattern?

- What is an idiomatic way to return a closure from a function?

...and so on. This "conversation" happened here and there over the period of two weeks. Not only was ChatGPT up to the task, but it was able to suggest what technologies would get me close to the mark if Rust wasn't built to do what I had in mind. I'm now much more comfortable and competent in the language, but miles ahead of where I would have been without it.
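For anyone curious, here's roughly how three of those questions resolve; this is a minimal sketch of my own, with placeholder names (`Direction`, `Compression`, `NoOp`), not the literal answers ChatGPT gave me:

```rust
// Using an enum's variants without the enclosing path (the C++ question above):
// a glob import of the variants does the trick.
enum Direction {
    North,
    South,
}
use Direction::*;

// Returning a closure from a function: `impl Fn(...)` in the return position
// is the idiomatic way.
fn make_adder(x: i32) -> impl Fn(i32) -> i32 {
    move |y| x + y
}

// A bare-bones Strategy pattern: the behavior is a trait, each strategy is a
// type that implements it.
trait Compression {
    fn compress(&self, data: &[u8]) -> Vec<u8>;
}

struct NoOp;

impl Compression for NoOp {
    // "Compress" by returning the input unchanged.
    fn compress(&self, data: &[u8]) -> Vec<u8> {
        data.to_vec()
    }
}

fn archive(strategy: &dyn Compression, data: &[u8]) -> Vec<u8> {
    strategy.compress(data)
}

fn main() {
    // Variants are usable without the `Direction::` prefix thanks to the glob import.
    for heading in [North, South] {
        match heading {
            North => println!("heading north"),
            South => println!("heading south"),
        }
    }

    let add_two = make_adder(2);
    assert_eq!(add_two(3), 5);

    assert_eq!(archive(&NoOp, b"hello"), b"hello".to_vec());
}
```

The lifetime and macro-reflection questions need more setup than fits in a snippet (derive macros, mostly), so I've left them out here.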

For really basic syntax stuff it works, but the moment you ask its advice on anything more involved, ChatGPT has confidently led me down incredibly wrong but right-sounding trails.

To their credit, the people on the Rust forum have been really responsive at answering my questions and poking holes in incorrect unsafe implementations, and it is from speaking to them that I truly feel I have learned the language well.

> That PR would have taken me at least a couple of days, and up to 2 weeks, to fully write out and test manually

What is your accuracy on software development estimates? I always see these productivity claims matched against “it would’ve taken me” timelines.

But it’s never examined whether we’re good at estimating. I know I am not good at estimates.

It’s also never examined whether the quality of the PR is the same as it would’ve been. Are you skipping steps and system understanding, which lets you go faster but with a higher % chance of bugs? You can do that without AI and get the same speed-up.

Now the question is: did you gain the same knowledge and proficiency in the codebase that you would've gained organically?

I find that when working with an LLM, the difference in knowledge is like the difference in learning a new language: learning to understand another language is easier than learning to speak it.

It's like my knowledge of C++. I can read it, and I can make modifications of existing files. But writing something from scratch without a template? That's a lot harder.

Some additional notes, given the comments in the thread:

* I wasn’t trying to be dismissive of the article or the study, just wanted to present a different context in which AI tools do help a lot

* It’s not just code; it also helps with a lot of other tasks. For example, Claude Code figured out how to “manually” connect to the AWS cluster that hosted the source db, tested different commands via docker inside the project containers, and overall helped immensely with discovering the structure and infrastructure of the project

* My professional experience as a developer has been that 80-90% of the time, results trump code quality. That’s just the projects and companies I’ve personally been involved with: mostly SaaS products in which business goals are usually considered more important than the specifics of the tech stack used. This doesn’t mean that 80-90% of code is garbage; it just means that most of the time readability, maintainability, and shipping are more important than DRY, clever solutions, or optimizations

* I don’t know how helpful AI is or could be for things that require super clever algorithms or special data structures, or where code quality is incredibly important

* Having said that, the AI tools I’ve used can write pretty good quality code, as long as they are provided with good examples and references, and the developer is on top of properly managing the context

* Additionally, these tools are improving almost on a weekly or monthly basis. My experience with them has drastically changed even in the last 3 months

At the end of the day, AI is not magic; it’s a tool, and I, as the developer, am still accountable for the code and results I’m expected to deliver

TFA was specifically about people very familiar with the project and codebase they are working on. Your anecdote is precisely the opposite of the situation it was about, and it acknowledges the sort of process you describe.

You've missed the point of the article, which in fact agrees with your anecdote.

> It's equally common for developers to work in environments where little value is placed on understanding systems, but a lot of value is placed on quickly delivering changes that mostly work. In this context, I think that AI tools have more of an advantage. They can ingest the unfamiliar codebase faster than any human can, and can often generate changes that will essentially work.

That would be an aside, or a comment, not the point of the article.

> You've missed the point of the article

Sadly, clickbait headlines like the OP's, "AI slows down open source developers," spread this misinformation, ensuring that a majority of people will come away with the same misapprehension.

Which is a good thing for people who are currently benefiting from AI, though. The slower other programmers adopt AI, the more edge those who are proficient with it have.

It took me an embarrassingly long time to realize a simple fact: using AI well is a shallow skill that everyone can learn in days or even hours if they want to. And then my small advantage of knowing the AI tools will disappear. Since that realization, I've been upvoting every article that claims AI makes you less productive (like the OP).

So you bother to push some sort of self-proclaimed false narrative with upvotes, but then you counteract it by spelling it out?

Well, that's exactly what it does well at the moment: boilerplate starter templates, landing pages, throwaway apps, etc. But for projects that need precision, like data pipelines or security, the code it generates has many subtle flaws that can/will cause giant headaches in your project unless you dig through every line produced

You clearly have not read the study. The problem is that developers thought they were 20% faster, but they were actually slower. Anyway, from a quick review of your profile, you have a conflict of interest regarding vibe coding, so I will definitely take your opinion with a grain of salt.

> Anyway, from a quick review of your profile, you have a conflict of interest regarding vibe coding

Seems to happen every time, doesn't it?

How are you confident in the code, coding style, and practices simply because the LLM says so? How do you know it is not hallucinating, since you don't understand the codebase?

[deleted]

When anecdote and data don't align, it's usually the data that's wrong.

Not always the case, but whenever I read about these strained studies or arguments about how AI is actually making people less productive, I can't help but wonder why nearly every programmer I know, myself included, finds value in these tools. I wonder if the same thing happened with higher-level programming languages, where people argued: you may THINK not managing your own memory will lead to more productivity, but actually...

Even if we weren't more "productive", millions prefer to use these tools, so it has to count for something. And I don't need a "study" to tell me that

TFA says clearly that it is likely that AI will make more productive anyone working on an unfamiliar code base, but make less productive those working on a project they understand well, and it gives reasonable arguments for why this is likely to happen.

Moreover, it acknowledges that for programmers working in most companies the first case is much more frequent.

I have written every line of code in the code base I mostly work in and I still find it incredibly valuable. Millions use these tools and a large percentage of them find them useful in their familiar code base.

Again, overwhelming anecdote and millions of users > "study"

> Interestingly the developers predict that AI will make them faster, and continue to believe that it did make them faster, even after completing the task slower than they otherwise would!

In this case, clearly, anecdotes are not enough. If that quote from the article is accurate, it shows that you cannot trust the developers' time perception.

I agree, it's only one study and we should not take it as the final answer. It definitely justifies doing a few follow-up evaluations to see if this holds up.

> If that quote from the article is accurate, it shows that you cannot trust the developers' time perception.

The scientific method goes right out the window when it comes to true believers. It reminds me of weed-smokers who insist getting high makes them deep-thinkers: it feels that way in the moment, but if you've ever been a sober person caught up in a "deep" discussion among people high on THC, oh boy...

Or I cannot trust a contrived laboratory setting with its garden of forking paths.

https://mleverything.substack.com/p/garden-of-forking-paths-...

I did not say to trust it. I do not need to trust it.

If I run my own tests on my own codebase, I will definitely use both an objective time-measurement method and a subjective one. I really want to know if there is a big difference.

I really wonder if it's just the individual's bias showing. If you are pro-AI you might overestimate it, and if you are against it you might underestimate it.

That's fair, I agree.

> I can't help but wonder why nearly every programmer I know, myself included, finds value in these tools.

One of the more interesting findings of the study mentioned was that the LLM users, even where use of an LLM had apparently degraded their performance, tended to believe it had enhanced it. Anecdote is a _really_ bad argument against data that shows a _perception_ problem.

> Even if we weren't more "productive", millions prefer to use these tools, so it has to count for something.

I mean, on that basis, so does homeopathy.

Like, it's just one study. It's not the last word. But "my anecdotes disprove it" probably isn't a _terribly_ helpful approach.

Also, "anecdotes > data" as a general heuristic is a red flag. But like if clowns had a country and their flag were red. That kind.

I had a similar experience with AI and open source. AI allowed me to implement features in a language and stack I didn't know well. I had wanted these features for months and no one else was volunteering to implement them. I had tried to study the stack directly myself, but found the total picture to be complex and under-documented for people getting started.

Using Warp terminal (which used Claude), I was able to get past those barriers and achieve results that weren't happening at all before.