Why isn't there a single study that would back up your observations? The only study with a representative experimental design that I know of is the METR study, and it showed the opposite. Every study citing significant productivity improvements that I've seen is either:

- relying on self-assessments from developers about how much time they think they saved, or

- using useless metrics like lines of code produced or PRs opened, or

- timing developers on toy programming assignments, like implementing a basic HTTP server, that aren't representative of real-world work.

Why is it that any time I ask people to provide examples of high-quality software projects that were predominantly LLM-generated (with video evidence to document the process and let us judge the velocity), nobody ever answers the call? Would you like to change that?

My sense is that weaker developers and especially weaker leaders are easily impressed and fascinated by substandard results :)

Everything Claude does is reviewed by me; nothing enters the code base that doesn't meet the standard we've always kept. Perhaps I'm substandard and weak, but my software is stable, my customers are happy, and I'm delivering value to them more quickly than I was before.

I don't know how you could effectively study such a thing; that avenue seems like a dead end. The truth will become obvious in time.