GitLab could be the perfect case study on AI-powered efficiency improvements. I have never interacted with another piece of software where, for every single problem I found, there was an open issue at least 4-7 years old that was just being shuffled around by managers adding and removing random labels.

Surely with all of these ridiculous developer productivity gains enabled by AI, they should finally be able to fix all of these ancient issues quickly and clean up the backlog.

Nope, “workforce reduction” thanks to AI again. This charade is getting boring.

On the other hand, most issues rot due to process overhead, not because the ticket is hard.

For example, why are you working on a four-year-old issue, and a trivial one at that, when you're already behind schedule on the tasks assigned to you? Now someone else who has their own things to get done has to review it? And even trivial changes can be annoying to truly review beyond a blind LGTM.

Just one of the many ways that pressure builds against the utopia of burning through old tickets.

As an aside, watch out for the double standard we have for AI on forums like this. AI is expected to be so good that it can magically overcome the forces that keep engineers from working on old tickets (forces which were never related to engineer productivity), and when AI can't, well, of course it couldn't, because AI sucks.

And who knows, the fix to some of these issues might be a hell of a lot more work now that the bug has been baked in and the "real" fix has become herculean.

The reason for this is that the only way to show productivity gains enabled by AI is to lay people off and pretend you are doing the same amount of work (while in reality you are severely dropping quality and accumulating technical debt).

I think that in these cases, what they need, more than engineering or AI productivity, is good management. Close issues that get shuffled around too much with "yeah, this is too vague", or "nah, we can't fix this", or "you know what, fuck you, I'm not doing it".

Productivity gains can also be achieved by reducing scope. The coming issue will be that, because of increased productivity (idea -> working code), software becomes too bloated and does too much, and product managers can and will say "yes" to everything. Until it becomes unmanageable.

And that's not a new problem, it's what basically every programming adage / wisdom going back 70 years is about.

Also, when most work is unproductive, like managers shuffling around and relabeling issues, you can remove those managers without affecting output.

Quite possibly while improving output. Managers that are gone will not require attention from developers.

Dunno how it is these days, but that reads like Android roughly 2012-2020.

I once found a looooong bug report thread on their issue tracker, 7ish years old, that had all the usual waves of promises that a fix might make the next release, then silence, then repeat, plus the usual challenges to the bug’s status every time a release happened. Community members correctly diagnosed the problem in the first couple of years; then, by something like year 5, there was a (small!) patch posted by a community member, with multiple posters confirming it was good and fixed the issue, that the author and others had been begging Google to apply and get into a release for a couple of years. There’d been no response from Google folks for a while.

That might be the worst one I saw, but encountering something like that was a few-times-per-year thing in my Android app dev years.

I no longer care, thankfully. However, for several years the Android NDK felt like a 20% project from someone on the team.

I'm certain that if they started doing that without a proper QA strategy/workflow, it would be GitHub reloaded. You'd be able to watch the decline in real time.

But that’s the issue the parent is highlighting: you can’t just throw AI at these problems, because the bottleneck is decision making (it always is), and AI is bad at that.

So nothing really changes in terms of product development velocity, it’s just headcount reduction.

But that’s not what their own marketing strategy communicates.

I think what OP means is that these companies keep promising AI is exceptional for a given task, yet for some reason it's never actually used for it. The only visible outcome of AI in these companies is that they spend so much on it that they end up laying off employees.

Have any of the companies that went all in on AI gotten better at their jobs because they went all in on AI?

Maybe the 'microservices' approach plus agentic coding (self-directed agents) with the agency to pick up old tickets and open merge requests, perhaps with a human in the loop, will fix all that.

That's the story being sold.

I don't buy it.

> I have never interacted with a piece of software that, for every single problem I found, there was an open issue always at least 4-7 years old

You have never interacted with Jira?

I'm going to be honest with you, I never even considered that the pinnacle of enterprise software would have a public issue tracker (do they?). If something doesn't work the way I expect I just accept it and move on.

Jira bugs celebrate being old enough to drink.

Honest question: why don't they just close the issues if they don't intend to fix them?

Because an enterprise customer might decide it’s a needed fix tomorrow. I’ve seen it happen: a 20-year-old bug on the backlog suddenly jumps to the front of the line.

Someone should make a browser plugin for that.

Even slop-maker-makers themselves struggle: https://github.com/anthropics/claude-code/issues

What hope do slop-maker-users have, then?

To be fair, any LLM project gets a lot of stupid tickets, by virtue of a) marketing to users who aren't really developers and b) bad developers being more likely to use LLMs. Both of these groups are more likely to write bogus or non-reproducible bug tickets, as well as feature requests that don't make any sense. My guess is 10% of those 10,000 open issues are actual bugs or sensible requests.

On the other hand, LLMs seem perfect for triage and finding duplicates, so it's still surprising that they've let it get this bad.
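To make the triage idea concrete, here's a minimal sketch of a first-pass duplicate finder for issue titles. A real triage bot would compare LLM embeddings of the full issue text; this stand-in uses token-set Jaccard similarity so the idea is runnable without any API, and all issue numbers, titles, and the threshold are made up for illustration.

```python
# Hypothetical first-pass duplicate detection for an issue tracker.
# Real systems would use LLM embeddings + cosine similarity; we use
# a cheap token-set Jaccard overlap as a self-contained stand-in.

def tokens(title: str) -> set[str]:
    """Lowercased significant words from an issue title."""
    return {w.lower().strip(".,!?") for w in title.split() if len(w) > 2}

def jaccard(a: set[str], b: set[str]) -> float:
    """Overlap ratio of two token sets (0.0 = disjoint, 1.0 = identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def find_duplicates(new_title, open_issues, threshold=0.4):
    """Return (number, title) pairs whose titles look like duplicates."""
    new = tokens(new_title)
    return [(num, t) for num, t in open_issues
            if jaccard(new, tokens(t)) >= threshold]

# Toy backlog (invented examples):
open_issues = [
    (101, "App crashes on startup when config file is missing"),
    (202, "Dark mode toggle does not persist after restart"),
]
dupes = find_duplicates("Crash on startup with missing config file", open_issues)
# dupes flags issue 101 as a likely duplicate; 202 is unrelated.
```

The threshold is a guess; a production triager would tune it (or let the LLM judge borderline pairs) and would also look at issue bodies, not just titles.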

The number is much higher than 10%. They already have a quite aggressive system of closing issues or marking them as duplicates, automatically or manually.

(Source: I build tooling around Claude Code and have spent hours swimming in the GitHub issues based on downstream user feedback)