I’m a bit curious about these takes. Arguing in good faith: is the general assumption that people who use AI/agents/harnesses don’t ship features? We’ve been all in on Claude Code since ~Septemberish, and have been able to successfully track the boost, i.e. the features we ship that get used in production. Both infrastructure and business logic implementations, frontend and backend.

I don’t think people are wasting too much time. I do agree most of these posts are just bs, including this one, but AI-assisted development has been a thing across a lot of companies in the world.

You're replying to an account specifically created to post inflammatory AI takes (likely a bot anyway). So your attempt

> Arguing in good faith

will be futile, unfortunately.

I can take on a slightly weaker form in good faith: professionally it’s a non-starter until private, open source inference can be self-hosted and the ROI is clear enough to invest in that.

And on the ROI side, trying things out regularly, I haven’t seen positive ROI in the limited time I’ve dedicated to exploring the tools. I’ve restricted experimenting to 4 hours per month, because spending more than 2.5% of the month chasing productivity improvements that realistically seem to be 10-20% will quickly eat into those gains. After accounting for token costs, it ends up being a wash.
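For what it's worth, the break-even arithmetic in that comment can be sketched out explicitly. This is a rough sketch under assumed numbers (a ~160-hour working month, as implied by 4 hours being 2.5%; token costs not modeled):

```python
# Assumed: ~160 working hours per month, 4 of them spent experimenting.
work_hours = 160.0
experiment_hours = 4.0

# Share of the month spent experimenting (~2.5%, as in the comment).
share = experiment_hours / work_hours

# Minimum productivity gain on the remaining hours needed just to
# recover the experimentation time itself (before any token costs).
break_even_gain = experiment_hours / (work_hours - experiment_hours)

print(f"experiment share: {share:.1%}")
print(f"break-even gain: {break_even_gain:.1%}")
```

On these assumptions the experimentation time alone is recovered by a gain of roughly 2.6%; whether the remainder survives token costs is the judgment call the commenter is making.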

"I studied math 4 hours per month and I can confirm that mathematics is stupid"

You can't learn how to use _anything_ by experimenting 4 hours a month.

The poster provided numbers and thresholds they used to evaluate the utility of a business product.

With infinite time anything is possible, but since we live within constraints, discussing practical, real world thresholds or evaluation methods is a worthwhile use of our time.

Ignore the people who haven't found out how to use AI yet, or don't want to.

AI is a powerful tool. Depending on what I need I use chatgpt, in-ide agents, or a platform like Devin.ai.

I use it when it helps me advance my goals. I don't when it doesn't. Sometimes it misses the mark and I scale back and have it do a specific piece and I'll do the rest.

Sometimes I use it to analyze the code base in seconds vs minutes. Sometimes I use it to pinpoint a bug fast.

I've solved customer issues in seconds or minutes with it vs hours.

I worked on a banking app with deeply domain-specific data issues. AI was not very helpful on that team. My current work on consumer web apps means my problems are more mundane, and AI is a big accelerant.

Being an engineer means solving the problems with the right tools and the right tradeoffs. It's why I use an IDE vs Notepad, use chatgpt for one-off scripts and "chat", and use agentic workflows for big, repetitive, or "boring" low-stakes tasks.

> have been able to successfully track the boost.

Let's get into the nitty-gritty on this: can you say how you did it? Because a lot of people think this is an unsolved problem.

For my team, it has been easy. We handle infrastructure for the entire org, so tickets are created for every request. We also have our own backlog for internal projects, so we can see burn rate, etc. The team hasn’t changed, and a lot of similar/same tasks that used to take half a day have been completely automated, to the point where we just do PR review after an initial ticket is created by other teams.

There are a lot of little things we’ve tracked, and it’s just faster to implement things now. To be fair, everyone on my team has a decade+ of professional experience (many more non-professional), and we understand the limitations of AI fairly well.

What kind of code is infrastructure in this context? Devops in a software company? Internal tooling in a software org?

What is your definition of faster to implement? Is it producing a plausible implementation, or is it faster at producing a correct and high quality implementation? Are you including time spent refactoring and fixing bugs in your metrics? If not, I think you are tracking a gut feeling rather than cold hard facts. I’m not saying this is easy to track, just saying that it’s hard to know for sure that you are really more productive with AI.

Thank you for sharing any info at all.

> to be fair, everyone on my team has decade+ professional experience (many more non-professional), and we understand limitations of AI fairly well.

I see this appear quite often in discussions of productivity, often enough to suggest that deep prior experience is central to getting productivity gains out of AI.

Not the same person, but it really depends on the project. E.g. I have some projects that involve working to large specification sets, where we can measure rate of delivery against the spec. If your spec is fuzzy and incomplete, then it gets hard, but then you have little insight into human productivity on those projects either.

[deleted]