Can't wait for everyone to realize they've wasted a year+ messing with agents and experiencing a feeling of pseudo-productivity.

I can understand skepticism to a degree, and even fundamentally believing that AI is bad for all sorts of reasons, but I am becoming more and more perplexed at the certainty behind statements like this one. How are you so certain that AI development is this doomed? It just hasn't matched my experience at all, and I wonder what experience has driven you to this level of certainty that AI coding is doomed?

Is it just a philosophical belief that AI is morally bad? Or have you actually used AI to build things, and do you feel confident that you have explored the space enough to come to such a strong conclusion?

I have been writing code every day for over 30 years, and have been doing it professionally for over 20. I have seen fads come and go, and I have seen real developments change the way I work numerous times. The more experience I gain and the more projects I create with AI, the more certain I am that this is a lasting and fundamental change to how we produce software, and to how we use computers generally. I have seen AI get better, and I have seen myself get more proficient at using it to get real work done, work that has already been tested with real-world production workloads.

You can hate that it is happening, and hate the way working with AI feels, but that doesn't mean it is not providing real value for people and doing real work.


I'm a bit curious about these takes. Arguing in good faith: is the general assumption that people who use AI/agents/harnesses don't ship features? We've been all in on Claude Code since ~September, and have been able to successfully track the boost, i.e. the features we ship that get used in production, both on the infrastructure side and in business-logic implementations, frontend and backend.

I don't think people are wasting too much time. Although I do agree most of these posts are just bs, including this one. But AI development has been a thing across a lot of companies in the world.

You're replying to an account specifically created to post inflammatory AI takes (likely a bot anyway). So your attempt

> Arguing in good faith

will be futile, unfortunately.

I can take up a slightly weaker form of the argument in good faith: professionally it's a non-starter until private, open-source inference can be self-hosted and the ROI is clear enough to justify that investment.

And on the ROI side, despite trying things out regularly, I haven't seen positive ROI in the limited time I've dedicated to exploring the tools. I've restricted experimenting to 4 hours per month, because spending more than 2.5% of the month chasing productivity improvements that realistically seem to be 10-20% will quickly eat into those gains. After accounting for token costs, it ends up being a wash.
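A minimal back-of-the-envelope of that arithmetic, assuming a 160-hour working month; only the 4 h budget and the 10-20% speedup come from the comment above, and the token-cost conversion and the share of work where the tools actually apply are assumptions:

```python
# Rough ROI sketch for the 4 h/month experimentation budget.
# Only EXPLORE_HOURS and the 10-20% speedups come from the comment;
# everything else is an assumption for illustration.

MONTH_HOURS = 160       # full-time working month (assumption)
EXPLORE_HOURS = 4       # time spent evaluating tools (from the comment)
TOKEN_COST_HOURS = 2    # token spend converted into hours of pay (assumption)
AI_SHARE = 0.3          # fraction of work the tools actually help with (assumption)

for speedup in (0.10, 0.20):
    saved = MONTH_HOURS * AI_SHARE * speedup
    net = saved - EXPLORE_HOURS - TOKEN_COST_HOURS
    print(f"{speedup:.0%} speedup on {AI_SHARE:.0%} of work: "
          f"{saved:.1f}h saved, net {net:+.1f}h")
```

Under those assumptions the 10% case nets out slightly negative and the 20% case slightly positive, which is consistent with "it ends up being a wash".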

"I studied math 4 hours per month and I can confirm that mathematics is stupid"

You can't learn how to use _anything_ by experimenting 4 hours a month.

The poster provided numbers and thresholds they used to evaluate the utility of a business product.

With infinite time anything is possible, but since we live within constraints, discussing practical, real world thresholds or evaluation methods is a worthwhile use of our time.

Ignore the people who haven't figured out how to use AI yet or don't want to.

AI is a powerful tool. Depending on what I need, I use ChatGPT, in-IDE agents, or a platform like Devin.ai.

I use it when it helps me advance my goals. I don't when it doesn't. Sometimes it misses the mark and I scale back and have it do a specific piece and I'll do the rest.

Sometimes I use it to analyze the code base in seconds vs minutes. Sometimes I use it to pinpoint a bug fast.

I've solved customer issues in seconds and minutes with it vs hours.

I worked on a banking app with deeply domain-specific data issues. AI was not very helpful on that team. My current work on consumer web apps means my problems are more mundane, and AI is a big accelerant.

Being an engineer also means solving problems with the right tools and the right tradeoffs. It's why I use an IDE vs Notepad, use ChatGPT for one-off scripts and "chat", and use agentic workflows for big, repetitive, or "boring" low-stakes tasks.

> have been able to successfully track the boost.

Let's get nitty-gritty on this: can you say how you did it? Because a lot of people think this is an unsolved problem.

For my team, it has been easy. We deal with infrastructure for the entire org, so we have tickets created for every request. We also have our own backlog for internal projects, so we can see burn rate, etc. The team hasn't changed, and a lot of similar/same tasks that used to take half a day have been completely automated, to the point where we just do PR review after an initial ticket is created by other teams.

There are a lot of little things we've tracked, and it's just faster to implement things now. To be fair, everyone on my team has decade+ professional experience (many more years non-professional), and we understand the limitations of AI fairly well.

What kind of code is infrastructure in this context? DevOps in a software company? Internal tooling in a software org?

What is your definition of faster to implement? Is it producing a plausible implementation, or is it faster at producing a correct and high quality implementation? Are you including time spent refactoring and fixing bugs in your metrics? If not, I think you are tracking a gut feeling rather than cold hard facts. I’m not saying this is easy to track, just saying that it’s hard to know for sure that you are really more productive with AI.

Thank you for sharing any info at all.

> To be fair, everyone on my team has decade+ professional experience (many more years non-professional), and we understand the limitations of AI fairly well.

I see this appear quite often in discussions on productivity, to the point that one could conclude deep experience is central to the productivity gains.

Not the same person, but it really depends on the project. E.g., I have some projects that involve working to large specification sets, where we can measure rate of delivery against the spec. If your spec is fuzzy and incomplete, then it gets hard, but then you have little insight into human productivity on those projects either.


Right, just like all the productivity lost when people stopped using paper ledgers to mess around with these so-called 'databases'

I work on projects where we measure the output. There's nothing "pseudo" about it.

Tell me, what do you measure? Changes shipped? Lines of code? Customer satisfaction? Defect rate? MTTR? New engineer onboarding time/TTFC? Security/compliance audit turnaround time? Uptime? Employee retention? Rollback/forward-fix rates? Linter errors? Test coverage? Meaningful test coverage?

Depends on the client's maturity, but in some places, all of the above.

What are the numbers you are getting?

If I wanted to find the answer to my question, I would need to:

- open the browser

- google "john repo"

- find the website

- copy the repo name

- open the terminal

- cd

- git clone

- try to find the file I want

- read the whole file to find the answer

= answer

I now do:

- "john repo question" = answer

I treat it like Minecraft automation: it's just for funsies and to pass the time, haha.

I don't think agentic workflows are there yet, but implementing skills to manually call and use while working side by side with an AI is definitely nice. Our company is focused a lot right now on sandboxing and having safe skills.

I don't think we've gotten feature development right yet, but the review skills + Grafana skills they wrote have been pretty solid.


The trick is to not burn too much time worrying about the perfect skills and this and that. I see a lot of people filling skills with LLM junk, or overdoing rules that start confusing the LLM. Just try vanilla; see something you don't like? Then you make a skill and funnel the LLM to use it for that style of task, roughly along the lines sketched below. E.g., database work is a mixed bag with LLMs: they tend to do the work in totally different styles if you leave them unconstrained.
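A minimal sketch of what such a skill might look like, assuming Claude Code-style Agent Skills (a SKILL.md file with YAML frontmatter); the skill name and the rules themselves are hypothetical examples, not anything from the comment above:

```markdown
---
name: db-migrations
description: Use when writing or reviewing database migration files.
---

# Database work (hypothetical example rules)

- One schema change per migration file.
- Always generate the reversible down-migration alongside the up-migration.
- Use the project's existing migration tool; never write raw DDL inline.
```

The point is the funnel: the skill constrains style for one class of task, instead of stuffing global rules into every prompt.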

Agents are unbelievably useful at helping take over and refactor messy codebases, though. I just started taking over this monstrous nightmare of a codebase, truly ancient code, the bulk of it written 10+ years ago in PHP. With the use of Claude / Codex I was able to port over the vast majority of the existing legacy storefront and laid the groundwork for centralizing the 10-20k LOC mega-controller logic into reusable repo/service patterns.

Just shit that would've taken years previously is achievable in under a month.
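For anyone unfamiliar with the target shape, a minimal sketch of the repo/service split with hypothetical names (Python rather than PHP for brevity; `db.fetch_one` and `db.execute` stand in for whatever database driver the project uses):

```python
# Minimal repository/service sketch. All names are hypothetical;
# `db` stands in for an injected database handle.
from dataclasses import dataclass

@dataclass
class Order:
    id: int
    total_cents: int
    status: str

class OrderRepository:
    """Owns persistence: the only layer that knows about the database."""
    def __init__(self, db):
        self.db = db

    def find(self, order_id: int) -> Order:
        row = self.db.fetch_one(
            "SELECT id, total_cents, status FROM orders WHERE id = %s",
            (order_id,),
        )
        return Order(**row)

    def save(self, order: Order) -> None:
        self.db.execute(
            "UPDATE orders SET status = %s WHERE id = %s",
            (order.status, order.id),
        )

class OrderService:
    """Owns business rules; no SQL, no HTTP."""
    def __init__(self, repo: OrderRepository):
        self.repo = repo

    def cancel(self, order_id: int) -> Order:
        order = self.repo.find(order_id)
        if order.status == "shipped":
            raise ValueError("cannot cancel a shipped order")
        order.status = "cancelled"
        self.repo.save(order)
        return order
```

The controller then shrinks to request parsing plus a service call, which is what makes a 10-20k LOC mega-controller tractable to migrate incrementally.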

This.

Everything needs an element of human touch; I would mostly just run vanilla things. But if, let's say, I'm creating backup scripts, I meticulously outline the plan.

I couldn't agree more, if only because I know I already wasted months and pulled the plug :D

I'm sure lots of people felt this way about steam power too.

This will be another Microservices moment in our industry.

You haven’t made money from their use yet?

They will lie to themselves and deny it.

You’ll get downvoted for this hearsay!

I think you mean heresy. But maybe I don't get the reference you're making when you say hearsay

I'm wondering if there are anti-AI bots trolling the boards. Look at all the usernames on the anti-AI posts.

Or maybe the only people left opposing AI are so hardcore against it they form their identity (username) around it

ok bot403

Hearsay is a rumor or something that can't be verified.

I'm aware.