Here's one for the AI supremacists:
Let's assume AI is 10x better than humans in accuracy, produces 10x fewer bugs, and is 1000x faster than a very capable software engineer.
Now imagine this: a car is traveling on a road that has 10x more bumps, but at a 1000x slower pace. Even though there are 10x the bumps, the ride feels less bumpy because you encounter them far less often.
Now imagine a road with 10x fewer bumps, but you're traveling at 1000x the speed. Your ride would be a lot bumpier.
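The arithmetic behind the analogy can be sketched in a few lines (all numbers here are illustrative assumptions, not measurements):

```python
# Back-of-the-envelope sketch of the bump analogy.
# Baseline values are hypothetical, chosen only to make the ratios visible.

BASELINE_BUMPS_PER_KM = 10.0  # assumed defect density for a human engineer
BASELINE_SPEED_KMH = 1.0      # assumed output speed for a human engineer

def bumps_per_hour(bump_factor: float, speed_factor: float) -> float:
    """Bumps encountered per hour = (bumps/km) * (km/h)."""
    return (BASELINE_BUMPS_PER_KM * bump_factor) * (BASELINE_SPEED_KMH * speed_factor)

human = bumps_per_hour(bump_factor=1.0, speed_factor=1.0)      # 10 bumps/hour
ai = bumps_per_hour(bump_factor=0.1, speed_factor=1000.0)      # 1,000 bumps/hour

# 10x fewer bumps per km, but you hit 100x more bumps per hour.
print(ai / human)  # 100.0
```

The rate of bumps you actually experience is the product of the two factors, so a 10x reduction in density is swamped by a 1000x increase in speed.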
That's agentic coding for you. Your ride will be a lot more painful. There's plenty of denial around this, but as time progresses it will become very hard to deny.
Lastly - vibe coding is honest, but agentic coding is snake oil [0]. The arguments for harnesses built from dozens of memory, agent, and skill files, with pages and pages of rules sprinkled through them, are wrong as well. That paradigm assumes LLMs are perfectly reliable, super-accurate rule followers, and that the industry's only problem is that we haven't been able to specify enough rules clearly enough.
Such a belief could only be held by someone who hasn't worked with LLMs long enough, or by a non-technical person who doesn't know how LLMs work. Seeing a highly technical community cling to such a mistaken belief system is deeply regrettable.
I will 100% agree with this. It is genuinely scary to see entire teams completely hand off all their coding, testing, and even design needs to AI. Not only do people lose their touch, they also push insane amounts of code every day. PRs become impossible for humans to review because they are too huge and add too much burden, so unsurprisingly people use AI to review them too. And with that amount of code churn, nobody knows what exactly is being implemented. I have seen first hand that as the codebase grows, tracing problems and actually debugging things when they go wrong gets incredibly rough and complex.
And the AI that has been helping all this time will suddenly stop helping with this one use case. I have experienced AI running in circles trying to find a root cause. It failed, and the user is left holding the bag. That is when you feel like you have been dropped into a vast ocean without a lifeboat. Then you have to start digging through those massive chunks of vibe-coded crap to understand what is going on.
AI is good at improving speed, but I am afraid we as engineers are taking it massively the wrong way. Everyone is letting it run on autopilot and do things completely from start to end. The better approach is for every piece of code it writes to be reviewed by its authors, who make sure they are not checking in crazy stuff day in and day out.
You took the words right out of my mouth. Thank you. Great example. I have ground away at AI for 14 hours a day on my own project for months. I've been using AI since GPT-2.
I maxed out the $200 Claude Max subscription, and before that I justified spending $100/day.
And it was worth it, but not because it wrote such good code: because I learned the lessons of software engineering fast. I had the exact ride you are describing. My software was incredibly broken.
Now I see all the cracks, lies, and "barking up the wrong tree" issues clearly.
Now I treat it as an untrustworthy search engine for domains I'm behind in. I also use next-edit prediction and auto-complete, but I don't let AI make any edits to my codebase anymore.