Does it?
I use AI as a smart autocomplete - I’ve tried multiple tools on multiple models and I still _regularly_ have it dump absolute nonsense into my editor - in the best case it’s gone on a tangent, but in the most common case it’s assumed something (oftentimes directly contradicting what I’ve asked it to do), gone with it, and lost the plot along the way. Of course when I correct it, it says “you’re right, X doesn’t exist so we need to do X”…
Has it made me faster? Yes. Has it changed engineering? Not even close. There’s absolutely no world where I would trust what I’ve seen out of these tools to run in the real world, even with supervision.
When you have that hair-raising “am I crazy, why are people touting AI” feeling, it’s good to look at their profile. Oftentimes they’re caught up in some AI play. It’s also good to remember YC has heavy investments in gen AI, so this site is heavily biased.
Context is king, too: in greenfield startups where you care little about maintenance and can accept redundant frontend frameworks and backend languages? I believe agent swarms can poop out a lot of code relatively quickly… Copy and paste is faster, though. Downloading a repo is very quick.
In startups I’ve competed against companies with 10x and 100x the resources and manpower building the same systems we were. The amount of code they theoretically could push wasn’t helping them; they were locked to the code they had actually shipped and were in a downward hiring spiral because of it.
Here’s the thing - an awful lot of it doesn’t even compile/run, never mind do the right thing. My most recent example was asking it to use Terraform to run an Azure Container App with an environment variable in an existing app environment. It repeatedly made up where the environment block goes, and Cursor kept putting the actual resource in random places in the file.
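For reference, here’s roughly what the tool kept getting wrong: in the azurerm provider, `env` belongs inside `template > container`, not at the top level of the resource. A minimal sketch (all names, the registry URL, and the resource group are placeholders, not from my actual setup):

```hcl
# Look up the existing Container App environment rather than creating one.
data "azurerm_container_app_environment" "existing" {
  name                = "my-existing-env" # placeholder
  resource_group_name = "my-rg"           # placeholder
}

resource "azurerm_container_app" "example" {
  name                         = "my-app"
  container_app_environment_id = data.azurerm_container_app_environment.existing.id
  resource_group_name          = "my-rg"
  revision_mode                = "Single"

  template {
    container {
      name   = "app"
      image  = "myregistry.azurecr.io/app:latest" # placeholder
      cpu    = 0.25
      memory = "0.5Gi"

      # The env block goes here, nested inside the container block -
      # this is the part the AI kept inventing other locations for.
      env {
        name  = "MY_VAR"
        value = "my-value"
      }
    }
  }
}
```

It’s not obscure - it’s right there in the provider docs - which makes the repeated hallucination all the more telling.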