Pretty much, the article assumes people didn't build the wrong thing before AI. Except that happened all the time; it just happened more slowly, took longer to recognize as the wrong thing, and then building the right thing took longer too.
It's funny, because you could actually take that story and use it to market AI.
> I once watched a team spend six weeks building a feature based on a Slack message from a sales rep who paraphrased what a prospect maybe said on a call. Six weeks.
Except now with AI it takes one engineer six hours, people realize it's the wrong thing, and everyone moves on. If anything, I'd say it helps prove the point that typing faster _does_ help.
Sometimes being involved in the construction process allows you to discover all the (many, overlapping) ways it's the "wrong thing" sooner.
In the long term, some of the most expensive wrong-things are the ones where the prototype gets a "looks good to me" from users, and it turns out what they were asking for wasn't what they needed, or wasn't workable, for reasons that aren't visually apparent.
In other words, it's important to have many people look at it from many perspectives, and optimizing for the end-user/tester perspective at the expense of the inner-working/developer perspective might backfire. Especially when the first group knows something is wrong, but the second group doesn't have a clue why it's happening or how to fix it. Worse still if every day feels like learning a new external codebase (re-)written by (LLM) strangers.