Hard disagree. LLMs are fantastic for fixing bad architecture that's been around for a decade because nobody was willing to touch it. I can have one write tons of sanity checks and then rewrite the functionality piece by piece, with far more verification than I'd get from most engineers.
It's not immediate, it still takes weeks if you want to actually do QA and roll out to prod, but it's definitely better than the pre-LLM alternatives.
Yeah but you care which is my exact point.
How is this different from every single technological iteration?
Because there is a certain point where the barrier to entry prevents meaningful competition once winner-take-all power laws start kicking in, and stability has hitherto been predicated on having a plurality of non-interrelated competitors, ensuring that no one person's quirks drive too much of society's theoretical output.
AI will make this dynamic worse, and it carries the extra danger that the default, banal way of applying the technology in fact encourages its application to that end.
I don't really see it that way, because most software companies overestimate the importance of fantastic software versus merely adequate software, and most of the time good sales development, support, and negotiation skills are what actually sell the product.
I also don't think the commodification of programming is a substitute for things like understanding your customers, having good taste in design, and designing software in a way that is maximally iterable.