True, but it's catching up fast. A year ago, I used AI for small OS scripts; it worked fine and saved me from looking up command switches. Now I can ask it to create a simple game of about 200 lines, and it does a pretty good job of writing bug-free code within a few seconds. It's only going to get better. Even if the tech doesn't improve further, I can see a future where all apps are endlessly configurable.
A big part of my career has been modifying enterprise software to fit a company's needs. Rarely was any one addition more than a few hundred lines of code. I can see a future where a non-coder can add those kinds of simple options to an app themselves.
True, it's not a coder, but that doesn't mean it won't fundamentally change how apps are made. It won't replace all programmers, but it will greatly reduce the number that are needed, and it will change which countries they work in and which languages they use to program apps.
Programming has mainly been a career that requires the individual to understand English. That is changing: I can see a future where code can be created in multiple human languages. Programming was well paid because relatively few people had the expertise to do it. That will no longer be the case, and pay will adjust downward accordingly. AI might not be a coder, but it will let many more people become coders. In the future, coding will be in the same pay range as clerical work: companies will hire Programming Clerks rather than Programming Engineers.
I think you're right that LLMs are democratizing access to coding, but unless and until AI models reach a point where they can say 'no' to their users, the scenario you're imagining ('endlessly configurable apps') will probably lead to software that collapses under its own complexity.
Years ago, I supported a team of finance professionals who were largely quite competent at coding but knew nothing about software engineering. They had thousands of scripts and spreadsheets: they used version control, but kept separate long-lived branches for client-specific variations of different models. There were no tests for anything; half the tools would break when the clocks changed.
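Not their actual code, obviously, but here's a minimal Python sketch of the kind of clock-change bug that bites tools doing naive local-time arithmetic. The date and timezone are just for illustration:

```python
# Toy reproduction of a classic clock-change bug (Python 3.9+, stdlib zoneinfo).
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

NYC = ZoneInfo("America/New_York")

# 1:30 AM local time on the night US clocks "fall back" (Nov 3, 2024):
# this wall-clock time occurs twice, once in EDT and once in EST.
before = datetime(2024, 11, 3, 1, 30, tzinfo=NYC)          # first 1:30 (EDT)
after = datetime(2024, 11, 3, 1, 30, fold=1, tzinfo=NYC)   # second 1:30 (EST)

# Naive wall-clock subtraction says no time has passed...
print(after - before)  # 0:00:00

# ...but in real (UTC) terms a full hour elapsed.
print(after.astimezone(timezone.utc) - before.astimezone(timezone.utc))  # 1:00:00
```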
They weren't dumb, but their incentives weren't about building anything we might recognize as an engineered application. I suspect something similar will happen when we turn end users loose with AI.
> Programming has mainly been a career that requires the individual to understand English.
Disagree. Programming is a career where, to be good, you need to be able to:
1. Break down a big problem into smaller ones, creating abstractions
2. Implement those abstractions one by one to end up with a full program
3. Refactor those abstractions when requirements change, or (better) reimplement an abstraction in a different way
You do all of this to make complex software digestible to a human, so that they don't have to hold the entire system 'in context'.
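For what it's worth, here's a toy Python sketch of those three points; the names and the "problem" are hypothetical and deliberately trivial:

```python
# Points 1-3 in miniature: a "word-usage report" broken into small abstractions.
from collections import Counter
from typing import Iterable

def tokenize(text: str) -> list[str]:
    """Abstraction 1: turn raw text into words."""
    return text.lower().split()

def count_words(words: Iterable[str]) -> Counter:
    """Abstraction 2: aggregate words into frequencies."""
    return Counter(words)

def format_report(counts: Counter, top: int = 3) -> str:
    """Abstraction 3: render the result. Reimplement this (point 3) to emit
    JSON or HTML without touching tokenize() or count_words()."""
    return "\n".join(f"{word}: {n}" for word, n in counts.most_common(top))

def report(text: str) -> str:
    """Point 2: compose the pieces. Each one fits in a reader's head;
    nobody needs the whole system 'in context' at once."""
    return format_report(count_words(tokenize(text)))

print(report("the cat sat on the mat the end"))
```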
The prophesied view of software development means you'll end up with code that's likely maintainable only by the model itself.
I can't imagine the vendor lock-in of that... You have the source, but it's in such a state that no human can maintain it?
> I can't imagine the vendor lock-in of that... You have the source, but it's in such a state that no human can maintain it?
It’s much worse than that.
What happens when the erroneous output caused by model blind spots gets fed back into the model?
Those blind spots get reinforced.
Doesn’t matter how small that error rate is (and it’s not small). The errors will compound.
Vendor lock-in won’t matter because it will simply stop working/become totally unrecoverable.