Common business-oriented language (COBOL) is a high-level, English-like, compiled programming language.
COBOL's promise was that it was human-like text, so we wouldn't need programmers anymore.
The problem is that the average person doesn't know what their actual problems are in sufficient detail to get a working solution. When you get down to breaking down that problem... you become a programmer.
The main lesson of COBOL is that it isn't the computer interface/language that necessitates a programmer.
Agreed, the programmer is not going away. However, I expect the role is going to change dramatically and the SDLC is going to have to adapt. The programmer used to be the non-deterministic function producing the deterministic code, with multiple levels of testing, from unit to acceptance, to come into close alignment with what the end-user actually intended as their project goals.

Now the programmer uses a probabilistic AI to generate definitive tests so that it can then non-deterministically create deterministic code to pass those tests, all to meet the indefinite project goals defined by the end-user. Or will there be another change in role, where the project manager is the one using the AI to write the tests, since they have a closer relationship to the customer, and the programmer is the one responsible for wrangling the code until it validates against those tests?
I predict the main democratization change is going to be how easily people can make plumbing that doesn't require--or at least doesn't obviously require--such specificity or mental modeling of the business domain.
For example, "Generate me some repeatable code to ask system X for data about Y, pull out value Z, and submit it to system W."
What happens when value Z is not >= X? What happens when value Z doesn't exist, but values J and K do? What should be done when...
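To make those edge cases concrete, here's a minimal sketch of the hypothetical "ask X, extract Z, submit to W" glue task. Everything here (`fetch_from_x`, `submit_to_w`, `THRESHOLD`, the response shape) is an illustrative placeholder, not a real API -- the point is that each branch is a policy decision someone has to make:

```python
# Hypothetical glue task: fetch from system X, extract value Z, submit to W.
# All names and the response shape are made up for illustration.

THRESHOLD = 100  # the "X" that value Z gets compared against


def fetch_from_x(query: str) -> dict:
    # Stand-in for a real call to system X.
    return {"y": {"z": 150}}


def submit_to_w(value: int) -> None:
    # Stand-in for a real call to system W.
    print(f"submitted {value}")


def glue(query: str) -> str:
    data = fetch_from_x(query).get("y", {})
    z = data.get("z")
    if z is None:
        # Z doesn't exist -- maybe J and K do, but only a human
        # (or a written spec) can say whether those are acceptable substitutes.
        return "escalate: Z missing"
    if z < THRESHOLD:
        # Z is not >= X -- skip? retry? alert? Another policy decision.
        return "escalate: Z below threshold"
    submit_to_w(z)
    return "submitted"
```

Each `escalate` branch is exactly the kind of "What should be done when..." question the prompt never answered.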
I hear what you're saying, but I think it's going to be entertaining watching people go "I guess this is why we paid Bob all of that money all those years".
This seems needlessly nitpicky. Of course there will be edge cases, there always are in everything, so pointing out that edge cases may exist isn't helpful.
But it stands to reason that it would be a huge shift if a system accessible to non-technical users could mostly handle those edge cases, even when "handle" means failing softly without taking the entire thing down, or simply raising them for human intervention via Slack message or email or a dashboard or something.
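That "fail soft and flag a human" pattern doesn't have to be elaborate. A minimal sketch (a real system might post to Slack or a dashboard where this just logs):

```python
import logging

logger = logging.getLogger("glue")


def run_safely(step, *args):
    """Run one pipeline step; on failure, record it for human follow-up
    instead of taking the whole pipeline down. A real system might post
    to Slack or a dashboard here -- this sketch just logs the traceback."""
    try:
        return step(*args)
    except Exception:
        logger.exception("step %r failed; flagged for human review", step)
        return None
```

The whole pipeline keeps limping along, and the failures pile up somewhere a person will eventually look.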
And Bob's still going to get paid a lot of money; he'll just be doing stuff that's more useful than figuring out how negative numbers should be parsed in the ETL pipeline.
Hence the "not obviously require" bit: some portion of that "simply gluing things together" work will not actually be simple. It'll work for a time until errors come to a head, then suddenly they'll need a professional to rip out the LLM asbestos and rework it properly.
That said, we should not underestimate the ability of companies to limp along with something broken and buggy, especially when they're being told there's no budget to fix it. (True even before LLMs.)
> when value Z is not >= X?
Is your AI not even doing try/catch statements? What century are you in?
> The problem is that the average person doesn't know what their actual problems are in sufficient detail to get a working solution. When you get down to breaking down that problem... you become a programmer.
Agreed. I've spent the last few years building an EMR at an actual agency and the idea that users know what they want and can articulate it to a degree that won't require ANY technical decisions is pure fantasy in my experience.
Right now with agents this is definitely going to continue to be the case. That said, at the end of the day engineers work with stakeholders to come up with a solution. I see no reason why an agent couldn't perform this role in the future. I say this as someone who is excited but at the same time terrified of this future and what it means to our field.
I don't think we'll get there by scaling current techniques (Dario disagrees, and he's far more qualified, albeit biased). I feel that current models are missing critical-thinking skills you'd need to fully take on this role.
> I don't think we'll get there by scaling current techniques (Dario disagrees, and he's far more qualified albeit biased).
If Opus 4.6 had 100M context, 100x the throughput, 100x lower latency, and 100x cheaper $/token, we'd be much closer. We'd still need to supervise it, but it could do a whole lot more just by virtue of more I/O.
Of course, whether scaling everything by 100x is possible given current techniques is arguable in itself.
> I see no reason why an agent couldn't perform this role in the future.
Yea, we'll see. I didn't think they'd come this far, but they have. Though the cracks I still see seem to be more or less inherent to how LLMs work.
It's really hard to accurately assess this given how much I have at stake.
> and he's far more qualified albeit biased
Yea, I think biased is an understatement. And he's working on a very specific product. How much can any one person really understand the entire industry or the scope of all its work? He's worked at Google and OpenAI. Not exactly examples of your standard line-of-business software building.