I’m not sure I would agree. By the time you’ve written a full spec for it, you may as well have just written it in a high-level programming language anyway. You can make assumptions that minimise the spec needed… but programming APIs can have defaults too, so that’s no advantage.

I’d suggest that the Python code for your example prompt with reasonable defaults is not actually that far from the prompt itself in terms of the time necessary to write it.
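For illustration, here's roughly what I mean -- a minimal sketch, assuming the example prompt was along the lines of "fetch a couple of endpoints concurrently and combine the results" (the URLs and the merge step are placeholders I've made up):

    # Hypothetical endpoints; the actual prompt's details aren't shown here.
    from concurrent.futures import ThreadPoolExecutor
    import requests

    URLS = ["https://api.example.com/a", "https://api.example.com/b"]

    def fetch(url):
        resp = requests.get(url, timeout=10)  # library defaults for everything else
        resp.raise_for_status()
        return resp.json()

    with ThreadPoolExecutor() as pool:
        a, b = pool.map(fetch, URLS)

    combined = {**a, **b}  # placeholder "combine" step

Not much longer than the English description, and arguably less ambiguous.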

However, add tricky details like how you want to handle connection pooling, differing retry strategies, short circuiting based on one of the results, business logic in the data combination step, and suddenly you’ve got a whole design doc in your prompt and you need a senior engineer with good written comms skills to get it to work.
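To make that concrete, here's how the sketch above starts to balloon once those details have to be spelled out: an explicit retry policy, a sized connection pool, and short-circuiting on the first completed result. The requests/urllib3 knobs are real; the endpoints and the chosen values are invented:

    from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait
    import requests
    from requests.adapters import HTTPAdapter
    from urllib3.util.retry import Retry

    session = requests.Session()
    retries = Retry(total=5, backoff_factor=0.5, status_forcelist=[502, 503, 504])
    session.mount("https://", HTTPAdapter(max_retries=retries, pool_maxsize=20))

    def fetch(url):
        resp = session.get(url, timeout=10)
        resp.raise_for_status()
        return resp.json()

    urls = ["https://api.example.com/a", "https://api.example.com/b"]  # placeholders
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(fetch, u) for u in urls]
        # Short-circuit on whichever fetch finishes first.
        done, pending = wait(futures, return_when=FIRST_COMPLETED)
        for f in pending:
            f.cancel()
        result = next(iter(done)).result()
    # ...plus whatever business logic the combination step actually needs.

Every one of those decisions has to be written down somewhere, whether that's code or an increasingly legalistic prompt.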

> I’m not sure I would agree. By the time you’ve written a full spec for it, you may as well have just written it in a high-level programming language anyway.

Remember all those attempts to transform UML into code back in the day? This sounds sorta like that. I’m not a total genai naysayer but definitely in the “cautiously curious” camp.

Absolutely, we've tried lots of ways to formalise software specification and remove or minimise the amount of coding, and almost none of it has stuck other than creating high level languages and better code-level abstractions.

I think generative AI is already a "really good autocomplete" and will get better in that respect; I can even see it generating good starting points. But I don't think in its current form it will replace the act of programming.

Thanks. I view your comment as orthogonal to mine, because I didn't say anything about how easy or hard it would be for human beings to specify the problems that must be solved. Some problems may be easy to specify, others may be hard.

I feel we're looking at the need for a measure of the computational complexity of problem specifications -- something like Kolmogorov complexity, i.e., minimum number of bits required, but for specifying instead of solving problems.
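One way to make that precise (my own rough formalisation, so take it loosely): by analogy with Kolmogorov complexity,

    K(x) = min{ |p| : U(p) = x }

the length of the shortest program p that makes a universal machine U output x, you could define a specification complexity

    S(P) = min{ |s| : s unambiguously determines the required input/output behaviour of P }

i.e. the length of the shortest spec that pins the problem down, independent of how hard the problem is to solve.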

Apologies, I guess I agree with your sentiment but disagree with the example you gave, as I don't think it's well specified. My more general point is that there isn't an effective specification, which means that in practice there isn't a clear reward function. If we can get a clear specification (which we probably can, in proportion to the complexity of the problem, so long as we don't climb very far up the complexity curve), then I would agree we can get a good reward function.

> the example you gave

Ah, got it. I was just trying to keep my comment short!

Yeah, an LLM applied to converting design docs to programs seems like, essentially, the invention of an extremely high level programming language. Specifying the behavior of the program in sufficient detail is… why we have programming languages.

There’s the task of writing syntax, and there’s the task of telling the computer what to do; the first is just mechanical overhead on the second. People should focus on the latter (too much code is a symptom of insufficient automation or abstraction). Thankfully lots of people have CS degrees, not “syntax studies” degrees, right?

How about you want to solve sudoku, say. And you simply specify that you want the output to have unique numbers in each row, unique numbers in each column, and unique numbers in each 3x3 grid.
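For what it's worth, that declarative style already exists in constraint solvers. Here's a sketch using z3 (pip install z3-solver); the grid encoding and names are mine, and a real puzzle's givens would be added as extra equality constraints:

    from z3 import Solver, Int, Distinct, And, sat

    cells = [[Int(f"c_{r}_{c}") for c in range(9)] for r in range(9)]
    s = Solver()
    # Each cell holds a digit 1-9.
    s.add([And(1 <= cells[r][c], cells[r][c] <= 9)
           for r in range(9) for c in range(9)])
    # Unique numbers in each row and in each column.
    s.add([Distinct(row) for row in cells])
    s.add([Distinct([cells[r][c] for r in range(9)]) for c in range(9)])
    # Unique numbers in each 3x3 grid.
    s.add([Distinct([cells[3*br + r][3*bc + c]
                     for r in range(3) for c in range(3)])
           for br in range(3) for bc in range(3)])
    if s.check() == sat:
        m = s.model()
        print([[m[cells[r][c]].as_long() for c in range(9)] for r in range(9)])

You state the rules; the solver does the searching.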

I feel like this is a very different type of programming, even if in some cases it would wind up being the same thing.