I get that you are being sarcastic, but let's actually consider your idea more broadly.
- Machine code
- Assembly code
- LLVM IR
- C code (high level)
- VM IR (byte code)
- VHLL (e.g. Python/JavaScript/etc.)
So, we already have hierarchical stacks of structured text. The fact that we are extending this to higher tiers is in some sense inevitable. Instead of snark, we could genuinely explore this phenomenon.
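The tiers above aren't hypothetical; you can watch one language level get lowered into the next from inside an ordinary interpreter session. A minimal sketch (using Python's standard `dis` module, purely as an illustration of the stack, not anything from the thread):

```python
import dis

# Top tier: a VHLL (Python) function, i.e. structured text for humans.
def add(a, b):
    return a + b

# One tier down: the VM IR (bytecode) that CPython actually executes.
# Opcode names vary between Python versions, so we only inspect them.
ops = [ins.opname for ins in dis.get_instructions(add)]
print(ops)
```

Each opcode is in turn implemented by C code, which the compiler lowers through its own IR to assembly and finally machine code, completing the hierarchy described above.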
LLMs are allowing us to extend this pattern to domains other than specifying instructions to processors.
And we're basically reinventing the wheel. You have to use very specific prompts to make the computer do what you want, so why not just, you know... program it? It's not that hard.
Natural language is trying to be a new programming language, one of many, but it's the least precise one imho.
> Natural language is trying to be a new programming language, one of many, but it's the least precise one imho.
I disagree that natural language is trying to be a programming language. I disagree that being less precise is a flaw.
Consider:
- https://www.ietf.org/rfc/rfc793.txt
- https://datatracker.ietf.org/doc/html/rfc2616
I think we can agree these are both documents written in natural language. They underpin the very technology we are using to have this discussion. It doesn't matter to either of us what platform we are on, or what programming language was used to implement them. That is not a flaw.
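To make that concrete: the natural-language spec is precise enough to code against directly. RFC 2616 section 6.1 describes the Status-Line as "HTTP-Version SP Status-Code SP Reason-Phrase". A toy parser derived straight from that prose (a sketch, not a conformant implementation of the RFC):

```python
def parse_status_line(line: str) -> dict:
    """Parse an HTTP/1.1 Status-Line per the grammar in RFC 2616 s.6.1.

    The spec says: three fields, separated by single spaces, where the
    Reason-Phrase may itself contain spaces -- hence split(" ", 2).
    """
    version, code, reason = line.rstrip("\r\n").split(" ", 2)
    return {"version": version, "code": int(code), "reason": reason}

print(parse_status_line("HTTP/1.1 404 Not Found"))
# -> {'version': 'HTTP/1.1', 'code': 404, 'reason': 'Not Found'}
```

Two independent implementations written from that same paragraph will interoperate, which is the whole point: the English text, not any particular codebase, is the source of truth.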
Biological evolution shows us how far you can get with "good enough". Perfection and precision are highly overrated.
Let's imagine a wild future, one where you copy-and-paste the HTML spec (a natural-language doc) into a coding agent and it writes a complete implementation of an HTML user agent. Can you say with 100% certainty that this will not happen within your own lifetime?
In such a world, I would prefer to be an expert in writing specs rather than to be an expert in implementing them in a particular programming language.