Tests are only rigorous if the correct intent is encoded in them. Perfectly working software can still be wrong if the intent was inferred incorrectly. I lean on BDD heavily, and there are a lot of little details it's possible to misinterpret going from spec -> code. If the spec were sufficient to fully specify the program, it would be the program, so there's plenty of room for error in the transformation.
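
A hypothetical illustration (the spec line and numbers are mine, not from any real project): a single BDD-style sentence admits two defensible readings, and nothing in the sentence itself forces the right one.

```python
# Hypothetical spec line: "Orders over $100 get free shipping."
# Two implementations, both defensible readings of the same sentence:

def free_shipping_exclusive(total: float) -> bool:
    # Reading 1: "over $100" means strictly greater than $100.
    return total > 100

def free_shipping_inclusive(total: float) -> bool:
    # Reading 2: "over $100" means $100 or more.
    return total >= 100

# The readings only disagree at the boundary the spec never mentions:
assert free_shipping_exclusive(100.00) != free_shipping_inclusive(100.00)
```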

Then I disagree with you

> You still have to have a human who knows the system to validate that the thing that was built matches the intent of the spec.

You don't need a human who knows the system to validate it if you trust the LLM to do the scenario testing correctly. And in my experience, it is very trustworthy in this respect.

Can you detail a scenario by which an LLM can get the scenario wrong?

I do not trust the LLM to do it correctly. We do not have the same experience with these tools, and you should not assume everyone shares yours. To me, your question makes no sense to ask.

We should be able to measure this. I think verifying things is something an LLM can do better than a human.

You and I disagree on this specific point.

Edit: I find your comment a bit distasteful. If you can provide a scenario where it can get things wrong, that's a good discussion point. I don't see many places where LLMs can't verify as well as humans. If I develop a new piece of business logic like "users from country X should not be able to use this feature", an LLM can very easily verify it by generating its own sample API call and checking the response.
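
A minimal sketch of the kind of check described above. The endpoint, the X-Country header, and the status codes are assumptions for illustration only, not from the original comment.

```python
# Sketch: verify "users from country X cannot use this feature" by issuing
# a sample API call and checking the response.
import requests

def feature_blocked_for(country_code: str, base_url: str = "https://api.example.com") -> bool:
    resp = requests.get(
        f"{base_url}/api/feature",
        headers={"X-Country": country_code},  # hypothetical geolocation header
        timeout=10,
    )
    return resp.status_code == 403  # expect the feature to be refused

if __name__ == "__main__":
    assert feature_blocked_for("XX"), "users from the blocked country should get a 403"
    assert not feature_blocked_for("US"), "users elsewhere should still have access"
```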

The whole point is that you can't 100% trust the LLM to infer your intent with accuracy from lossy natural language. Having it write tests doesn't change this; it only asserts that its view of what you want is internally consistent, and that view is still just as likely to be an incorrect interpretation of your intent.
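
A hedged sketch of what "internally consistent but wrong" can look like, using a made-up free-shipping rule: the generated code and the generated test share the same reading of "over $100", so the test passes without ever touching the author's actual intent.

```python
# Both the implementation and the test were written from the same (wrong)
# reading of "over $100", so the test passes and proves only consistency.
def free_shipping(total: float) -> bool:
    return total >= 100  # inclusive reading

def test_free_shipping():
    assert free_shipping(150.00)
    assert free_shipping(100.00)   # passes, but the author meant strictly over $100
    assert not free_shipping(99.99)

test_free_shipping()
```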

Have you worked in software long? I've been in eng for almost 30 years, started in EE. Can confidently say you can't trust the humans either. SWEs have been wrong over and over. No reason to listen now.

Just a few years ago, SWEs said code-gen LLMs were impossible. In the '00s, SWEs were certain no business would trust its data to the cloud.

OSes and browsers are bloated messes, insecure to the core. Web apps are similarly just giant string-mangling disasters.

SWEs have memorized endless amounts of nonsense about their role to keep their jobs. You all have tons to say about software but little idea what's salient versus what's just memorized nonsense parroted on the job.

Most SWEs are engaged in labor role-play, there to earn nation state scrip for food/shelter.

I look forward to the end of the most inane era of human "engineering" ever.

Everything in software can be whittled down to geometry generation and presentation, even text. End users can label outputs Mechanical Turk style and apply whatever syntax they want, while the machine itself handles arithmetic and Boolean logic against memory, and syncs output to the display.

All the linguistic gibberish in the typical software stack will be compressed[1] away, and all the SWE middlemen unemployed.

Rotary phone assembly workers have a support group for you all.

[1] https://arxiv.org/abs/2309.10668

> The whole point is that you can't 100% trust the LLM to infer your intent with accuracy from lossy natural language.

Then it seems like the only workable solution from your perspective is a solo member team working on a product they came up with. Because as soon as there's more than one person on something, they have to use "lossy natural language" to communicate it between themselves.

Coworkers are absolutely an ongoing point of friction everywhere :)

On the plus side, IMO nonverbal cues make it way easier to tell when a human doesn't understand something than when an agent doesn't.

>> The whole point is that you can't 100% trust the LLM to infer your intent with accuracy from lossy natural language.

You can't 100% trust a human either.

But, as with self-driving, the LLM simply needs to be better. It does not need to be perfect.

> You can't 100% trust a human either.

We do have a system of checks and balances that does a reasonable job of it. Not everyone in a position of power is willing to burn their reputation and land in jail. You don't check the food at the restaurant for poison, nor do you check whether the gas you put in your tank is OK. But you would if the cook or the gas manufacturer were as reliable as current LLMs.

Good analogy

> If the spec was sufficient to fully specify the program, it would be the program

A very salient concept with regard to LLMs and the idea that one can encode the program one wishes to see as natural-language English input. There's lots of room for error in all of these LLM transformations for the same reason.