You can direct LLMs to do test-driven development, though. Write several tests, then make sure the code matches them. Also make sure the agent organizes the code correctly.
The LLM obliges and writes a lot of useless tests. I have asked devs to delete several tests in the last day alone.
"I don't trust this giant statistical model to generate correct code, so to fix it, I'm going to have this giant statistical model generate more code to confirm that the other code it generated is correct."
I swear I'm living through mass hysteria.
A lot of the time, the act of specifying test criteria prevents developers from accidentally vibe coding themselves into a bad implementation. You can then read the tests and verify that the code does what you want. You can read the code!
I’m not saying it’s all hunky-dory, but you can use AI for straight-up test-driven development to catch edge cases and correct sloppy implementations before they even get coded by your giant chaos machine.
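A minimal sketch of the test-first flow being described: specify the tests before asking the agent for an implementation, then check the generated code against them. The `slugify` function here is a hypothetical example, not something from this thread.

```python
import re
import unittest


def slugify(title: str) -> str:
    # Implementation written (or LLM-generated) only AFTER the
    # tests below were specified, so the edge cases constrain it.
    slug = title.strip().lower()
    slug = re.sub(r"[^a-z0-9]+", "-", slug)  # collapse runs of junk
    return slug.strip("-")


class TestSlugify(unittest.TestCase):
    # The tests encode the intended behavior up front, so a sloppy
    # implementation fails loudly instead of slipping through review.
    def test_basic(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_punctuation_collapses(self):
        self.assertEqual(slugify("C++ -- the basics!"), "c-the-basics")

    def test_whitespace_only(self):
        self.assertEqual(slugify("   "), "")
```

Run with `python -m unittest` to verify the implementation against the criteria you wrote first; reading the tests is also how you audit what the agent was actually asked to satisfy.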
Well, yeah, you don't just make it bang out a bunch of useless code without monitoring it.
You instruct it to write the code you want written. You still have to know how to develop; it just makes you faster.