I tried getting the AI to write the tests. It created placeholders that contained no code but still returned success.
Seems like QA is the new prompt engineering
Placeholder tests are a vague-objective problem. "Write tests" leaves too much room: the model satisfies the surface request, not the intent.
Explicit typed instructions close that gap: objective = "write tests that verify behavior X", constraints = "no placeholder returns, every assertion must check a real value", output_format = "one describe block per function". With those in separate blocks the model has nowhere to hide.
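To make the idea concrete, here's a minimal sketch of that structure as a plain prompt builder. This is a hypothetical helper, not flompt's actual API; the block names mirror the objective/constraints/output_format split described above.

```python
# Hypothetical prompt builder (not flompt's API): each instruction type
# gets its own labeled block, so the model can't satisfy one part of the
# request while silently ignoring another.

def build_prompt(objective: str, constraints: list[str], output_format: str) -> str:
    """Render typed instruction blocks into a single prompt string."""
    lines = [
        "## Objective",
        objective,
        "",
        "## Constraints",
        # one bullet per constraint, so each is individually checkable
        *[f"- {c}" for c in constraints],
        "",
        "## Output format",
        output_format,
    ]
    return "\n".join(lines)

prompt = build_prompt(
    objective="Write tests that verify behavior X",
    constraints=[
        "No placeholder returns",
        "Every assertion must check a real value",
    ],
    output_format="One describe block per function",
)
print(prompt)
```

Splitting the blocks this way also makes review easier: you can diff the constraints section alone when the model starts cutting corners.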
I built flompt for structuring prompts like this: https://github.com/Nyrok/flompt