I guess many find comfort in being able to task an AI with assignments that it cannot complete. Most senior developers I work with take this approach. It's not really a good way of assessing the usefulness of a tool, though.
He asked what he was doing wrong.
The tasks are too big. Break them down and proceed from there. Have it build out task lists in a TASKS.md, then review those tasks: do you agree? No? Work with it to refine them. Implement one by one. Have it add the tests. Refactor after a while, since {{model}} doesn't like to extract utility functions much. Right now I'm about 50k lines into a project that's vibecoded. I sit back and direct, and it plays.
Imagine the CS 100 exercise where they ask you to write instructions for making a PB&J. There are a lot of steps, but you determine the steps, implement each one, and make progress.
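For what it's worth, a minimal sketch of what such a TASKS.md might look like (the feature and task names here are made up for illustration, not from the actual project):

    # TASKS.md
    ## Feature: CSV import
    - [ ] Parse the uploaded file into rows (reject files over 10 MB)
    - [ ] Validate required columns; collect errors per row
    - [ ] Map valid rows to domain models
    - [ ] Add unit tests for the validator
    - [ ] Wire up the upload endpoint

The point is that each item is small enough to implement and review in one pass, which is exactly what the model handles well.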
Too big, and requiring too much niche-specific knowledge. You somehow have to inject that knowledge and let it iterate.
This is the way.
I run interviews at my company. We allow/encourage AI.
The number one failure mode is people throwing all of the requirements in upfront. They get one good pass, then fail.
I was part of a shop that did the Pivotal Way, and we had Inceptions where the PM, the engineers, and a tester or two would be sequestered in a conference room for the day to bang out task lists at mid-level fidelity. Technical considerations were debated, sometimes heatedly, but we never got into implementation, just structure and flow, to make sure it all jibed.
…this reeeeaaaallllyyyy feels like that
I'm inclined to agree with this approach, because someone not using AI who fails would likely fail for the same reasons. If you can't logically distill a problem into parts, you can't obtain a solution.