I think one-shotting a full feature is a losing battle today. You have to figure out the right step size for the situation and write prompts scoped to components of that size.
My prompts usually resemble tasks I could hand to a college student; the model just has a better grasp of the concepts and the professional lingo.
The benefit of this approach is that you end up knowing the code fairly well. You keep pace with the LLM, building a deeper understanding of the code you'll ultimately open a PR for. Then, when there's an incident, you know the code well enough to be tactical.
I have found that until I trust AI to develop code unsupervised, I need an equally strong mental model of everything it produces.