> For your second example: treat interactions with LLMs as an ongoing conversation; don't expect them to give you exactly what you want the first time. Here the thing to do next is a follow-up prompt where you say "number eight looked like zero, fix that".
Personally, I treat that sort of mistake as a "misunderstanding" where I wasn't clear enough in my first prompt. So instead of adding another message (and growing the context further, making the responses worse with each message), I rewrite my first prompt to be clearer about that point and regenerate the assistant message.
Basically, if the LLM can't one-shot it, you weren't clear enough, and if you go beyond a total of two messages, be prepared for the quality of responses to sink fast. Even by the second assistant message, you can tell it's having a harder time keeping up with everything. Many models brag about their long contexts, but the quality of responses still feels a lot worse to me once you reach even 10% of the "maximum context".
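To make the "rewrite instead of append" idea concrete, here's a rough sketch assuming an OpenAI-style chat completions client; the model name is a placeholder, and I'm guessing the original task was something like rendering digits as ASCII art, which isn't stated in the thread:

```python
# Rough sketch of the "rewrite and regenerate" approach, assuming an
# OpenAI-style chat completions client. Model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()

def one_shot(prompt: str) -> str:
    # Always send a single, self-contained user message so the model
    # never has to juggle a growing pile of earlier turns.
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# First attempt: the rendered 8 came out looking like a 0.
draft = one_shot("Render the digits 0-9 as ASCII art, 5 rows tall.")

# Instead of appending "number eight looked like zero, fix that" as a
# follow-up turn, fold the correction into a clearer first prompt and
# regenerate from scratch, keeping the context at a single message.
final = one_shot(
    "Render the digits 0-9 as ASCII art, 5 rows tall. "
    "Make sure 8 and 0 are clearly distinct: 8 needs a visible middle bar."
)
```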
You also need to state your background somehow, and at what level you want the answer. I've often found the LLM would answer that what I'm asking is too complex and would take months to do. Then you have to say something like: ignore those constraints, assume I'm already an expert in the field, and outline a plan for how to achieve this and that. Then drill down on the plan points. It's a bit of work, but it's fascinating.
Or it would say that doing X involves very complex math and suggest you could do something simpler instead (and then proceed with a stripped-down solution that doesn't meet the goals). So you tell it to ignore the concerns about complexity and to assume that you understand all of it and that it's easy for you. Then it goes on to create a solution that actually has legs, though you still need to refine it further.
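For what it's worth, a reusable preamble along those lines can just live as a string you prepend to each prompt. The wording below is my own phrasing, not a quote from the thread, and the goal is a placeholder:

```python
# Hypothetical prompt preamble that states background up front and pre-empts
# the "this is too complex / would take months" deflection.
PREAMBLE = (
    "Assume I am already an expert in this field and that the underlying "
    "math is easy for me. Ignore any concerns about complexity or how long "
    "it would take. Outline a concrete, numbered plan for the goal below; "
    "we will drill down on each plan point afterwards."
)

goal = "<the thing you actually want to build>"  # placeholder
prompt = f"{PREAMBLE}\n\nGoal: {goal}"
```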