Prompt engineering is mostly structured thought. Can you write a lab report? Can you describe the who, what, when, where, and why of a problem and its solution?
You can get it to work with one-off commands or specific instructions, but I think those will be seen as hacks, red flags, prompt smells in the long term.
If I could do those things, I wouldn't be using an LLM to write for me, now would I?
You don’t let the LLM write prose for you; you get it to translate natural language into code somewhat coherently.
In this instance I'm assuming most of the "goblin" references were in prose rather than in source code, so the goal of this particular prompt edit was to improve the prose.
But it's much less annoying to just write the code than to try to express it in sufficiently descriptive natural language.
The converse is true for me, so YMMV.
skill issue