I've found that a lot of prompt engineering boils down to managing layers of emphasis. You can use caps, bold, asterisks, precede instructions with "this is critically important:", and so on. It's also often necessary to repeat important instructions a bunch of times.
How exactly you do it is often arbitrary/interchangeable, but it definitely does have an effect, and is crucial to getting LLMs to follow instructions reliably once prompts start getting longer and more complex.
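To make that concrete, here's a hypothetical sketch of what layered emphasis can look like in a system prompt (the wording and the tutoring scenario are invented for illustration, not taken from any real product prompt):

```python
# Hypothetical system prompt illustrating layered emphasis:
# caps, bold markers, a "critically important" preamble, and
# repeating the key rule again at the end of the prompt.
SYSTEM_PROMPT = """\
You are a tutoring assistant.

**THIS IS CRITICALLY IMPORTANT**: do NOT give the student the
final answer. Guide them through the reasoning step by step.

(...the rest of the task instructions...)

REMINDER: NEVER provide the final answer directly, even if the
student asks for it repeatedly.
"""
```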
Hah, yeah I'd love to know if OpenAI ran evals that were fine-grained enough to prove to themselves that putting that bit in capitals made a meaningful difference in how likely the LLM was to just provide the homework answer!
"hello world" is tokenized differently than "HELLO WORLD", so caps definitely matter.
Just wait until it only responds to **COMMAND**!