I find the inverse as well - asking an LLM to be chatty ends up with much longer output. I've experimented with a few AI personalities, and telling it to be careful etc. matters less than telling it to be talkative.