It's not something an LLM would do with simple ChatGPT-style prompts, but it's something I can imagine building an agentic system to do. It's not trivial, but it seems feasible with current-day LLMs.
You'd design the system to have this exact quality (among many others): clues for later quests get dropped earlier in the quest line. It's a matter of breaking the work up into separate prompts and iteratively refining the outputs.
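Roughly, the multi-pass idea could look like this: outline the whole quest line first so later payoffs are known up front, then go back and revise earlier quests to plant clues. The `llm()` function below is a hypothetical placeholder for whatever chat-completion API you'd actually call; it's stubbed here just to show the pipeline shape, not a real implementation.

```python
def llm(prompt: str) -> str:
    # Placeholder for a real LLM API call; returns a canned marker so the
    # pipeline structure is runnable without any API access.
    return f"[generated: {prompt[:50]}]"

def generate_quest_line(theme: str, n_quests: int) -> list[str]:
    # Pass 1: outline every quest up front, so we know the later payoffs
    # before any earlier quest is finalized.
    outline = [
        llm(f"Outline quest {i + 1} of {n_quests} about {theme}")
        for i in range(n_quests)
    ]
    quests = list(outline)
    # Pass 2: for each later quest, generate a foreshadowing clue and
    # iteratively revise an earlier quest to include it.
    for later in range(1, n_quests):
        clue = llm(f"Write a subtle clue foreshadowing: {outline[later]}")
        earlier = later - 1  # drop the clue one quest earlier in the line
        quests[earlier] = llm(
            f"Revise this quest to weave in the clue '{clue}': {quests[earlier]}"
        )
    return quests
```

With a real model behind `llm()`, you'd likely add a verification pass too (e.g. asking the model whether each planted clue actually connects to its later quest), since the first revision often won't land.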