If that's what you want, why keep the LLM output rather than just the prompts? The prompts are faster to read, and as models improve you can regenerate "better" blog posts from them.

It's like insisting on reading the entire generated assembly after someone compiles their C++.