All of this agreed.
Now, in the age of AI, many students entering CS need to do this NOW; otherwise any answer they come up with in an interview will be assumed to have come from an AI, and they need to show that something useful came out of their blog post or research.
This is what it now means to know how to experiment, understand, and build knowledge, rather than spitting out an answer because it came from Stack Overflow or ChatGPT.
The mistakes are raw, and all the learnings end up in a blog post, which is what makes us human. Yet 90% of candidates do not do this, which is why most of them cannot explain an AI's mistakes in an interview when they use one.
I actually really like this idea. I've often found it odd that we don't show off reports or how we run experiments during interviews. This is surely a far better signal of aptitude than LeetCode.
I have a growing concern that people cannot see mistakes. That seems to be a bigger divide than "uses AI to code" vs. "doesn't".