Hmm, I tried some questions that would have been relevant to me in the recent past, and it flubbed pretty hard on all of them. Even worse, instead of just saying it doesn't know, it generates semi-plausible babbling or adjacent-but-not-actually-helpful knowledge.
I decided to give it another go and ask GPT-4 three questions which I needed to get answers to within the last few months.
Asking conceptually about DPO: https://chat.openai.com/share/6611454c-60de-4317-811b-2b7f31... - In this one it completely leaves out the actual trick that enables DPO, so I would say it has almost no information content. Someone who didn't know what DPO is and read this would incorrectly think they had learned something. - To learn about this, the right place was the original DPO paper and some follow-up work
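(For anyone wondering what trick I mean: DPO's core move is reparameterizing the RLHF reward as a scaled log-probability ratio between the policy and a frozen reference model, which collapses preference optimization into a plain logistic loss - no reward model, no RL loop. A minimal sketch, assuming per-sequence log-probs are already computed; the function name and numbers are just illustrative:)

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO's trick: the implicit reward of a response is
    beta * (log pi(y|x) - log pi_ref(y|x)), so fitting the
    Bradley-Terry preference model reduces to a logistic loss
    on the difference of policy/reference log-ratios."""
    chosen_ratio = logp_chosen - ref_logp_chosen
    rejected_ratio = logp_rejected - ref_logp_rejected
    margin = beta * (chosen_ratio - rejected_ratio)
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log(sigmoid(margin))

# When the policy prefers the chosen response more than the reference
# does, the loss is small; if it prefers the rejected one, the loss grows.
low = dpo_loss(-10.0, -14.0, -12.0, -12.0)   # policy favors chosen
high = dpo_loss(-14.0, -10.0, -12.0, -12.0)  # policy favors rejected
```

The point the chat response missed is exactly that middle step: the reward model is never trained, because it's defined implicitly by the policy itself.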
Asking about FSDP compatibility with LoRA: https://chat.openai.com/share/5f8892ea-61e6-496f-abda-d5a8ad... - In this one it just says a bunch of generic, vague things without answering the question. - The right place to learn the answer was digging through GitHub issue comments
Asking for details of the MegaBlocks mixture-of-experts setup: https://chat.openai.com/share/c010e630-ba08-407e-afb3-03df99... - Again it just says generic stuff that applies to mixture-of-experts in general, and leaves out everything that actually makes the MegaBlocks MoE different from a generic MoE - For this one I had to combine reading the paper with reading the MegaBlocks repo
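(The missing differentiator, for reference: MegaBlocks is a "dropless" MoE. Classic capacity-based formulations pad each expert to a fixed token capacity and drop the overflow; MegaBlocks instead sizes each expert's work to its actual load and computes it with block-sparse kernels. A toy sketch of just the routing-side difference, with made-up assignments - the real system does this with sparse matmuls, not Python lists:)

```python
from collections import Counter

def capacity_drop_count(expert_ids, num_experts, capacity_factor=1.0):
    """Capacity-based MoE (Switch/GShard-style): each expert handles at
    most `capacity` tokens per batch; tokens beyond that are dropped."""
    capacity = int(capacity_factor * len(expert_ids) / num_experts)
    load = Counter(expert_ids)
    return sum(max(0, n - capacity) for n in load.values())

def dropless_group(expert_ids):
    """MegaBlocks-style grouping: every token is routed, and per-expert
    groups are variable-sized; the paper handles the resulting ragged
    computation with block-sparse matmuls instead of padded dense ones."""
    groups = {}
    for token, expert in enumerate(expert_ids):
        groups.setdefault(expert, []).append(token)
    return groups

# Imbalanced routing: expert 0 is hot, so the capacity-based
# formulation drops tokens while the dropless one keeps all of them.
assignments = [0, 0, 0, 0, 0, 1, 2, 3]
dropped = capacity_drop_count(assignments, num_experts=4)
kept = sum(len(g) for g in dropless_group(assignments).values())
```

That capacity-factor/token-dropping tradeoff is exactly the thing the generic MoE summary never mentioned.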
So 0/3, and pretty dramatically. I was actually expecting it to get at least one of those. As far as I can tell, it didn't do anything different based on my specifying my background, either. I'd love to see any links to productive conversations that people can share.