An important part of using an LLM is verifying its output, because these models are prone to making things up. But if you're using it precisely for the things you don't understand, how do you verify the output?