Hmm, I think that's more difficult than using these tools to create software. If generated software doesn't compile, or does the wrong thing, you know there's an issue. But if the LLM gives you seemingly accurate information that's actually wrong, you have no way of verifying it short of consulting a human domain expert. The tech isn't reliable enough for either task yet; the difference is that software is cheap to verify mechanically, while general information is not.
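To make the asymmetry concrete, here's a minimal sketch. The `median` function below is a hypothetical example of plausible-looking LLM output with a subtle bug, and a one-line test is enough to expose it automatically. There's no analogous assert for a confident but wrong factual claim.

```python
# Minimal sketch: mechanical verification of generated code.
# `median` is hypothetical LLM output with a plausible bug: for
# even-length inputs it returns one middle element instead of
# averaging the two middle elements.

def median(xs):
    xs = sorted(xs)
    return xs[len(xs) // 2]  # subtly wrong for even-length lists

def test_median():
    assert median([1, 3, 2]) == 2       # passes: odd length is fine
    assert median([1, 2, 3, 4]) == 2.5  # fails: exposes the bug

if __name__ == "__main__":
    try:
        test_median()
        print("all tests passed")
    except AssertionError:
        print("test failed: generated code is wrong")
```

The test catches the bug in milliseconds with no human expertise required; checking whether a generated historical date or citation is real has no equivalent cheap, automated oracle.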