> you're wondering if the answer the AI gave you is correct or something it hallucinated

Worse, more insidious, and much more likely is that the model was trained on, or retrieves, an answer that is incorrect, biased, or correct only for some seemingly relevant but different scenario.

A nontrivial amount of content online is marketing material designed to appear authoritative, which may read like (a real example) “basswood is renowned for its tonal qualities in guitars”, written by a company making cheap guitars.

If we were worried about a post-truth era before, at least we had human discernment to fall back on. These new capabilities abstract that discernment away.

The sneaky part is that the signals we used to rely on for verification and credibility can easily be imitated.

This was always possible: an academic paper can already cite anything until someone tries to check it [1]. But now something that looks convincing can be generated more easily than something that has been properly verified. The social conventions evaporate, and we're left to check every reference individually.

In academic publishing, this may lead to a revision of how citations are handled. That has changed before and may well change again. But for the moment, it is very easy to create something that looks verified when it has not been.

[1] And you can put anything you like in footnotes.