In the past I've had GPT4 output references with valid DOIs. The problem was that the DOIs pointed to completely different (and unrelated) works, so you'd need to retrieve the canonical title and authors for each DOI and cross-check them against the citation.
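That cross-check can be automated against the Crossref REST API (`https://api.crossref.org/works/<doi>`), which returns the canonical metadata for a DOI. A minimal sketch; the helper names (`verify_doi`, `titles_match`) and the loose containment-based matching are my own assumptions, not any particular tool's method:

```python
# Sketch: cross-check an LLM-cited DOI against Crossref's canonical record.
# Assumed helpers (verify_doi, titles_match) are illustrative, not a real library.
import json
import re
import urllib.parse
import urllib.request


def normalize(title: str) -> str:
    """Lowercase and collapse punctuation/whitespace for a fuzzy title compare."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()


def titles_match(claimed: str, canonical: str) -> bool:
    """Loose match: equal after normalization, or one title contains the other."""
    a, b = normalize(claimed), normalize(canonical)
    return bool(a) and bool(b) and (a == b or a in b or b in a)


def verify_doi(doi: str, claimed_title: str) -> bool:
    """Fetch the canonical metadata for a DOI and check the claimed title."""
    url = f"https://api.crossref.org/works/{urllib.parse.quote(doi)}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        meta = json.load(resp)["message"]
    canonical = meta["title"][0] if meta.get("title") else ""
    return titles_match(claimed_title, canonical)
```

Even when `verify_doi` passes, it only confirms the reference exists and the title matches; it says nothing about whether the cited work actually supports the claim.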
A classic case.
I work on Veracity https://groundedai.company/veracity/ which does citation checking for academic publishers. I see stuff like this all the time in paper submissions. Publishers are inundated
And then make sure the arguments and evidence it presents are as the LLM represented them to be.
At which point it’s more of a hassle to use an LLM than not.