I bet, since this has been posted, someone here has already vibe coded a reference checker that they plan to put behind a subscription.

This is good for reference checking, but I doubt it will do much for the shoddy science that most likely accompanies hallucinated references.

The frontier LLMs are getting pretty good at checking this sort of thing. You could prompt them not only to verify that the references are real but also that they actually say what the article claims. Some human review will still be needed, but I'll bet this approach could find a lot of academic fraud.

Your approach is good for catching stuff that human reviewers might miss, not as a first-line, default-only check. The whole reason this is happening is that humans are not doing their job. Your solution (more humans not doing their job) just widens the scope of the problem.

why is the standard response to "this tech isn't reliable enough for this" to run its output through the same unreliable tech?

The device-fixer started breaking devices instead of fixing them. Tell it to fix itself!

Yeah...

The number of people who confidently tell on themselves in these discussions continues to bum me out.

why is the standard response when someone comes down with a serious illness to bring them into a facility where serious illnesses spread readily?

sometimes the presently available solutions are subpar. people go with what's available. it's not ideal, but it is practical.

> The frontier LLMs are getting pretty good at checking this sort of thing.

No, this is career-ending, high-stakes stuff. It requires old-school "actually check a record of reality" methods, like a database query or an HTTP GET to one of the many services that hold this info.
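A minimal sketch of that "check a record of reality" approach, using Crossref's public works API (a real, free metadata service) to look up a DOI; the helper names and the rough DOI regex are my own:

```python
import re
from urllib.parse import quote

CROSSREF_API = "https://api.crossref.org/works/"  # public bibliographic registry

DOI_RE = re.compile(r"^10\.\d{4,9}/\S+$")  # rough shape check, not full validation

def crossref_url(doi: str) -> str:
    """Build the Crossref lookup URL for a DOI, rejecting malformed input."""
    if not DOI_RE.match(doi):
        raise ValueError(f"not a DOI: {doi!r}")
    return CROSSREF_API + quote(doi)

def title_matches(record: dict, claimed_title: str) -> bool:
    """Compare a paper's claimed title against the Crossref record (case-insensitive)."""
    titles = record.get("message", {}).get("title", [])
    return any(t.strip().lower() == claimed_title.strip().lower() for t in titles)
```

You'd fetch `crossref_url(doi)` with any HTTP client and feed the parsed JSON to `title_matches`; a 404 or a title mismatch flags the citation for a human.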

I think they're saying that frontier LLMs may be usable to spot citations that are correct by shape (a real citation) but incorrect by usage (unrelated to the text)

I kind of hate the idea, but you probably could do a lazy LLM check of every paper and every citation and have it flag possible wrong (second sense) citations for human review

But you'd need a LOT of tokens and a LOT of human-hours

> have it flag possible wrong (second sense) citations for human review

And then what, we're done? How have we avoided the need for the same exhaustive human review? It only saves human review time if you trust the LLM not to miss things.

If the goal is to review every citation fully with 100% accuracy, then, sure, exhaustive human review is needed. But I suspect human review of a random sample would add value, catching some fraud, missing others, but having zero false positives (or as close to zero as human review can get).
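The random-sample review is trivial to set up; here's a sketch (function name and 10% default rate are illustrative):

```python
import random

def sample_for_review(citations: list, rate: float = 0.1, seed=None) -> list:
    """Pick a random subset of citations for human spot-checking.

    A sample won't catch everything, but anything it does flag gets
    human verification, keeping false positives at (or near) zero.
    """
    rng = random.Random(seed)  # seed makes the sample reproducible/auditable
    k = max(1, round(len(citations) * rate))
    return rng.sample(citations, k)
```

Even a small sample changes the incentives: if fabricating citations carries, say, a 10% chance of a career-ending discovery per paper, the expected cost goes way up.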

An LLM could replace the random sampling. It doesn't need to be particularly good for the approach to provide value. I would worry about LLM bias though.

Another thing to consider is that readers can detect fake citations after publication, report to arXiv, and the author gets banned.

LLMs can make tool calls that run database and HTTP queries to search for, buy, and cross-reference a citation.