"He (the author) did not answer our questions asking if he used an LLM to generate text for the book. However, he told us, “reliably determining whether content (or an issue) is AI generated remains a challenge, as even human-written text can appear ‘AI-like.’ This challenge is only expected to grow, as LLMs … continue to advance in fluency and sophistication.”

Lol, that answer sounds suspiciously like it was LLM-generated as well...

It's true that "AI detection algorithms" are not particularly reliable.

It's also true that if you have fake CITATIONS in your work, such algorithms aren't necessary to know the work is trash - either it was written by AI or you knowingly faked your research, and it doesn't really matter which.

You would think that Springer did its due diligence here, but what is the value of a brand such as Springer if they let this AI slop slip through the cracks?

This is an opportunity for brands to sell verifiability, i.e., that the content they are selling has been properly vetted, which was obviously not the case here.

Back when I was doing academic publishing I'd use a regex to find all the hyperlinks, then a script (written by a co-worker, thanks again Dan!) to determine if they were working or not.

A similar approach should work with DOIs.
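A minimal sketch of that link-checking pass in Python (the regexes and the `url_is_live` helper are my own illustration, not the original script; real-world URL and DOI matching needs more care than these patterns):

```python
import re
import urllib.request

# Rough patterns for URLs and DOIs; production matching needs more care.
URL_RE = re.compile(r'https?://[^\s<>")\]]+')
DOI_RE = re.compile(r'\b10\.\d{4,9}/[^\s"<>]+')

def extract_links(text):
    """Pull candidate hyperlinks and DOIs out of a manuscript."""
    return URL_RE.findall(text), DOI_RE.findall(text)

def url_is_live(url, timeout=10):
    """Return True if the URL answers with a non-error HTTP status."""
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except Exception:
        return False
```

Running `extract_links` over the manuscript and `url_is_live` over each hit catches dead links, but as noted below, a resolving DOI is not the same as a correct citation.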

In the past I've had GPT-4 output references with valid DOIs. The problem was that the DOIs were for completely different (and unrelated) works. So you'd need to retrieve the canonical title and authors for the DOI and cross-check them.
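A sketch of that cross-check. Crossref's public `https://api.crossref.org/works/{doi}` endpoint is real and returns the registered metadata as JSON; the loose title comparison here is my own simplification (real matching would also compare authors and tolerate more variation):

```python
import json
import re
import urllib.request

def normalize(title):
    """Lowercase and drop punctuation so titles compare loosely."""
    return re.sub(r'[^a-z0-9 ]', '', title.lower()).strip()

def titles_match(claimed, canonical):
    """True if the claimed title loosely agrees with the registered one."""
    return normalize(claimed) == normalize(canonical)

def canonical_title(doi, timeout=10):
    """Fetch the registered title for a DOI from Crossref."""
    with urllib.request.urlopen(
        f"https://api.crossref.org/works/{doi}", timeout=timeout
    ) as resp:
        record = json.load(resp)
    titles = record["message"].get("title", [])
    return titles[0] if titles else None
```

A citation would pass only if the DOI resolves AND `titles_match(printed_title, canonical_title(doi))` holds, which catches exactly the valid-DOI-wrong-work failure described above.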

A classic case.

I work on Veracity https://groundedai.company/veracity/ which does citation checking for academic publishers. I see stuff like this all the time in paper submissions. Publishers are inundated.

And then make sure the arguments and evidence it presents are as the LLM represented them to be.

Not all journals require a DOI link for each reference. Most good ones do seem to have a system to verify the reference exists and is complete; I assume there’s some automation to that process but I’d love to hear from journal editorial staff if that’s really the case.

Why would one think that? All of the big journal publishers have had paper millers and fraudsters and endless amounts of "tortured phrases" under their names for a long, long time.

>> LLM-generated citations might look legitimate, but the content of the citations might be fabricated.

Friendly reminder that the entire output from an LLM is fabricated.

probably a better word here would be fabulated.

on edit: that is to say the content of the citations might be fabulated, while the rest is merely fabricated.

I didn't realize "fabulated" was a word. TIL, thank you. But in this case it doesn't sound like the right word; it means: "To tell invented stories, often those that involve fantasy, such as fables."

I think "confabulated" is more appropriate: "To fill in gaps in one's memory with fabrications that one believes to be facts."

Technically yes, but not all of it has lost grounding with reality?

You could say that about Alice in Wonderland.

Fabricate is a word with an ambiguous meaning. It can mean "make up", but also simply "produce".

I think in this situation both meanings apply: it produced made-up content.

I fabricated this reply out of my brain.

One of the potential uses of AI that I have most wanted is automated citation lookup and validation.

First check if the citation references a real thing. Then actually read and summarize the referenced text and give a confidence level that it says what was claimed.

But no, we have AI that is compounding the problem. That says something about misaligned incentives.

> One of the potential uses of AI that I have most wanted is automated citation lookup and validation.

Also one of the things AI is likely the least suited for.

The best I can imagine an AI doing is offering sources for you to check for a given citation.

>Also one of the things AI is likely the least suited for.

I agree, if we are using the current idea of AI as language models.

But that’s very limiting. I’m old enough to remember when AI meant everything a human could do. Not just some subset that is being deceptively marketed as potentially the whole thing.

Unfortunately not surprising, the quality of a lot of textbooks has been bad for a long time. Students aren't discerning and lecturers often don't try the book out themselves.

I agree. I feel that Springer is not doing enough to uphold their reputation. One example is a book on RL that I found.[1] It is clear that no one seriously reviewed the content of this book. Despite its clear flaws, they are charging 50+ euros for it.

https://link.springer.com/book/10.1007/978-3-031-37345-9

Yeah, ages ago, when I was doing typesetting, it was disheartening how unaware authors were of the state of things in the fields which they were writing about --- I'm still annoyed that when I pointed out that an article in an "encyclopedia" on the history of spreadsheets failed to mention Javelin or Lotus Improv it was not updated to include those notable examples.

Magazines are even worse --- David Pogue claimed Steve Jobs used Windows 95 on a ThinkPad in one of his columns, when a moment's reflection, and a check of the approved models list at NeXT would have made it obvious it was running NeXTstep.

Even books aren't immune, a recent book on a tool cabinet held up as an example of perfection:

https://lostartpress.com/products/virtuoso

misspells H.O. Studley's name as "Henery" on the inside front cover, as well as containing many other typos, myriad bad breaks, and pedestrian typesetting which poorly presents numbers and dimensions (failing to use the multiplication symbol or primes). And the duplicated photo they are unwilling to fix is enshrined in the excerpt which they publish online:

https://blog.lostartpress.com/wp-content/uploads/2016/10/vir...

where what should be a photo of an iconic pair of jewelers' pliers on pg. 70 is replaced with that of a pair of flat pliers from pg. 142. (Any reputable publisher would have done a cancel and fixed that.)

Sturgeon's Law: 90% of everything is crap, and I would be a far less grey, and far younger, person if I had back all the time and energy I spent fixing files mangled by Adobe Illustrator, or where the wrong typesetting tool was used for the job (the six weeks re-setting a book which the vendor had set in Quark XPress when it needed to be in LaTeX were the longest of my life).

EDIT: by extension, I guess it's now 90% of everything is AI-generated crap, 90% of what's left is traditional crap, leaving 1% of worthwhile stuff.

What reputation would that be?

It was, in part, Springer that enabled Robert Maxwell.

Understandably I'm becoming a bit dogmatic but I'll say it again, AIMA/PRML/ESL are still the best reference textbooks for foundational AI/ML and will be for a long time.

AIMA is Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig

PRML is Pattern Recognition and Machine Learning by Christopher Bishop.

ESL is Elements of Statistical Learning by Trevor Hastie, Robert Tibshirani and Jerome Friedman.

I saw this recently in some congress abstracts. I think it is just AI-generated content: the references look real but don't exist.

To imagine this driving a singularity, when meanwhile it's putting the final nail in science's coffin, together with paper spam and declining research rewards. They are going to hang us tech-priests from the lamp-posts when the consequences of this bullshit artistry hit home.

This seems like the very thing that AI advocates would want to avoid. It certainly doesn't fill me, as an outsider to the whole thing, with much confidence in the future of AI-generated content, but maybe I'm not the target sucker... err, I mean target demographic.

Bad news for old-school people who still love books as a learning resource.

If I make a citation verifier, will conference/journal people pay for it? First verify that the citation is legit, i.e. that the paper actually exists; after that, another LLM reads the cited paper and gives a rating out of 10 for whether it fits the context. [ONLY FOR LIT SURVEY]

No, they aren't paying the reviewers in the first place.

Given that the existence of a reference is fairly trivial to check, I'd wager the authors would not care enough to pay for this. As for 'fit', this is very much in the eye of the beholder, and a paper can be cited for the most trivial part. Overcitation is usually not seen as a problem. Omitting citations the reviewer considers 'essential', often from their own lab or circles, is seen as non-negotiable.

So the better 'idea' would be to produce a CYA citation assistant that for a given paper adds all the remotely plausible references for all the known potential reviewers of a journal or conference. I honestly think this is not a hard problem, but doubt even that can be commercialized beyond Google Ads monetization.

So given that the output of an LLM is unreliable at best, your plan is to verify that an LLM didn't bullshit you by asking another LLM?

That sounds... counterproductive

You’re offering to double-check measurements made with a bad ruler by using that same ruler.

Would it be possible to 'squat' the non-existent references and turbo-boost oneself into 'most cited author' territory? :)

So was the entire text machine-generated?

Or did they take a human-written text and ask a machine to generate references/citations for it?

Many write first and then find citations to fit what they said, rather than writing based on what citable sources suggest.

Why would anyone write a book then ask for citations?

Because collecting/formatting citations is not the most fun part of the writing process (?)

And maybe the authors were over-confident in the capabilities of current AI.

Springer? You mean the publisher we are currently fighting so they won't mess up our peer-reviewed research paper that we wrote and paid for the privilege for them to mess up (ehm, sorry "publish")? Colour me surprised.

We are approaching publishers' heaven, where AI reviewers review AI written books and articles (with AI editors fixing their style), allowing publishers to keep collecting billions from essentially mandatory subscriptions from institutions.

It's fine, because human readers will also be replaced with AIs that produce a quick summary ;)

My "Plagiarism Machine #1 Fan" shirt has people asking a lot of questions already answered by my shirt.

'Based on a tip from a reader, we checked 18 of the 46 citations in the book.' Why not just check them all?

They didn't just click a link. They contacted the supposed authors for comment. That would be a reason for not checking all of them.