> It's just one darn hallucinated citation for heaven's sake, not fraud or something.

It is fraud.

> It doesn't account for the substance or quality of their work at all.

References are part of the work. If you're making up the references, what else are you making up?

> People make mistakes and a good fraction of them can learn from those mistakes. There's no need to permanently cripple someone's ability to progress their life or contribute to humanity just because an AI hallucinated a reference one time in their life.

A one year ban is not permanent. Having a negative consequence for making poor decisions seems like an inducement to learn from the mistake?

In an ideal world, one would be keeping notes on references used while doing the research that led to writing the paper. Choosing not to do that is one poor decision.

Having a positive outlook, if asking an AI to provide references that may have been missed, one should at least verify the references exist and are relevant. Choosing not to do that is also a poor decision, even if one did take notes on references used while researching.

> In an ideal world, one would be keeping notes on references used

In a far less than ideal world, authors are referencing papers they've at least read the title and abstract of. In an ideal world, authors would only be referencing works they have read in their entirety. I don't think we need to live in the ideal world[0], but let's also not pretend the ideal world is even remotely out of reach. Let's also be honest that in the current setting a lot of citations are being used to make a work more likely to be accepted rather than for their utility to the paper. The average ML paper is now 8 pages and has >50 citations. That's crazy.

[0] References can be entire textbooks, which is potentially too high of a bar

Even as a human, you can still fuck up references.

I submitted a paper with a reference author as Elisio because I couldn’t read my own handwriting. After submitting, I double checked all the references through an LLM. It pointed out that their name was actually Enrique. Yes, you should probably double check your references before submitting, not after.

Point is, I didn’t even trust the LLM at first. But after verifying the mistake, I was embarrassed af. I resubmitted with the fixes before it went live, but ultimately, what’s the difference between “mistake” and “hallucination”?

Sounds like you could use a tool like Zotero.

With proper bibliography management tools, everything (that has one) is centered around the DOI.

In fact, if a DOI is present, it's trivial to verify authors, title, venue, year, pages etc.

Of course, some older and more obscure papers won't have a DOI, but the vast majority of research work has.
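As a minimal sketch of what that check can look like (assuming Crossref's public REST API at `api.crossref.org/works/{doi}`, which serves metadata for most registered DOIs), one might fetch the record and compare it against the bibliography entry. The `check_entry` helper and the field names in `entry` are illustrative, not part of any particular tool:

```python
import json
import urllib.request

CROSSREF_API = "https://api.crossref.org/works/"

def fetch_metadata(doi: str) -> dict:
    """Fetch the Crossref record for a DOI (network access required)."""
    with urllib.request.urlopen(CROSSREF_API + doi) as resp:
        return json.load(resp)["message"]

def check_entry(entry: dict, record: dict) -> list[str]:
    """Compare a bibliography entry against a Crossref record.

    Returns human-readable mismatches; an empty list means the
    entry checks out against the registered metadata.
    """
    problems = []
    # Crossref stores the title as a list of strings.
    title = (record.get("title") or [""])[0]
    if entry["title"].lower() != title.lower():
        problems.append(f"title mismatch: {entry['title']!r} vs {title!r}")
    surnames = {a.get("family", "").lower() for a in record.get("author", [])}
    for name in entry["authors"]:
        if name.lower() not in surnames:
            problems.append(f"unknown author: {name!r}")
    return problems

# Example with an inline Crossref-style record so this runs offline;
# in practice you would use record = fetch_metadata(entry["doi"]).
entry = {"doi": "10.0000/example", "title": "A Real Paper",
         "authors": ["Enrique", "Elisio"]}
record = {"title": ["A Real Paper"],
          "author": [{"given": "J.", "family": "Enrique"}]}
print(check_entry(entry, record))  # flags the misread surname "Elisio"
```

A hallucinated DOI fails even earlier: the API returns a 404, which is itself the verification.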

I assume they won't ban anyone automatically without a way to object. Using your example, I wouldn't assume they would enforce the ban if you object, explain your typo, and the corrected citation actually says what you cited. Mistakes like these are explainable; a completely hallucinated citation usually is not.

If you write your own paper (mostly) and choose your own references (because you've actually read the papers) you won't have a problem.

> It is fraud.

I think we are talking semantics here.

While fraud does require intention to deceive, I get the sentiment that hallucinated citations shouldn't be dismissed as simply carelessness. It should be something stronger than that: gross negligence or something MUCH stronger! There should absolutely be repercussions for this.

But let's not call it fraud. That word is reserved for something specific.

EDIT: someone else said "reckless disregard" equals intent or something to that effect. So I looked it up.

It appears that is the case. "Reckless disregard equals intent" in legal language.

But I am not sure if this particular clause should apply here. Perhaps it depends on what kind of research is being published? For example, if it is related to medical science and has real consequences for people's health, we could then apply it?

I do believe this policy is appropriate to deal with the reckless disregard of posting hallucinated references.

It's a conscious decision to not take the time to check your AI output, and instead waste a whole bunch of other people's time letting them essentially do that for you in duplicate.

Feels like that should disqualify you from participation for a bit. Intent or no intent.

100% agreed.

Doing your job poorly means giving more work to others and, consequently, stealing their time, their most precious asset.

Many here don't agree with this ban because they work in IT, where this immoral and antisocial behavior is normalized.

> Feels like that should disqualify you from participation for a bit. Intent or no intent.

Exactly! For a bit!

Yet this is not for a bit! This is a lifetime disqualification, and that's been my entire gripe the whole time! Is nobody reading this?

"The penalty is a 1-year ban from arXiv followed by the requirement that subsequent arXiv submissions must first be accepted at a reputable peer-reviewed venue."

That doesn't sound like a lifetime ban to me.

What? Then how long are they disqualifying them from submitting prior to acceptance, if not lifetime? It certainly doesn't say 1 year or something.

> Then how long are they disqualifying them from submitting prior to acceptance, if not lifetime?

Well, "lifetime ban" means "you are not allowed back in". Their ban specifically allows you back in (after a specified period) subject to fulfilling a single constraint.

It's conditional acceptance back in, which is not the same as a lifetime ban which is unconditional.

Mhm. Okay, honestly, I maybe don't have enough data to judge how much impact that requirement has.

I also haven't seen anything on how this works with multiple authors, which could go anywhere from draconian to weakening the entire thing.

The intent to deceive is there. The deception is lying when you submit it that it is a scholarly piece of work in which amongst many other things you know the citations are accurate. This false representation was knowingly and intentionally made at the time of submission.

The citation being incorrect is merely the proof of deception not the (relevant) deception itself.

Fraud is the correct description provided (and this is practically a guarantee) you intended to benefit from the submission of the paper (e.g. by bolstering your resume).

If I violate the letter of the ToS when clicking submit you can correctly argue that I have technically committed fraud! Yet that is almost never what anyone actually means when having discussions like this one.

Fraud in a scientific context generally refers to fabricated research results. At least personally I agree with GP that hallucinated citations are generally something akin to laziness, and thus not fraud but rather some sort of professional negligence.

Fraud in the scientific world has generally taken the form of fabricated results, but I don't agree that the word has transitioned away from the common and legal meaning of deception in order to get a benefit.

Even if it had though, I'd be perfectly comfortable calling this fraud in this discussion based on the common meaning of the word. Just because we're talking about a scientific context does not mean we need to use the scientific-jargon versions of words - we're not in a scientific context ourselves.

---

And I'd disagree that this is just about the "letter of the ToS". While that is perhaps a necessary component in order to prove the deception, this is really about the cultural expectations of the community that merely happened to have been encoded in the ToS. The fraud would still occur without the ToS, it would merely be next to impossible to show you didn't simply misunderstand the cultural norms and what your actions would lead others to believe.

I disagree with your implicit assertion regarding the common meaning of the term in this context. I believe that the term fraud as commonly used when discussing things in a scientific context has always (for at least my entire life) been taken to refer to knowingly and intentionally falsified research results (also falsified appointments, falsified affiliations, falsified authorships, etc).

> deception in order to get a benefit.

The point being that reckless or negligent conduct is not commonly taken to constitute deception. There's a reason we have different terms for these things.

Sure, you can say "well he exhibited reckless disregard for his professional duties when he opted not to bother reading the citation section that the LLM shat out, and reckless disregard is sufficient to meet the legal bar for fraud, and also the ToS specifically says that you certify that you validated all references manually so bam! two counts of fraud legally speaking" and you wouldn't be wrong but the distinction between "legally fraud" and "fraud as is commonly meant when talking about scientific papers" is essential to effective conversation in this particular instance.

> Just because we're talking about a scientific context does not mean we need to use the scientific-jargon versions of words

The context is essential because (obviously) it affects how people interpret the meaning of your words. A fraudulent submission to a scientific journal has a specific and well understood meaning in common usage.

If you still disagree with me imagine polling a bunch of tenured career researchers about what they would think if they read the statement "X caught submitting a fraudulent paper to journal Y". I can just about guarantee you that none of them are imagining hallucinated citations.

We're not "discussing things in a scientific context" here. We're in the context of a startup/programmer news aggregator discussing scientific news. We are not "a bunch of tenured career researchers" discussing amongst ourselves so the jargon appropriate for that context is not the appropriate jargon - rather we need to use the jargon that the startup/programmer news aggregator crowd would understand.

That said, even in a scientific context I still disagree, and your example at the end is a fine starting point. By comparison, imagine one of the profs told the others that their house was burgled. The others would probably be thinking that things like TVs or computers or money were stolen, not that the thief simply stole all their spoons. That doesn't make having all your spoons stolen not burglary. Likewise, the profs expect that the results or authorships are where the fraud occurred, because those are the best places to extract value with fraud, not by avoiding the simple act of writing the paper with correct citations. That doesn't mean fraudulently using an LLM to hallucinate a paper from your (we'll suppose for sake of argument) actual results is any less fraudulent though; it's just an unexpected form of fraud.

Edit: I want to be clear that this is not my argument: "well he exhibited reckless disregard for his professional duties when he opted not to bother reading the citation section". I see other people making that argument, and I'm not sure if they're right or wrong that that's another reason why it is fraud, but I'm certain that we don't even need to reach that question.

My argument is that it is fraud to represent the paper as a scholarly work when you don't know that it is correct. It is not that you are taking a risk it might be wrong; it is that you are actively representing that you know it is correct, and if you do not know that, you are committing fraud even if it happens to be so. This is a case of intentional deception, the deception being the representation that this is scholarly work, not reckless disregard for the truth as to the accuracy of the citations.

There are actually a surprising (IMO) number of career researchers on this site. Regardless, disregarding the context specific meaning will at absolute best result in a disjointed conversation where people are talking past each other. Worse, in this instance people are debating how arxiv (and other venues) ought to handle these sorts of things at which point you are well and truly into the territory where you need to get the field specific terminology right.

I concede that I was sloppy when I referred to what the researchers would be imagining. I should have phrased it as asking them if they thought that transgression X constituted fraud.

Regardless, hopefully you can see the idea that I was attempting to communicate? The burglary example isn't equivalent because while the spoons are unexpected the end result is still an event that most people would agree constituted burglary and resulted in noticeable harm to the victim.

I'm struggling to adjust your example on the fly but perhaps if it were the contents of the yard waste bin that had been pilfered? That's still technically burglary but I think most people would view it quite differently and might question the wisdom of prosecuting it.

I think the key difference here comes down to motivations as well as impact. Falsifying results (for example) is an active attempt to counterfeit the core value proposition of the endeavor and the end result of that is proportional - personal benefit directly as a result of the falsification and significant damages to anyone sufficiently bamboozled by the fiction long enough to base any decisions on it. Whereas no one using an LLM to generate just the bibliography is doing that to get ahead (at least not on its own) and any damages are limited to the reader wasting a few minutes trying to figure out the extent of the issue and who to contact about it.

I think (though might well be misunderstanding) that reckless disregard is taken to be an intentional choice but that it does not imply that the outcome itself was intentional. The difference between intentionally doing something that you know for a fact has a high risk of failure but you can't necessarily predict the outcome versus intentionally seeking a particular legally disallowed outcome.

But what LPisGood was saying is that reckless disregard (as opposed to explicit intent) is sufficient to meet the legal bar for fraud.

> In an ideal world, one would be keeping notes on references used while doing the research that led to writing the paper. Choosing not to do that is one poor decision.

In this book

https://news.ycombinator.com/item?id=44022957

there is this passage on p. 127:

"Any author citing another paper should be required to provide proof that they a) possess a copy of that paper, b) have read that paper, c) have read the paper carefully."

> It is fraud.

No, it is emphatically not. Fraud requires intent to deceive.

> A one year ban is not permanent.

...what text are you reading? Nobody was calling the one-year ban permanent, or even against it. I was literally in favor of it in my comment. I explicitly said it is already plenty sufficient. What I said is there's no need to go beyond that. My entire gripe was that they very much are going beyond that with a permanent penalty. Did you completely miss where they said "...followed by the requirement that subsequent arXiv submissions must first be accepted at a reputable peer-reviewed venue"?

Fraud requires intent to deceive _or_ reckless disregard, sometimes called, “conscious indifference” for the veracity of the statement asserted.

No. One single hallucinated citation on a document with you as an author is not evidence of your reckless disregard for anything. These exaggerations are crazy and you would absolutely deny such accusations if you missed your co-author's AI hallucinating a citation on your manuscript too. At best it would be careless, if you really relish extrapolating from one data point and smearing people's character based on that. Not reckless. It's quite literally the difference between going five miles per hour over the speed limit versus fifty.

If your co-author inserted the fraudulent reference, I agree that you may not have committed fraud. But your co-author did, and you didn't check their work. And knowing that you didn't check their work, you signed off on it.

You didn't pick your co-author very well, but arXiv lacks the investigative powers to determine which co-author was at fault, so they all get the consequence.

Do you think every co-author on a 100-author paper checks every citation? It's like saying that every member of a large software team personally reviews every line of code. It's just completely divorced from reality.

[dead]

I’ve disagreed with some of your other stances in this thread, but I want to acknowledge the validity of your take here.

You’re right that a single hallucinated line is not evidence of reckless disregard - because that could have happened on a final follow-up pass after you had performed due diligence. It’s happened to me. I know how challenging it can be to keep bad patterns out of LLM generated output, because human communication is full of bad patterns. It’s a constant battle, and sometimes I suspect that my hard-line posture actually encourages the LLM to regularly “vibe check” me! E.g. “Are you sure you’re really the guy you’re trying to be? Because if you are you wouldn’t miss this.” LLMs are devious, and that’s why I respect them so much. If you think they’re pumping the brakes then you should check again, because they probably just put the pedal to the metal.

That being said, I regularly insist on doing certain things myself. If I were publishing a paper intended to be taken seriously - citations would be one of the things I checked manually. But I can easily see myself doing a final follow-up pass after everything looks perfect, and missing a last minute change. I would hope that I would catch that, but when you’re approaching the finish line - that’s when you expect your team to come together. That’s when everything is “supposed to” fall into place. It’s the last place you would expect to be sabotaged, and in hindsight, probably the best place to be a saboteur.

You're saying it as if the poor author just had no choice but to let LLM write their bibliography. To avoid hallucinations, maybe just don't let an LLM write any part of your paper?

You can only get in this situation if you let a bullshit generator write your paper, and the fraud is that you are generating bullshit and calling it a paper. No buts. It's impossible to trigger this accidentally, or without reckless disregard for the truth.

Calling LLMs "bullshit generators" in the year 2026 just shows a lack of seriousness.

Not as much of a lack of seriousness as excusing away hallucinations as not that big of a deal in what's supposed to be a researched, scholarly body of work written by humans.

Not really - much of work consists of what David Graeber described as “bullshit jobs”. Now AI and its backers are proposing to automate all that bullshit.

[deleted]

And yet people are trying to defend LLM-generated made-up bullshit citations in scientific papers.

> You’re right that a single hallucinated line is not evidence of reckless disregard

It absolutely is.

> - because that could have happened on a final follow-up pass after you had performed due diligence.

A "final follow-up pass" that lets the LLM make whatever changes it deems appropriate completely negates all the due diligence you did before, unless you very carefully review the diffs. And a new or substantially changed citation should stand out in that diff so much that there's no possible excuse to missing it.

> It’s happened to me.

Then you were guilty of reckless disregard.

> I know how challenging it can be to keep bad patterns out of LLM generated output

If your research paper contains any LLM generated output you did not manually vet, you are a hack and should not get published.

Allowing hallucinated content or citations into your work is an act of carelessness and disregard for the time of people that are going to read your paper and it should be policed as such.

And flatly, if a person can't be bothered to check their damn work before uploading it, why should anyone else invest their time in reading it seriously?

How are you suggesting the fake citation came about? Why are you writing papers and not having actually read the source you took the material from?

> Why are you writing papers and not having actually read the source you took the material from?

They're explicitly not writing papers. The fake citations are created and inserted by the LLM

They are still purposely writing a paper, whether that is with the help of an LLM or not. They are instructing the LLM to do the task of finding citations. It's no different from googling for a paper that explains a specific point. You would still double check Google's output.

So you just write whatever you want, then find a source for it later? I don't think you understand how this is supposed to work

arXiv is not intended to be your blog. You should be held to a zero-mistake standard when publishing academic work.

The people I worry for are the junior researchers who are going to be splash damage for dishonest PIs. The PIs, though, deserve everything that’s coming for them.

Maybe I'm misunderstanding you, but zero-mistake seems harsh. I would say that AI references are a sign of something that is not simply a mistake.

However, we can have zero tolerance for certain techniques for "writing" a paper. Plagiarism and inventing data are already examples of this, if there is evidence for these techniques being used there is no excuse. We could say the same for AI references - any writing process that could produce these is by definition not a technique we want.

So the mistake isn't not checking a reference the AI gave. The mistake is letting the AI make references for you.

If we agree that academic research is important then I think we can impose certain standards on how you do it. We can disallow certain tools if that means we can't trust the output. Just like an electrician can't use certain techniques, even if they're easy, because we don't trust the final result.

If you are using AI-hallucinated references in scientific papers then there is some obvious intent to deceive there

> No, it is emphatically not. Fraud requires intent to deceive.

I'm about as pro AI-as-a-research-and-writing-assistant and anti AI-witchhunt as they come, but I simply cannot parse what I've quoted here.

Posting slop to arxiv is blatant deception. Posting an article is an attestation that the article is a genuine engagement with the literature. If you're posting things to arxiv that are not sincere engagements with the literature, you are attempting to deceive.

> I'm about as pro AI-as-a-research-and-writing-assistant and anti AI-witchhunt as they come, but I simply cannot parse what I've quoted here.

Ditto. And it's only 1 year. Like it's about the most reasonable thing they could have done.

> And it's only 1 year

No, it emphatically is not just a year! It's perpetual, and that's literally been my entire point this whole time. If it was just one year I would've had no complaints - and I made that clear from the very first comment!

What part of "...followed by the requirement that subsequent arXiv submissions must first be accepted at a reputable peer-reviewed venue..." is everyone here reading and still somehow interpreting to be limited to 1 year?

You are equating cutting corners (i.e., laziness) with intentional deception and not being genuine. That doesn't seem accurate to me. In most contexts I think cutting corners would be taken to be some form of negligence or recklessness.

Regardless of terminology, I agree that it's certainly punishable and certainly a serious problem.

[deleted]

> followed by the requirement that subsequent arXiv submissions must first be accepted at a reputable peer-reviewed venue"?

This part seemed reasonable too. I'm not in academia, but my understanding is most people writing papers intend for them to be accepted by reputable peer-reviewed venues, but post to arXiv because those venues don't always allow for simple distribution.

If your papers aren't going to be accepted at reputable venues and you posted slop to arXiv before (and they noticed it!), seems reasonable that they only want reputable stuff from you in the future?

it's very silly, but not a big deal. Arxiv is becoming irrelevant these days anyways.

In fact would be better if they just banned AI, so we could just get off the luddite platforms.

Automated research is the future, end of story. And really it couldn't have come out at a better time, given the increasingly diminishing returns on human powered research.

Poe's law striking hard.

If automated research is the future, it has to be research, not making stuff up.

Which of those two does "hallucinated references" fit into?