AIs that were trained on data obtained through naughty channels actively avoid citing sources and full passages of reference text, otherwise they'd give the game away. This seems to increase the chance of them entirely hallucinating sources too.
Have you used one recently? The big providers all cite sources if given a research prompt.
Unfortunately, the citations are generally quite low quality and, in my experience, frequently fail to support the text they're attached to.
This is on par with humans, honestly. I've dug into studies cited by consulting firms that were 100% false.
In my experience they just add random links at the bottom that are often unrelated to the response; there's absolutely no guarantee that the model actually read them or based its response on them.
Sometimes they hallucinate them, or, if the sources do exist, they're blatant nonsense (like state-owned propaganda, such as RT) or don't support the claims made in the output.
What's worse is when they cite clearly LLM-generated articles from the web.
wtf are you asking LLMs that you're regularly running into "state-owned propaganda" in the references? My "blatant nonsense" detector is going off...
My favorite is when it cites 5 sources, and 1 of them is a real source, while the other 4 are short-form junk that point back to the first one as their source. So basically it's just picked one article and summarized it for you, without pulling info from anywhere else. Bonus points when I type the exact same prompt into a search engine and that 1 source is the top result anyway.
https://www.euronews.com/next/2025/11/04/ai-chatbots-are-spe...
Original study: https://www.isdglobal.org/digital-dispatch/investigation-tal...
Do you people even use the models or do you just lie about them?
https://chatgpt.com/share/6984c899-6cc4-8013-a8f6-ec204ee631...
You're using the Research model, which isn't available to free users. As a pupil myself, I can vouch for the fact that nobody here is using the Research models.
Even if a pupil does pay, they'll either be too lazy to wait the nearly 10 minutes it takes for the AI to do its research, or they actually care about getting good grades and therefore won't outsource their research to AI.
You can replicate this on the free tier. You should try it. I'm just pointing out that the loudest anti-AI voices often either haven't used the models at all or are basing their bad opinions on outdated versions. Any opinion formed about ChatGPT back on GPT-3.5 is basically irrelevant at this point.