> The nice thing about ChatGPT is that I can ask for sources
And it will make them up just like it does everything else. You can’t trust those either.
In fact, one of the simplest ways to tell a post is AI slop is to check the sources listed at the end and find that they don't exist.
Asking for sources isn’t a magical incantation that suddenly makes things true.
> It isn’t guaranteed that content written by humans is necessarily correct either.
This is a poor argument. The overwhelming difference with humans is that you learn who you can trust about what. With LLMs, you can never reach that level.
> And it will make them up just like it does everything else. You can’t trust those either.
In tech-related matters such as coding, I've come to expect that every link ChatGPT provides as a reference or documentation will be wrong or nonexistent. I can count on the fingers of one hand the times I clicked a link to a doc from ChatGPT that didn't result in a 404.
I've had better luck with links to products on Amazon or eBay (or my local equivalent e-shop). But for tech documentation that's freely available online? ChatGPT just makes shit up.