> I use ChatGPT to learn about a variety of different things
Why do you trust the output? Chatbots are so inaccurate you surely must be going out of your way to misinform yourself.
I try to use my best judgment about what to trust. It isn't guaranteed that content written by humans is correct either. The nice thing about ChatGPT is that I can ask for sources, and sometimes I can rely on those sources to fact-check.
> The nice thing about ChatGPT is that I can ask for sources
And it will make them up just like it does everything else. You can’t trust those either.
In fact, one of the simplest ways to tell a post is AI slop is to check the sources listed at the end and find they don't exist (a check that's trivial to script; see the sketch below).
Asking for sources isn’t a magical incantation that suddenly makes things true.
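You can even automate that first pass. Below is a minimal sketch (Python standard library only; the URL is a made-up placeholder) that sends a HEAD request to each cited link and flags anything that 404s or whose domain doesn't resolve. It's only a heuristic, since some legitimate sites reject HEAD requests or block unknown clients, but a citation list that's mostly dead links is a strong tell.

```python
# Minimal sketch: check whether a post's cited URLs actually resolve.
# Heuristic only: some real sites reject HEAD requests or block bots,
# so treat failures as a hint, not proof.
import urllib.error
import urllib.request

def check_sources(urls):
    for url in urls:
        req = urllib.request.Request(
            url, method="HEAD", headers={"User-Agent": "source-checker/0.1"}
        )
        try:
            with urllib.request.urlopen(req, timeout=10) as resp:
                print(f"OK   {resp.status}  {url}")
        except urllib.error.HTTPError as e:
            # A 404 here is the classic sign of a hallucinated citation.
            print(f"BAD  {e.code}  {url}")
        except urllib.error.URLError as e:
            # The domain may not even exist.
            print(f"BAD  ---  {url} ({e.reason})")

# Hypothetical citation for illustration:
check_sources(["https://example.com/some-cited-paper"])
```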
> It isn’t guaranteed that content written by humans is correct either.
This is a poor argument. The overwhelming difference with humans is that you learn who you can trust about what. With LLMs, you can never reach that level.
> And it will make them up just like it does everything else. You can’t trust those either.
In tech-related matters such as coding, I've come to expect every link ChatGPT provides as reference/documentation to be simply wrong or nonexistent. I can count on the fingers of one hand the times I clicked a link to a doc from ChatGPT that didn't result in a 404.
I've had better luck with links to products from Amazon or eBay (or my local equivalent e-shop). But for tech documentation which is freely available online? ChatGPT just makes shit up.
Sure, but a chatbot will compound the inaccuracy.
Chatbots are more reliable than 95% of people you can ask, on a wide variety of researched topics.
Yeah... you're supposed to ask the 5%.
If you have a habit of asking random lay persons for technical advice, I can see why an idiot chatbot would seem like an upgrade.
Surely if you have access to a technical expert with the time to answer your question, you aren't asking an AI instead.
Books exist
ChatGPT exists
(I'm not saying not to read books, but seriously: there are shortcuts)
...and is unreliable, hence the origin of this thread.
If I want to know about the law, I'll ask a lawyer (ok, not any lawyer, but it's a useful first pass filter). If I want to know about plumbing I'll ask a plumber. If I want to ask questions or learn about writing I will ask one or more writers. And so on. Experts in the field are way better at their field than 95% of the population, which you can ask but probably shouldn't.
There are many hundreds of professions, most of them take a significant fraction of a lifetime to master, and even then there is usually a daily stream of new insights. You can't just toss all of that information into a bucket and expect it to outperform the < 1% of people who have studied the subject extensively.
When Idiocracy came out I thought it was a hilarious movie. I'm no longer laughing, we're really putting the idiots in charge now and somehow we think that quantity of output trumps quality of output. I wonder how many scientific papers published this year will contain AI generated slop complete with mistakes. I'll bet that number is >> 0.
Surely you don't call up and pay a lawyer every time you have an interest in or a question about the law; you google it. In what world do you have the time, money and interest to ask people about every single thing you want more information about?
I've done small plumbing jobs after asking AI if it was safe, and I've written the legal boilerplate the government wanted with the help of AI. It was faster, cheaper, and I didn't bother anyone with the most basic of questions.
Indeed. The level of intellectual dishonesty on this page is staggering.
In some evaluations, it is already outperforming doctors on text-based medical questions and lawyers on legal questions. I'd rather trust ChatGPT than a doctor who is barely listening, and the data seems to back this up.
The problem is that you don't know which evaluations, and you aren't qualified to judge them yourself. By the time you are that qualified, you no longer need the AI.
Try asking ChatGPT, or whatever your favorite AI supplier is, something difficult about a subject you are an expert in, on par with the kind of evaluation you'd expect a qualified doctor or legal professional to perform. Then check the answer given, and extrapolate to fields you are clueless about.
Sure, so long as the question is rather shallow. But how is this any better than search?
That's the funny thing to me about these criticisms. Obviously it is an important caveat that many clueless people need to be made aware of, but still funny.
AI will just make stuff up instead of saying it doesn't know, huh? Have you talked to real people recently? They do the same thing.