Unfortunately we are still in the prompt-optimization stage: garbage in, garbage out.
I hear this repeated so often that I feel like it's a narrative pushed by the sellers. A year ago you could ask for a glass of wine filled to the brim and you just wouldn't get it. It wasn't garbage in, garbage out; it was sensible in, garbage out.
The line where chatbots stop being sensible and start outputting garbage is moving, but more slowly than the average Joe would guess. You only notice it when you have an intuition about the answer before you see it, which requires a lot of experience across a range of complexity. Persistent newbies are the best spotters: they ask obviously basic questions alongside requests beyond what geniuses could solve, and only by getting a garbage answer and going through the process of realizing it's actually garbage do they build a wider picture of AI than even most power users, who tend to have more balanced queries.
Maybe. That could be true.
But the same doesn't happen with other tools. I'll give the exact same prompt to every LLM I have access to and look at the responses for the best one. Grok is consistently the worst. So if it's garbage in, garbage out, why are the other ones so much better at dealing with my garbage?
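For anyone who wants to run the same comparison, here's a minimal sketch. It assumes each provider exposes an OpenAI-compatible chat-completions endpoint; the PROVIDERS entries, base URLs, model names, and environment-variable names are placeholders you'd swap for your own, not real endpoints.

```python
# Minimal sketch: send one prompt to several LLMs and eyeball the answers side by side.
# Assumes each provider exposes an OpenAI-compatible /chat/completions endpoint.
# The base_url / model / key names below are placeholders -- substitute your own.
import os
from openai import OpenAI

PROVIDERS = {
    "provider_a": {"base_url": "https://api.provider-a.example/v1", "model": "model-a"},
    "provider_b": {"base_url": "https://api.provider-b.example/v1", "model": "model-b"},
}

def compare(prompt: str) -> dict[str, str]:
    """Send the same prompt to every configured provider and collect the replies."""
    replies = {}
    for name, cfg in PROVIDERS.items():
        client = OpenAI(
            base_url=cfg["base_url"],
            api_key=os.environ.get(f"{name.upper()}_API_KEY", ""),
        )
        resp = client.chat.completions.create(
            model=cfg["model"],
            messages=[{"role": "user", "content": prompt}],
        )
        replies[name] = resp.choices[0].message.content
    return replies

if __name__ == "__main__":
    for name, answer in compare("Draw a glass of wine filled to the brim.").items():
        print(f"--- {name} ---\n{answer}\n")
```

Nothing fancy: same prompt, every model, then you judge the outputs yourself.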
I think it's meant about the training stage, not inference.