No, nothing of value. If you ever want to lose faith in the future of humanity, search "@grok" on Twitter and look at all the interactions people have with it. Just total infantilism: people needing tl;drs spoon-fed to them, needing summarization and one-word answers because they don't want to read, arguing with it or whining to Musk when they don't get the answer that confirms what they already believe.
I bookmarked this example where it is confidently incorrect about a movie frame/screenshot:
https://x.com/Pee159604/status/1909445730697462080
Your example doesn't appear to contain a reply from grok, only a question.
It does, you just can't see it without logging in because Twitter is shit now.
https://xcancel.com/Pee159604/status/1909445730697462080
I was logged in but it wasn't showing; it is now.
> confidently incorrect
I disagree. Grok had a crack and got it wrong. LLMs get things wrong sometimes.
Besides, it said "likely from Species", which is guesswork. The original post is garbage. "Chimp the fuck out"... I don't even know what that means, so Grok didn't have much to go on when analysing the "green text".
The worst is when a dozen people in the replies to a post all ask Grok the exact same obvious follow-up question. Somehow, having access to an LLM has completely annihilated these commenters' ability to scroll down 50 pixels.
> needing summarization
Before we get too carried away disparaging those seeking summaries, it's common for people of all levels to want summary information. It doesn't mean they want everything summarized, or that they're bad people.
I'm not particularly interested in "tariffs: what are they good for, what's the history, examples good or bad"... so I asked grok for a summary. It gave me a decent one, concise and structured. I asked a few follow-ups, then went on with my life knowing a little more than nothing about tariffs. A win for summarized information.
> people needing tl;drs spoon-fed to them, needing summarization and one-word answers because they don't want to read
It's bad that this need exists. However, introducing this feature did not create the need. And since the need exists, fulfilling it is still better, because otherwise these people wouldn't get the information at all.
This is worse, because the AI slop is full of hallucinations, which they will now confidently parrot. No way in hell does this type of person verify or even think critically about what the LLMs tell them. No information is better than bad information. Less information, consumed while practicing the ability to use it critically, is better than an excess of bad information.
Do you have examples of recent models hallucinating when asked to summarize a text?
All the decent people I know have deleted their Twitter accounts; the kind of people you now see in the mentions are... not good people.
"@gork explain this tweet"