In the future, we'll probably lose the ability to verbalize or construct sentences because our thoughts will be directly understood by LLMs; it'll be too easy and convenient.
The shareholders yearn for the Torment Nexus.
And to think, I grew up thinking Greg Egan and Iain Banks were (mostly) trying to write hopeful stories. It was dystopian all along!
Oh well, time to kill all the weirdos.
They'll give up talking to us too, and just interface through our ears. The LLM earpiece will make some 2400 baud modem noises and we'll move around like marionettes.
Not quite a wordless scenario, but after seeing how many people today are already scrolling for dopamine, I'm still worried:
> I can remember putting on the headset for the first time and the computer talking to me and telling me what to do. It was creepy at first, but that feeling really only lasted a day or so. Then you were used to it, and the job really did get easier. Manna never pushed you around, never yelled at you. The girls liked it because Manna didn’t hit on them either. Manna simply asked you to do something, you did it, you said, “OK”, and Manna asked you to do the next step. Each step was easy. You could go through the whole day on autopilot, and Manna made sure that you were constantly doing something. At the end of the shift Manna always said the same thing. “You are done for today. Thank you for your help.” Then you took off your headset and put it back on the rack to recharge. The first few minutes off the headset were always disorienting — there had been this voice in your head telling you exactly what to do in minute detail for six or eight hours. You had to turn your brain back on to get out of the restaurant.
-- https://marshallbrain.com/manna1
It’s entirely reliant on symbols, i.e. it’s irrelevant in terms of brain-ecology processes.
I can see many people not learning how to write once speech-to-text gets good enough.
It can’t be LLMs; they’re incompatible with thought.
you didn't look at the paper? or are you taking umbrage at the "understanding" part?
It’s not understanding, it’s explanation. I read the paper; I posted it.
Start with what human explanations are:
https://www.alisongopnik.com/Papers_Alison/Explain%20final.p...
Now what are words in relation to that drive?
“We refute (based on empirical evidence) claims that humans use linguistic representations to think.” -- Ev Fedorenko, Language Lab, MIT, 2024
What are LLMs?
If language or tokens cannot accurately represent anything of merit in brains, then the interpretation of what is semantic vs. what is a task-variable action potential is subject to the Gopnik problem.
It’s an enforced circularity that never allows the brain/ecology to speak for itself in its native process.
thoughts would be more analogous to the weights in the LLM, rather than the language or tokens you mention