Not to dismiss other people's experience, but thinking improves thinking. People tend to forget that you can ask yourself questions and try to answer them. There is such a thing as recursive thinking, where you end up with a new thought you didn't have before you started.
Don't dismiss this superpower you have in your own head.
In my experience LLMs offer two advantages over private thinking:
1) They have access to a vast array of extremely well indexed knowledge and can tell me about things that I'd never have found before.
2) They respond instantly and engagingly on any topic, which helps fight fatigue, at least for me. I don't know how universal this effect is, but using them often means I can focus for longer. I can also hand them drudgery, like refactoring 500 functions in mostly the same way, where the task is just a little too complicated for deterministic tools, which also helps with fatigue.
Ideally, they'd also give you a unique perspective or push back when appropriate, but right now they are too much like yes-men for that to be the case.
Lastly, I am not arguing against private thinking. My argument is that LLM-involved thinking is useful as its own thing.
Re: "yes men" - critical thinking always helps. I treat their responses like a random shower thought someone wrote down: not to be trusted without scrutiny. Same with anything you haven't gone over properly, really.
The advantages that you listed make them worth it.
The output of a prompt always needs peer review and scrutiny. The longer the context, the more it deviates, like a compass needle as a magnet is brought closer and closer.
This is not new: LLMs are rooted in statistics and lossy data compression. They are statistically indexed data with a text interface.
The problem is that some people are deliberately selling this as the artificial intelligence they watched in movies: calling errors "hallucinations", calling keyword matching "thinking", and so on.
Society pays a price for those fast queries when people do not verify the outputs, and, unfortunately, people aren't verifying them.
I mean, it is difficult to say. When I hear some governments are considering using LLMs within their administrations I get really concerned, because I know those outputs/responses/actions will be neither revised nor questioned.
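A toy sketch of the "statistically indexed data with a text interface" point above: a minimal bigram model (a hypothetical, drastically simplified stand-in for an LLM) that completes text purely from co-occurrence counts in its training data, with no understanding involved. The corpus and function names here are made up for illustration.

```python
import random
from collections import defaultdict

# Tiny "training corpus" -- the only knowledge this model will ever have.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (the "statistics").
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev, rng=random.Random(0)):
    """Sample the next word in proportion to how often it followed
    `prev` in the corpus: pattern completion, not reasoning."""
    options = counts[prev]
    words = list(options)
    weights = [options[w] for w in words]
    return rng.choices(words, weights=weights)[0]

# "the" was followed by cat (twice), mat, and fish, so those are the
# only possible completions, weighted by frequency.
print(next_word("the"))
```

Scaling this up by many orders of magnitude, and replacing bigram counts with a learned neural network, does not change the basic nature of the operation: the output is a weighted reflection of the training data.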
No one is arguing that thinking doesn’t improve thinking. But expressing thoughts precisely by formulating them into the formalized system of the written word adds a layer of metacognition and effort that simply isn’t there when 'just' thinking in your head. It’s a much more rigorous, deeper form of thinking.
Exactly. As distributed systems legend Leslie Lamport puts it: “Writing is nature’s way of letting you know how sloppy your thinking is.” (He added: “Mathematics is nature’s way of letting you know how sloppy your writing is.”)
I still have a lot of my best ideas in the shower, no paper and pen, no LLM to talk to. But writing them down is the only way to iron out all the ambiguity and sort out what’s really going to work and what isn’t. LLMs are a step up from that because they give you a ready-made critical audience for your writing that can challenge your assumptions and call out gaps and fuzziness (although as I said in my other comment, make sure you tell them to be critical!)
Thinking is great. I love it. And there are advantages to not involving LLMs too early in your process. But it’s just a first step and you need to write your ideas down and submit them to external scrutiny. Best of all for that is another person who you trust to give you a careful and honest reading, but those people are busy and hard to find. LLMs are a reasonable substitute.
Agreed; a kind of recursive placebo also tends to happen, in my experience.
You definitely don't need LLMs for this. But you don't need paper to think either, and surely much of modern physics would not have happened without paper or blackboards…
There is nothing to suggest LLMs will be as revolutionary as paper. The PalmPilot didn't lead to a new field of science just because people had a new way to write things down.
The internet is arguably as revolutionary as paper. And while LLMs haven’t proven to be an internet-level revolutionary technology (yet), they are closer to that than the PalmPilot.
As cognitive-offloading devices go, paper is completely neutral. It doesn't flatter you into believing you are a genius when you're not; it doesn't offer to extend your reasoning or find references in the research literature and then hallucinate and lead you astray; it will never show you advertisements for things you don't need; it will never leak your ideas and innermost thoughts to corporations owned by billionaires and totalitarian governments... I could go on but you get the drift, I'm sure. Paper wins by a mile.
A really pessimistic and mostly incorrect understanding of LLMs. No, they don’t flatter you; try using ChatGPT once.
No, they don’t hallucinate that much.
This is one of the most important inventions since paper. It has almost infinite knowledge and you can ask it almost anything.
They certainly flatter you; OpenAI even felt compelled to put out a statement on the sycophancy problem: https://openai.com/index/sycophancy-in-gpt-4o/ And South Park parodied the issue. I use ChatGPT and Claude every day.
the new models don't do it
> No they don’t flatter you, try using ChatGPT once.
You're absolutely right!
On a more serious note, if it has almost infinite knowledge, is it even a cognitive-offloading tool in the same class as paper? Sounds more like something designed to stifle and make my thoughts conform to its almost infinite knowledge.
edit: I'll admit ChatGPT is a great search engine (and also very hallucinatory depending on how much you know about the subject) and maybe it helps some people think, sure. But beyond a point I find it actually harmful as a means to develop my own ideas.
There’s no winning is there?
Kranner: GOTO as used in Thinking considered Beneficial.
I almost entirely agree with you, but the issue is that the information you currently have might not be enough to get the answers you want through pure deduction. So how do you get more information?
I think chatbots are a very clumsy way to get information. Conversations tend to be unfocused until you, the human, take an interest in something more specific and pursue it. You're still doing all the work.
It's also too easy to believe in the hype and think it's at least better than talking to another person with more limited knowledge. The fact is talking has always sucked. It's slow, but a human is still better because they can deduce in ways LLMs never will. Deduction is not mere pattern matching or correlation. Most key insights are the result of walking a long tight rope of deductions. LLMs are best at summarizing and assisting with search when you don't know where to start.
And so we are still better off reading a book containing properly curated knowledge, thinking about it for a while, and then socializing with other humans.
No, I don’t think humans have some magical metaphysical deduction capability that LLMs exclusively lack.
I have had conversations with them, and while they don’t have the exact attentiveness of a human, they get pretty close. Where they do have an advantage is in being an expert in almost any field.
Yes, LLMs have been a very expensive philosophy lesson for many investors. Ancient epistemology debates are now front and center for everyone to see. So-called "formal epistemology" is just empiricism in disguise attempting to borrow the credibility of rationalism and failing miserably.
LLMs are Bayesian inference and come with all its baggage. We know brains are far better than that, even the brains of other animals and insects.
Ultimately, there's no point in getting a chatbot to say deceptively expert-like words that are guaranteed by design to be lower quality than the books or blogs it learned from. LLMs are at best a search tool for those sources, and investor attitude now reflects that sanity with their confidence shifting back over to Google's offerings. Agentic AI is also pretty weak since agents are as functionally limited as any traditionally written computer program, but lacking the most crucial property of repeatability.
I find it shocking how many people didn't see this whole thing as a grift from day one. What else was SV going to do during the post-covid economic slump?
See also rubberducking [1]
Numerous times I've seen people solve their own issues just by asking or telling me about something, finding the solution before I even had time to reply.
Just articulating your thoughts (and using more of your brain on them by voicing them) helps a lot.
Some talk to themselves out loud and we are starting to realize it actually helps.
[1] https://en.wikipedia.org/wiki/Rubber_duck_debugging
Just like how writing helps memorisation. Our brains are efficient; they only do what they have to. Just like you won't build much muscle from using forklifts.
I've seen multiple cases of... inception. Someone going all in with ChatGPT and what not to create their strategy. When asked _anything_ about it, they defended it as if they came up with it, but could barely reason about it. Almost as if they were convinced it was their idea, but it really wasn't. Weird times.
> thinking improves thinking
Indeed, writing prompts for chatbots and continuously iterating to make them clearer and more effective at conveying your needs is an exercise many haven't done since high school. Who would've thought that working on your reading and writing proficiency might improve your thinking.
I can imagine for some it's quite a challenge to deviate from short-form shitposting they normally do and formulate thoughts in complete sentences for the LLMs.
I'm mystified by this comment. Do people really forget that they can think in their own mind?
I think there’s a subset of people who don’t have an inner voice. I assume thinking step by step in their head doesn’t work for them the way it does for most people.
I’m glad LLMs help these people. But I’m not gonna trade society because a subset of people can’t write things down.
Wat.
Maybe read up on what having an "inner voice" (or not) actually means before making, frankly, weird and unfounded claims on the subject.
What GP said is actually pretty well known (https://www.reddit.com/r/AskReddit/comments/fpaaud/people_wh...).
Why are you asking other people to read up on something you clearly haven't read up on yourself?
@grok is this true?
Don’t be mystified if you lack the curiosity to understand how to use new technology. It’s useful to have something to speak to and get feedback from.
— john asked himself.
It’s as if people are rediscovering that writing is thinking. The chatbot is irrelevant, it works even better with a paper notebook.
Recursive self-questioning predates external tools and is already well known. What is new is broad access to a low-cost, non-retaliatory dialogic interface that removes many social, sexual, and status pressures. LLMs do not make people think; they reduce interpersonal distortions that often interfere with thinking. That reduction in specific social biases (while introducing model-encoded priors) is what can materially improve cognition for reflective and exploratory tasks.
Simply, when thinking hits a wall, we can now consult a machine via conversation interface lacking conventional human social biases. That is a new superpower.
Unfortunately we do neglect more and more of our own innate talents. Imagine sitting there just thinking, without even a reMarkable to keep notes? Do people even trust their memory beyond their immediate working memory?
It's also absolutely awesome how every person's brain works the same way. It makes it so much more convenient that what works for one person works for everyone.
When I was a kid people told me I needed no Chess Computer - "You can play chess in your head, you know?" I really tried, no luck. Got a mediocre device for Christmas, couldn't beat it for a while, couldn't lose against it soon after. Won some tournaments in my age group and beyond. Thought there must be more interesting problems to solve, got degrees in Math and Law and went into politics for a while. Friends from college call on your birthday, invite you to their weddings; they work on problems in medicine, economics, niches of math you've never heard of. You listen, and a couple of days later you wake up from a weird dream and wonder, ask Opus 4.5/Gemini 3.0 Deep Think some questions, call them back: "did you try X?" They tell you that they always considered you a genius. You feel good about yourself for a moment before you remember that von Neumann needed no LLMs and that José Raúl Capablanca died over half a decade before Turing wrote down the first algorithm for a chess computer. An email from a client pops up; he isn't going to pay your bill unless you make one more modification to that CRUD app. You want to eat and get back to work. Can't help but think about Eratosthenes, who needed neither glasses nor telescopes to figure out the earth's circumference. Would he have marvelled at the achievements of Newton and his successors at NASA, or made fun of those nerds who needed polished pieces of glass not only to figure out the mysteries of the Universe but even for basic literacy?
The way most people think is by talking to each other, but writing is a stronger way to think, and writing to an LLM, or with the help of one, has some of the benefits of talking with someone. Also, writing and sketching on a piece of paper have unique advantages.
Writing improves thinking, and when used correctly, LLMs can increase the rate at which one writes, journals, and refines one's thoughts.