Do you really just complete the next token when speaking? You don't plan ahead, you don't form concepts and then translate them into speech? You don't make models, work with them, carefully consider their interactions, and then work out ways to communicate them?
Because, to anyone with a reasonable understanding of how LLMs work, the way they produce text is nothing like the way my mind works.
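For anyone unfamiliar with the mechanics being debated here: at inference time an LLM does literally emit one token at a time, each sampled from a distribution conditioned on the entire prefix so far. A minimal toy sketch of that loop in Python (the `next_token_logits` stand-in is hypothetical; a real model computes these scores with learned weights, which is where any "planning" would have to live):

    import math, random

    def softmax(logits):
        m = max(logits)
        exps = [math.exp(x - m) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]

    def next_token_logits(tokens, vocab_size):
        # Hypothetical stand-in for a real model: random scores seeded
        # by the prefix. A real LLM conditions these scores on the whole
        # prefix via learned weights.
        rng = random.Random(hash(tuple(tokens)) % (2**32))
        return [rng.gauss(0, 1) for _ in range(vocab_size)]

    def generate(prompt_tokens, n_new, vocab_size=50):
        tokens = list(prompt_tokens)
        for _ in range(n_new):
            probs = softmax(next_token_logits(tokens, vocab_size))
            # One token per step; any lookahead is implicit in the
            # scoring function, not in this outer loop.
            tokens.append(random.choices(range(vocab_size), weights=probs)[0])
        return tokens

    print(generate([1, 2, 3], 5))

The outer loop really is "just complete the next token"; the open question the thread is arguing about is what the scoring function does internally.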
LLMs very probably don't work the same way humans do, but they do appear to both plan ahead and translate internal concepts into speech: https://www.anthropic.com/news/tracing-thoughts-language-mod...
>Do you really just complete the next token when speaking? You don't plan ahead, you don't form concepts and then translate them into speech? You don't make models, work with them, carefully consider their interactions, and then work out ways to communicate them?
A good number of people seem to spend most of their lives operating that way.