This is magical thinking.
LLMs are physically incapable of generating something “well thought out”, because they are physically incapable of thinking.
Tell Donald Knuth that: https://www-cs-faculty.stanford.edu/~knuth/papers/claude-cyc...
It is magical thinking to claim that LLMs are definitely physically incapable of thinking. You don't know that. No one does, since such large neural networks are opaque black boxes that resist interpretation; we don't really know how they function internally.
You are just repeating that claim because you read it somewhere else before. Like a stochastic parrot. Quite ironic. ;)