The conclusion goes into that glassy-eyed realm of "what if we're no better than the algorithm?"
Problem is, we don't even know what makes us think. So you can jump to any conclusion, and nobody can really tell you you're wrong.
We do know how transformers and layers work. They're algorithms that crunch numbers. A great deal of numbers. And we know they use patterns learned from the training set to generate plausible outputs given some input. Yes, "stochastic parrot" is a reduction of all the technical sophistication in LLMs. But it's not entirely baseless. At the end of the day the model is copying what's in the training data. In a very clever way.
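To make that concrete, here's a toy sketch in plain Python. It is nothing like a real transformer (no attention, no embeddings, a made-up corpus and function names of my own choosing), but the loop is the same spirit: count patterns in the training text, then sample plausible continuations from those counts.

```python
import random
from collections import defaultdict, Counter

# Toy "stochastic parrot": a bigram model over a tiny made-up corpus.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count which word tends to follow which in the training data.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=8):
    word, out = start, [start]
    for _ in range(length):
        options = follows.get(word)
        if not options:
            break
        # Sample the next word in proportion to how often it appeared
        # after the current word in the training data.
        words, counts = zip(*options.items())
        word = random.choices(words, weights=counts, k=1)[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the rug"
```

Swap the lookup table for billions of learned weights and the counting for gradient descent and you get something far more capable, but still something that emits plausible continuations of its training data rather than something we have any reason to call a mind.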
However, resist the temptation to believe we understand human brains and human thought. And resist the temptation to anthropomorphize algorithms. It’s data and patterns.