A machine that could, unattended, move the goalposts to make itself look special by inventing new forms of expression would then beat "those folks" for good.

Not the best human behavior, but certainly human behavior.

LLMs are built with the intention of mimicry. It's no surprise they look like mimicry. If we show a new trick they can't do, and their makers train them to mimic it, can we be blamed for calling that mimicry... mimicry?

The promise of LLMs isn’t that they appear to be intelligent through mimicry, but that they do understand.

How do LLMs promise to "understand"? Broadly speaking, AI/ML can be divided into two groups: mimicry, where the system is given a corpus of assumed-good data and attempts to generalize from it, and reinforcement learning, where the AI is given a high-level fitness function and set loose to optimize it.

The current generation of LLMs falls pretty heavily into the mimicry family of AI.
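To make the contrast concrete, here's a toy, runnable sketch of the two training signals. Both "models" are deliberately trivial (a bigram counter and a one-parameter hill-climber) and nothing here is a real LLM API; the point is only the shape of the feedback each family learns from.

```python
import random

# Mimicry: the "model" is just generalized corpus statistics (a bigram table).
def train_mimic(corpus):
    counts = {}
    for text in corpus:
        for a, b in zip(text, text[1:]):
            counts.setdefault(a, {}).setdefault(b, 0)
            counts[a][b] += 1
    return counts

def sample_mimic(model, start, n=10):
    out = start
    for _ in range(n):
        nxt = model.get(out[-1])
        if not nxt:
            break
        chars, weights = zip(*nxt.items())
        out += random.choices(chars, weights)[0]
    return out

# Reinforcement learning: no "correct answers", only a fitness score to climb.
def train_rl(fitness, steps=1000):
    theta = 0.0
    for _ in range(steps):
        candidate = theta + random.gauss(0, 0.1)
        if fitness(candidate) > fitness(theta):  # keep whatever scores higher
            theta = candidate
    return theta

model = train_mimic(["hello world", "hello there"])
print(sample_mimic(model, "h"))                        # echoes corpus statistics
print(round(train_rl(lambda t: -(t - 3.0) ** 2), 2))   # converges near 3.0
```

The mimic can only reproduce patterns from its assumed-good data; the RL learner never sees an example at all, just a score.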

This is a false dichotomy.

You introduce two categories, state that LLMs are not part of one, and then conclude that they must be in the other. In reality, the distinction between the two classes is not so clear: modern LLMs are pretrained by mimicry (next-token prediction on a corpus) and then fine-tuned with reinforcement learning from human feedback, so they straddle both camps.

The transformer architecture is quite something, and the number of layers and nodes involved in a typical LLM is staggering. This goes way beyond linear regression.
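A toy single-block transformer in NumPy shows why. Dimensions and weights below are arbitrary illustrative choices, and layer norm and per-layer weights are omitted for brevity; the point is that softmax attention composed with a ReLU MLP is nothing like one linear map.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 8, 5  # toy embedding width and sequence length; real LLMs use thousands

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    weights = softmax(Q @ K.T / np.sqrt(d))  # each token weighs every other token
    return weights @ V

def transformer_block(X, Wq, Wk, Wv, W1, W2):
    X = X + attention(X, Wq, Wk, Wv)         # residual connection
    X = X + np.maximum(0.0, X @ W1) @ W2     # two-layer MLP with ReLU
    return X

params = [rng.normal(size=(d, d)) * 0.1 for _ in range(5)]
X = rng.normal(size=(T, d))
for _ in range(4):                           # GPT-class models stack dozens of these
    X = transformer_block(X, *params)
print(X.shape)  # (5, 8): same shape out, but a deeply nonlinear function of the input
```

Stack dozens of these blocks at widths in the thousands and you get a function class far removed from fitting a line through data points.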