We understand and build the trellis that LLMs "grow" on. What we don't have is good insight into how a fully grown LLM turns any specific input into any specific output. We can trace the activations through the network, but they look like a senseless, noisy mess.

"Cat" lights up a certain set of neurons, but then "cat" looks completely different. That is what we don't really understand.

(This is an illustrative example made for easy understanding, not something I specifically went and compared)
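As a rough sketch of what "comparing the neurons" can look like in practice (my own illustration in the same spirit, assuming the Hugging Face transformers package and the public gpt2 checkpoint, not anything measured for the claim above), you can pull the hidden activations for "Cat" and "cat" out of a small model and see how similar they are:

  # Minimal sketch (illustrative only, not a measured result): compare the
  # final-layer activations a small model produces for "Cat" vs. "cat".
  # Assumes the Hugging Face `transformers` package and the public `gpt2` weights.
  import torch
  from transformers import GPT2Model, GPT2Tokenizer

  tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
  model = GPT2Model.from_pretrained("gpt2")
  model.eval()

  def last_token_activations(text: str) -> torch.Tensor:
      """Final-layer hidden state of the last token of `text`."""
      inputs = tokenizer(text, return_tensors="pt")
      with torch.no_grad():
          outputs = model(**inputs)
      return outputs.last_hidden_state[0, -1]  # shape: (hidden_size,)

  a = last_token_activations("Cat")
  b = last_token_activations("cat")

  # Cosine similarity of the two activation vectors; nothing in the training
  # recipe guarantees this is close to 1.0, even though a human reads both
  # strings as "the same word".
  sim = torch.nn.functional.cosine_similarity(a, b, dim=0)
  print(f"cosine similarity, 'Cat' vs 'cat': {sim.item():.3f}")

We specify every line of the trellis (architecture, training loop), but nothing in that specification tells us what those two activation vectors will look like or why.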

We don't know the path by which a given input produces a given output, but that doesn't mean we don't know how LLMs work.

We don't and can't know with certainty which specific atoms will fission in a nuclear reactor either. But we know how nuclear fission works.

We have the Navier–Stokes equations, which fit on a matchbox, yet for the last 25 years there has been a US$1,000,000 prize on offer for the first person to provide a solution to a specific statement of the problem:

  Prove or give a counter-example of the following statement:

  In three space dimensions and time, given an initial velocity field, there exists a vector velocity and a scalar pressure field, which are both smooth and globally defined, that solve the Navier–Stokes equations.
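For reference, the equations that "fit on a matchbox" are, in their incompressible form (u the velocity field, p the pressure, ρ the density, ν the kinematic viscosity, f external forces), just:

  \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u}
      = -\frac{1}{\rho}\nabla p + \nu\,\nabla^{2}\mathbf{u} + \mathbf{f},
  \qquad \nabla \cdot \mathbf{u} = 0

Two short lines, and the question of whether smooth global solutions always exist is still open.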

And when that prize is claimed, we'll ring the bell on AGI being found. Gentleman's agreement.

I don't see how it will convince anyone: people said as much before chess, then again about Go, and they're still arguing with each other about whether LLMs do or don't pass the Turing test.

Regardless, this was to demonstrate by analogy that things that seem simple can actually be really hard to fully understand.