We don't know the path by which a given input produces a given output, but that doesn't mean we don't know how LLMs work.
We don't and can't know with certainty which specific atoms will fission in a nuclear reactor either. But we know how nuclear fission works.
We have the Navier–Stokes equations, which fit on a matchbox, yet for the last 25 years there's been a US$1,000,000 Clay Millennium Prize on offer to the first person who resolves a specific statement of the problem (existence and smoothness of solutions in three dimensions).
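For context, here is the "fits on a matchbox" part, a sketch of the incompressible Navier–Stokes equations in standard notation (u is velocity, p pressure, ρ density, ν kinematic viscosity, f a body force):

```latex
% Incompressible Navier–Stokes: momentum balance and incompressibility constraint
\frac{\partial \mathbf{u}}{\partial t}
  + (\mathbf{u} \cdot \nabla)\,\mathbf{u}
  = -\frac{1}{\rho}\nabla p + \nu \nabla^{2}\mathbf{u} + \mathbf{f},
\qquad
\nabla \cdot \mathbf{u} = 0
```

Two short lines, yet whether smooth solutions always exist in 3D is exactly the open question the prize is attached to.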
And when that prize is claimed, we'll ring the bell on AGI being found. Gentleman's agreement.
I don't see how it will convince anyone: people said as much before chess, then again about Go, and they're still disagreeing over whether LLMs pass the Turing test.
Regardless, this was to demonstrate by analogy that things that seem simple can actually be really hard to fully understand.