When people do math or rigorous deductive reasoning, are we sure they aren't just pattern matching, using a set of carefully chosen, interacting patterns that ancient philosophers refined precisely because they produce consistent results when applied in correctly patterned ways?

I've often wondered this. I suspect not, though I can't be sure. You're right that the answer matters for understanding LLM limitations relative to humans, though.