> Pattern matching has a definition in this field; it means specific things.
Such as?
> They’re fundamentally wholly understandable systems that work on a consistent level in terms of how they do what they do (that is separate from the actual produced output)
Multi-billion-parameter models are definitely not wholly understandable, and I don't think any AI researcher would claim otherwise. We can train them, but we don't know how they work internally any more than we understand how the training data was made.
> I think reasoning, as any layman would use the term, is not accurate to what these systems do.
Based on what?
You’re welcome to provide counters. I think these are all sufficiently common claims that they stand on their own as support for what I posit.
Look, you're the one making a claim, so it's up to you to back it up. Hand-waving about what any of these things mean isn't an argument.