No. I see AI people use this reasoning all the time and it's deeply misleading.

"You can't explain how humans do it, therefore you can't prove my statistical model doesn't do it" is basically just a god-of-the-gaps fallacy.

It abuses the fact that we don't understand how human cognition works, which makes it impossible to give a precise technical description of it. Of course you're going to win the argument if you insist the other party do something currently impossible before you'll accept their idea.

It's perfectly fine to use a heuristic for reasoning, as the other person did. LLMs don't reason by any reasonable heuristic.

>No. I see AI people use this reasoning all the time and it's deeply misleading. "You can't explain how humans do it, therefore you can't prove my statistical model doesn't do it" is basically just a god-of-the-gaps fallacy.

No, this is 'stop making claims you cannot actually support'.

>It abuses the fact that we don't understand how human cognition works, which makes it impossible to give a precise technical description of it.

Are you hearing yourself? If you don't understand how human cognition works, then any claims about what is and isn't cognition should be taken with less than a grain of salt. You're in no position to make such strong claims.

If you go ahead and make such claims anyway, you can hardly be surprised when people refuse to listen to you.

And by the way, we don't understand the internals of large neural networks much better than we understand human cognition.

>It's perfectly fine to use a heuristic for reasoning

You can use whatever heuristic you want, and I can rightly tell you it carries no more weight than fiction.