No, it's called functionalism. To me, it's actually the opposite: assuming there is a fundamental difference between simulated neurons and real ones seems almost religious.

While it's true that we aren't there yet, and simulated neurons are currently quite different from real ones (so I agree there is a big difference at the moment), it's unclear why you seem to think it will always stay that way.

If you actually have a way to fully simulate matter, without reductions, there's probably a Nobel prize coming your way.

The common scientific understanding is that this is not possible, at least not without extreme amounts of energy and time.

The dimensionality, or complexity if you prefer, of your logic gates is quite different from that of the cosmos. You might not agree, but in my parlance a linear curve and a fractal are fundamentally different: you can use linear segments to approximate the latter at some level of resolution if you want, but I don't think you'll find a large audience for the claim that there is no difference.
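
To make the analogy concrete, here's a toy sketch of my own (plain Python, nothing from the linked material): build the Koch curve out of straight segments and measure it. Every piecewise-linear approximation reports a finite length, but the reported length grows as (4/3)^n without bound, so no linear view ever pins the fractal down:

```python
import math

def koch_points(depth):
    """Vertices of the Koch curve built from the unit segment."""
    pts = [(0.0, 0.0), (1.0, 0.0)]
    for _ in range(depth):
        new = []
        for (ax, ay), (bx, by) in zip(pts, pts[1:]):
            dx, dy = (bx - ax) / 3.0, (by - ay) / 3.0
            p1 = (ax + dx, ay + dy)          # 1/3 along the segment
            p3 = (ax + 2 * dx, ay + 2 * dy)  # 2/3 along the segment
            c, s = math.cos(math.pi / 3), math.sin(math.pi / 3)
            p2 = (p1[0] + dx * c - dy * s,   # apex: middle third
                  p1[1] + dx * s + dy * c)   # rotated by 60 degrees
            new += [(ax, ay), p1, p2, p3]
        new.append(pts[-1])
        pts = new
    return pts

for depth in range(7):
    pts = koch_points(depth)
    total = sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))
    print(f"depth {depth}: {len(pts) - 1} segments, length {total:.3f}")
```

Each refinement multiplies the measured length by 4/3, so the "approximation" never settles on an answer. That's the sense in which I mean the two kinds of curve differ in kind, not just degree.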

As far as I know we've also kind of given up on simulating neurons and settled for growing and poking real ones instead, but you might have some recent examples to the contrary?

We may not need to go down to that level.

For the qualities we care about, it may turn out that we don't need to simulate matter perfectly. We may not need to concern ourselves with the fractal complexity of reality if we identify the right higher-level abstractions to operate on. This phenomenon is known as causal emergence.

> That is, a macroscale description of a system (a map) can be more informative than a fully detailed microscale description of the system (the territory). This has been called “causal emergence.”

https://www.mdpi.com/1099-4300/19/5/188

From an HN discussion a while ago:

https://www.quantamagazine.org/the-new-math-of-how-large-sca...

> A highly compressed description of the system then emerges at the macro level that captures those dynamics of the micro level that matter to the macroscale behavior — filtered, as it were, through the nested web of intermediate ε-machines. In that case, the behavior of the macro level can be predicted as fully as possible using only macroscale information — there is no need to refer to finer-scale information. It is, in other words, fully emergent. The key characteristic of this emergence, the researchers say, is this hierarchical structure of “strongly lumpable causal states.”
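
To give a concrete toy of what "strongly lumpable" means here (my own sketch in Python/NumPy; the matrix is made up for illustration, not taken from the paper): take a micro-level Markov chain whose states can be grouped so that transitions between the groups are themselves Markovian. Once the lumped matrix exists, macro behavior is predictable from macro information alone:

```python
import numpy as np

# Micro-level transition matrix over 4 micro states {0,1,2,3},
# constructed so the partition A={0,1}, B={2,3} is lumpable.
P = np.array([
    [0.1, 0.2, 0.3, 0.4],   # from 0: P(into A)=0.3, P(into B)=0.7
    [0.2, 0.1, 0.5, 0.2],   # from 1: P(into A)=0.3, P(into B)=0.7
    [0.4, 0.2, 0.2, 0.2],   # from 2: P(into A)=0.6, P(into B)=0.4
    [0.5, 0.1, 0.1, 0.3],   # from 3: P(into A)=0.6, P(into B)=0.4
])
partition = [[0, 1], [2, 3]]  # macro states A and B

def lump(P, partition):
    """Check strong lumpability and return the macro transition matrix.

    A partition is strongly lumpable iff every micro state within a
    block pushes the same total probability into each other block.
    """
    k = len(partition)
    Q = np.zeros((k, k))
    for i, Ai in enumerate(partition):
        for j, Aj in enumerate(partition):
            sums = [P[s, Aj].sum() for s in Ai]
            assert np.allclose(sums, sums[0]), "partition is not lumpable"
            Q[i, j] = sums[0]
    return Q

Q = lump(P, partition)
print(Q)  # [[0.3 0.7]
          #  [0.6 0.4]]

# Macro predictions now need no micro detail: Q alone propagates
# the A/B distribution forward, e.g. over ten steps:
print(np.array([1.0, 0.0]) @ np.linalg.matrix_power(Q, 10))
```

The assert is the lumpability test, and it's exactly why no finer-scale information is needed afterwards: from the macro level's point of view, the micro states inside a block are interchangeable.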

Who are "we", and why would I care about them here?

There are situations where approximations are good enough for simulations, sure, but that's not the subject here.

I reject the idea that chatbots have feelings or intellect because they output text similar to what a human might write in some hypothetical situation or other. To the extent that they can have those properties, it is to the same extent that Clark Kent can, if one were to accept such conflationary and confused discourse.