But if AI gets to the point where it could be an input to its own system, and develops capacities analogous to a human's (long-term memory, decision trees updated by new experiences and knowledge, etc.), then does it matter in any meaningful way whether it is "the same" as a human brain or just an imitation of one? It feels like it only matters now because AIs imitate small parts of what human brains do and fall very short. If they could equal or exceed human minds, the question would be purely academic.

That's a lot of really big ifs that we are likely still a long way from answering.

From what I understand, there is no realistic expectation that LLM-based AI will ever reach this level of complexity.