> picking up self-awareness on the way

To me, the first problem is that "self-awareness" isn't well-defined - or, conversely, it's too well-defined, because every philosopher of mind has a different definition. It's the same problem with all of these claims ("intelligent", "conscious"): assessing whether a system is self-aware leads down a rabbit hole toward P-Zombies and Chinese Rooms.

I believe we can mostly elide that here. For any "it", if we have it, machines can have it too. For any useful "it", if a system is trying to become more useful, it will likely acquire it. So the only questions are "do we have it?" and "is it useful?" I'm sure some philosophers define self-awareness in a way that excludes humans, and we'll have to set those definitions aside. And definitions will vary in usefulness, but I think it's safe to broadly (certainly not exhaustively!) assume that if evolution put work into giving us something, it's useful.