Is it just the wrong choice of word? There's nothing supernatural about a system moving towards increased capabilities and picking up self-awareness on the way; that happened in the natural world. Nothing supernatural about technology improving faster than evolution either. If they meant "ill-defined" or similar, sure.

> picking up self-awareness on the way

To me, the first problem is that "self-awareness" isn't well-defined - or, conversely, it's too well defined, because every philosopher of mind has a different definition. It's the same problem with all these claims ("intelligent", "conscious"): assessing whether a system is self-aware leads down a rabbit hole toward P-Zombies and Chinese Rooms.

I believe we can mostly elide that here. For any "it", if we have it, machines can have it too. For any useful "it", if a system is trying to become more useful, it's likely to get it. So the only questions are "do we have it?" and "is it useful?". I'm sure there are philosophers defining self-awareness in a way that excludes humans, and we'll have to set those aside. And definitions will have varying usefulness, but I think it's safe to broadly (certainly not exhaustively!) assume that if evolution put work into giving us something, it's useful.

>There's nothing supernatural about a system moving towards increased capabilities and picking up self-awareness on the way

There absolutely is if you handwave away all the specificity. The natural world runs on the specificity of physical mechanisms. With brains, in a broad-brush way you can say self-awareness was "picked up along the way", but that's because we've done an incredible amount of work reconstructing the evolutionary history and building out our understanding of specific physical mechanisms. It is that work that verifies the story. It's also something we know is already here and can look back at retrospectively, so we know it got here somehow.

But projecting forward into a future that hasn't happened, while skipping over all the details, doesn't buy you sentience, self-awareness, or whatever your preferred salient property is. I understand "supernatural" as a label for a thing simply happening without accountability to naturalistic explanation, which is a fitting term for this form of explanation that doesn't do any explaining.

If that's the usage of "supernatural", then I reject it as a dismissal of the point. Plenty of things can be predicted without being explained. I'm more than 90% confident the S&P 500 will be up at least 70% in the next 10 years because it reliably behaves that way; if I could tell you which companies would drive the increase and when, I'd be a billionaire. I'm more than 99% confident the universe will increase in entropy until heat death, but the timeline for that just got revised down 1000 orders of magnitude. I don't like using a word that implies impossible physics to describe a prediction that an unpredictable chaotic system will land on an attractor state, but that's semantics.

I think you're kind of losing track of what this thread was originally about. It was about the specific idea that hooking up a bunch of AIs to interface with each other and engage in a kind of group collaboration gets you "self-awareness". You now seem to be trying to model this on analogies like the stock market or heat death of the universe, where we can trust an overriding principle even if we don't have specifics.

I don't believe those forms of analogy work here, because this isn't about progress of AI writ large but about a narrower thing, namely the idea that the secret sauce to self-awareness is AIs interfacing with each other and collaboratively self-improving. That either will or won't be true due to specifics about the nature of self-improvement and whether there's any relation between that and salient properties we think are important for "self-awareness". Getting from A to B on that involves knowledge we don't have yet, and is not at all like a long-term application of already settled principles of thermodynamics.

So it's not like the heat death of the universe, because we don't at all know that this kind of training and interaction is attached to a bigger process that categorically and inexorably bends toward self-awareness. Some theories of self-improvement likely are going to work and some aren't; some trajectories are achievable and some are not, for reasons specific to those respective theories. It may be that they work spectacularly for learning, but that all the learning in the world has nothing to do with "self-awareness". That is to say, the devil is in the details, those details are being skipped, and that abandonment of naturalistic explanation merits analogy to the supernatural in its lack of accountability to good explanation. If "supernatural" is the wrong term for rejecting, as a matter of principle, the need for rational explanation, then perhaps "anti-intellectualism" is the better term.

If instead we were talking about something really broad, like all of the collective efforts of humanity to improve AI, conceived of as broadly as possible over some time span, that would be a different conversation than just saying let's plug AIs into each other (???) and they'll get self-aware.

>I think you're kind of losing track of what this thread was originally about.

Maybe I am! Somebody posed a theory about how self-improvement will work and concluded that it would lead to self-awareness. Somebody else replied that they were on board until the self-awareness part because they considered it supernatural. I said I don't think self-awareness is supernatural, and you clarified that it might be the undefined process of becoming self-aware that is being called supernatural. And then I objected that undefined processes leading to predictable outcomes is commonplace, so that usage of supernatural doesn't stand up as an argument.

Now you're saying it is the rest of the original, the hive-mindy bits, that are at issue. I agree with that entirely, and I wouldn't bet on that method of self-improvement at 10% odds. My impression was that all of that was conceded right out of the gate. Have I lost the plot somewhere?

But how does self-awareness evolve in biological systems, and what would be the steps for this to happen with AI models? Just making claims about what will happen without explaining the details is magical reasoning. There's a lot of that going on in AGI/ASI predictions.

We may never know the truth of qualia, but there are already potential pathways to achieve mind uploading -- https://dmf-archive.github.io

Given that we have no freaking clue where self-awareness comes from even in humans, expecting a machine to evolve the same capability by itself is pure fantasy.