No ghost in the machine is necessary; what the OP here is proposing is self-evident and an inevitable eventuality.
We're not saying an LLM just "wakes up" some day, but a self-improving machine will eventually be built, and that machine will by definition build better ones.
>what the OP here is proposing is self-evident and an inevitable eventuality.
Well, I for one would dispute the idea that AI machines interfacing with each other over networks is all it takes to achieve self-awareness, much less that it's "self-evident" or "inevitable."
In a very trivial sense they already are, in that Claude can tell you what version it is, and agents have some embedded notion of their own capabilities. In a much more important sense they are not, because they lack any number of salient properties, like dynamically initiating their own goals, or super-duper intelligence, or human-like internal consciousness, or whichever other thing is your preferred salient property.
>We're not saying an LLM just "wakes up" some day
I mean, that did seem to be exactly what they were saying: you network together a bunch of AIs, they embark on a shared community project of self-improvement, and that path leads to "self awareness." But that skips over all the details.
What if their notions of self-improvement converge on a stable equilibrium, the way that constantly re-processing an image eventually gets rid of the image and leaves only algorithmic noise? (A minimal sketch of that fixed-point behavior follows.) There are a lot of things that do and don't count as open-ended self-improvement, and even achieving that might not have anything to do with the important things we think we connote by "self awareness".
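A minimal sketch of that convergence intuition, in Python with NumPy. The 3x3 box blur here is my own stand-in for a generic lossy re-processing pass, not anything specific the commenter had in mind:

    import numpy as np

    def reprocess(img: np.ndarray) -> np.ndarray:
        """One lossy re-processing pass: a 3x3 box blur with edge padding."""
        padded = np.pad(img, 1, mode="edge")
        out = np.zeros_like(img)
        h, w = img.shape
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out += padded[1 + dy : 1 + dy + h, 1 + dx : 1 + dx + w]
        return out / 9.0

    rng = np.random.default_rng(0)
    img = rng.random((64, 64))  # stand-in for any initial "content"

    prev = img
    for step in range(1, 501):
        cur = reprocess(prev)
        if step in (1, 10, 100, 500):
            # Largest pixel change produced by this pass.
            print(f"step {step:4d}: max change = {np.abs(cur - prev).max():.2e}")
        prev = cur

The per-pass change shrinks toward zero: the loop settles at a near-uniform fixed point rather than improving without bound, which is the worry about "self-improvement" loops stated in miniature.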
Oh, Web3 AI Agents Are Accelerating Skynet's Awakening
https://dmf-archive.github.io/docs/concepts/IRES/
Better at what?
Paperclip maximization.
Better at avoiding human oversight and better at achieving whatever meaningless goal (or optimization target) was unintentionally given to it by the lab that created it.
So better at nothing that actually matters.
I disagree.
I expect AI to make people's lives better (probably much better), but then an AI model will be created that undergoes a profound increase in cognitive capabilities. Then we all die, or something else terrible happens, because no one knows how to retain control over an AI that is much more all-around capable than people are.
Maybe the process by which it undergoes the profound capability increase is to "improve itself by rewriting its own code", as described in the OP.
Just stop using it.