I think it would turn off, no shocker there. I'm not sure what you mean, can you elaborate?
When I say autonomous I don't mean some highfalutin philosophical concept, I just mean it does stuff on its own.
Right, but it doesn't. It stops once you stop forcing it to do stuff.
I still don't understand your point, sorry. If it's a semantic nitpick about the meaning of "autonomous", I'm not interested - I've made my definition quite clear, and it has nothing to do with when agents stop doing things or what happens when they get turned off.
I think you should start caring about the meaning of words.
I do, when I think it's relevant. Words don't have an absolute meaning - I've presented mine.
Because that's what they're created to do. You can make a system that runs continuously. It's not a tech limitation, just how we've preferred things to work so far.
Maybe, but that's not the case here so it is lost on me why you bring it up.
You're making claims about those systems not being autonomous. When we want to, we create them to be autonomous. It's got nothing to do with agency or survival instincts. Experiments like that have been done for years now - for example https://techcrunch.com/2023/04/10/researchers-populated-a-ti...
Yes, because they aren't. Against your fantasy that some might be brought into existence sometime in the future, I present my own fantasy that they won't be.
I linked you an experiment with multiple autonomous agents operating continuously. It's already happened. It's really not clear what you're disagreeing with here.
No, that was a simulation, akin to Conway's cellular automata. You seem to consider being fully under someone else's control to qualify as autonomy, at least in certain cases, which to me comes across as very bizarre.