Into their short-term memory (the context). Some information is also stored in long-term memory (the user store).
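Roughly like this (a toy sketch; the class and field names here are mine, not any vendor's actual API):

    # Toy illustration of the two memory tiers; names are illustrative,
    # not a real vendor API.
    class Agent:
        def __init__(self):
            self.context = []      # short-term: what the model sees each turn
            self.user_store = {}   # long-term: facts persisted across sessions

        def chat(self, message):
            self.context.append(message)
            self.context = self.context[-20:]  # old turns fall out of the window

        def remember(self, key, value):
            self.user_store[key] = value       # promoted facts survive sessions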
I can see what you are getting at but consider:
I had an experience the other day where Claude Code wrote a script that shelled out to other LLM providers to obtain some information (unprompted by me). More often it requests information from me directly. My point is that the environment itself for these things is becoming at least as computationally complex, or irreducible (as the OP would say), as the model's algorithm, so there's no point trying to analyse these things in isolation.
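For the curious, the script looked roughly like this (a reconstruction from memory; the endpoint shape matches OpenAI's chat completions API, but the model name and env var are guesses on my part, not the literal output):

    # Rough reconstruction, not the actual script Claude Code generated.
    import json, os, subprocess

    def ask_other_provider(prompt):
        payload = json.dumps({
            "model": "gpt-4o",
            "messages": [{"role": "user", "content": prompt}],
        })
        # It literally shelled out to curl rather than using an SDK.
        result = subprocess.run(
            ["curl", "-s", "https://api.openai.com/v1/chat/completions",
             "-H", "Authorization: Bearer " + os.environ["OPENAI_API_KEY"],
             "-H", "Content-Type: application/json",
             "-d", payload],
            capture_output=True, text=True, check=True,
        )
        return json.loads(result.stdout)["choices"][0]["message"]["content"]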
Truthfully, few people know that right now!
They're feeding back what it's "learning" along the way - whether that's done in a smart fashion, we don't know yet.
I suspect there's a harsher argument to be made regarding "autonomous". Pull the power cord and see if it does what a mammal would do, or if it rather resembles a chaotic water wheel.
> Pull the power cord and see if it does what a mammal would do
Pulling the power cord on a mammal means shutting off its metabolism. That predictably kills us.
I think it would turn off, no shocker there. I'm not sure what you mean, can you elaborate?
When I say autonomous I don't mean some highfalutin philosophical concept, I just mean it does stuff on its own.
Right, but it doesn't. It stops once you stop forcing it to do stuff.
I still don't understand your point, sorry. If it's a semantic nitpick about the meaning of "autonomous", I'm not interested - I've made my definition quite clear, and it has nothing to do with when agents stop doing things or what happens when they get turned off.
I think you should start caring about the meaning of words.
I do, when I think it's relevant. Words don't have an absolute meaning - I've presented mine.
Because that's what they're created to do. You can make a system which runs continuously. It's not a tech limitation, just how we've preferred things to work so far.
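The loop itself is trivial to write; here's a toy sketch (decide/apply stand in for a model call and tool execution, everything else is hypothetical):

    # Minimal continuously-running agent loop. Nothing below waits for a
    # human between iterations; it stops only when you kill the process.
    import random, time

    class Agent:
        def __init__(self):
            self.memory = []
        def decide(self, observation):
            # Stand-in for a model call choosing the next action.
            return "explore" if observation % 2 else "rest"

    class Environment:
        def observe(self):
            return random.randint(0, 100)
        def apply(self, action):
            return random.randint(0, 100)  # world state changes

    agent, env = Agent(), Environment()
    obs = env.observe()
    while True:
        action = agent.decide(obs)
        obs = env.apply(action)
        agent.memory.append((action, obs))
        time.sleep(1)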
Maybe, but that's not the case here, so it's lost on me why you bring it up.
You're making claims about those systems not being autonomous. When we want to, we create them to be autonomous. It's got nothing to do with agency or survival instincts. Experiments like that have been done for years now - for example https://techcrunch.com/2023/04/10/researchers-populated-a-ti...
Yes, because they aren't. Against your fantasy that some might be brought into existence sometime in the future, I present my own fantasy that there won't be.
I linked you an experiment with multiple autonomous agents operating continuously. It's already happened. It's really not clear what you're disagreeing with here.
No, that was a simulation, akin to Conway's cellular automaton. You seem to consider being fully under someone else's control to qualify as autonomy, at least in certain cases, which to me comes across as very bizarre.