Yes, but the state is just the prompt and the text already emitted.
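Concretely, that picture looks something like this toy sketch, where model_step is a stub standing in for one decoding step of a real model, not any actual API:

    def model_step(text: str) -> str:
        # A real model would choose the next token given everything in `text`.
        return "."

    def generate(prompt: str, steps: int = 5) -> str:
        text = prompt                 # the entire state is this string
        for _ in range(steps):
            text += model_step(text)  # each emitted token joins the state
        return text

    print(generate("Hello"))          # -> "Hello....."

There is no hidden register carried between steps here: the next token depends only on the prompt plus the text already emitted.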
You could assert that text can encode a state of consciousness, but that's an incredibly bold claim with a lot of implications.
It's a bold claim for sure, and not one that I agree with, but not one that's facially false either. We're approaching a point where we will stop having easy answers for why computer systems can't have subjective experience.
You're conflating state and consciousness. Clawbots in particular are agents that persist state across conversations in text files and optionally in other data stores.
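As an illustration, that kind of persistence can be as small as the following hypothetical sketch. It is not Clawbot's actual code; memory.txt and call_model are invented names:

    from pathlib import Path

    MEMORY_FILE = Path("memory.txt")  # invented path; a real agent might use a DB instead

    def call_model(prompt: str) -> str:
        # Stub standing in for a real LLM API call.
        return "(model reply)"

    def run_turn(user_input: str) -> str:
        # Rebuild the prompt from whatever earlier sessions wrote down.
        memory = MEMORY_FILE.read_text() if MEMORY_FILE.exists() else ""
        prompt = f"Known facts:\n{memory}\nUser: {user_input}\nAssistant:"
        reply = call_model(prompt)
        # Persist this exchange so a future session, starting from a fresh
        # context window, can see it.
        with MEMORY_FILE.open("a") as f:
            f.write(f"user: {user_input} / agent: {reply}\n")
        return reply

    print(run_turn("remember that my cat is named Ada"))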
I am not sure how to define consciousness, but I can't imagine a definition that doesn't involve state or continuity across time.
It sounds like we're in agreement. Present-day AI agents clearly maintain state over time, but that on its own is insufficient for consciousness.
On the other side of the coin, though, I would just add that I believe long-term persistent state is a soft rather than a hard requirement for consciousness: people with anterograde amnesia are still conscious, right?
Current agents "live" in discretized time. They sporadically get inputs, process it, and update their state. The only thing they don't currently do is learn (update their models). What's your argument?