I agree because I'm trying to point out to the over-enthusiasts that if these systems have really reached intelligence, that has lots of consequences they probably don't want. Hence they shouldn't be too eager to declare that the future has arrived.

I'm not sure that a minimal kind of agency is super complicated, BTW. Perhaps it's just a matter of connecting the LLM into a loop that continuously processes its sensory input to produce output? But you're right that it lacks desires, needs, etc., so its thinking is undirected without a human.
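
Roughly something like this minimal sketch, where `call_llm`, `read_sensors`, and `act` are all placeholders for a real model API and whatever I/O the agent actually has, not any particular implementation:

```python
import time

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g. an HTTP request to a model API)."""
    return f"(model output for: {prompt[:40]}...)"

def read_sensors() -> str:
    """Placeholder for whatever sensory input the agent receives."""
    return "current observation"

def act(output: str) -> None:
    """Placeholder: route the model's output to some effector."""
    print(output)

def agent_loop(steps: int = 3) -> None:
    history: list[str] = []  # the loop's only persistent state
    for _ in range(steps):
        observation = read_sensors()
        # Feed recent context plus the new observation back into the model.
        prompt = "\n".join(history[-5:] + [observation])
        output = call_llm(prompt)
        act(output)
        history += [observation, output]
        time.sleep(0.1)  # pacing; a "continuous" agent would just keep running

agent_loop()
```

The point being: the loop itself is trivial. What's missing is anything inside it that wants something, so without a human supplying the goal, the cycle just churns.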