If we are to consider them truly intelligent, then they have to bear responsibility for what they do. If they're just probability machines, then they're the responsibility of their owners.
If they're children, then their parents, i.e. their creators, are responsible.
They aren't truly intelligent, so we shouldn't consider them to be. They're a system that, for a given stream of input tokens, predicts the most likely next output token. The fact that their training dataset is so big makes them very good at predicting the next token in all sorts of contexts (that they have training data for, anyway), but that's not the same as "thinking". And that's why they go so bizarrely off the rails if your input context is some wild prompt that has them play-acting.
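To make that concrete, here's a toy sketch of the decoding step. The five-word vocabulary and the logits are invented stand-ins for what a real model would compute; only the final selection step is faithful to how greedy next-token decoding works:

```python
import numpy as np

# Toy vocabulary; a real LLM has ~100k tokens and billions of parameters
# behind the logits, but the decoding step itself is this simple.
vocab = ["the", "cat", "sat", "on", "mat"]

def next_token(logits: np.ndarray) -> tuple[str, np.ndarray]:
    # Softmax turns the raw scores into a probability distribution over the vocab.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    # Greedy decoding: emit the single most likely token.
    return vocab[int(np.argmax(probs))], probs

logits = np.array([0.1, 2.3, 0.4, 1.1, 0.2])  # made-up scores for illustration
token, probs = next_token(logits)
print(token, probs.round(3))  # picks "cat", the highest-probability token
```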
> If we are to consider them truly intelligent
We aren't, and intelligence isn't the question, actual agency (in the psychological sense) is. If you install some fancy model but don't give it anything to do, it won't do anything. If you put a human in an empty house somewhere, they will start exploring their options. And mind you, we're not purely driven by survival either; neither art nor culture would exist if that were the case.
I agree; I'm trying to point out to the over-enthusiasts that if these systems really had reached intelligence, that would have lots of consequences they probably don't want. Hence they shouldn't be too eager to declare that the future has arrived.
I'm not sure that a minimal kind of agency is super complicated, BTW. Perhaps it's just a matter of connecting the LLM into a loop that continuously processes its sensory input and produces output? But you're right that it lacks desires, needs, etc., so its thinking is undirected without a human.
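A minimal sketch of what such a loop might look like, assuming hypothetical `read_sensors` and `call_llm` helpers standing in for a real input source and model API:

```python
def read_sensors() -> str:
    """Stand-in for whatever counts as 'sensory input' (camera, mic, stdin)."""
    return input("observation> ")

def call_llm(context: str) -> str:
    """Stub; swap in a real model call here."""
    return f"(model output given: ...{context[-80:]})"

context = ""  # the running transcript doubles as short-term memory
while True:
    observation = read_sensors()
    context += f"\nOBSERVE: {observation}"
    action = call_llm(context)
    context += f"\nACT: {action}"
    print(action)
```

The loop itself is trivial; what's missing is anything internal that would make the model prefer one action over another, which is exactly the desire/needs gap.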