Given that natural language is ambiguous, what happens if the LLM makes mistakes?

I'm wondering because, unlike a human, an LLM can't take accountability or responsibility for those mistakes...

I'd say the simple answer is that the buck always stops with the vendor. If Acme Co sells a jetpack that explodes and kills someone, the company doesn't get to deflect liability by pointing out that one of its engineers made a mistake; that may be an explanation, but it's not an excuse. Swapping out Acme Co and its employees for your grandma and her AI/robots doesn't change the fundamental principle.