Robots don’t keep humans safe. Humans keep humans safe. An industrial machine stays put in its own context; humans can be trained to work around it, and at least notionally consent to be in its presence. A roaming machine means every human nearby must be constantly vigilant, and none of us can revoke consent.

In the roaming case, machine intelligence is supposed to keep us safe. That intelligence is controlled by people or companies that may or may not have the public’s benefit as a motive for their actions. The typical response is to legislate, or to publicly advocate for change. But what if the entity that controls the robots also controls the laws? Then there is no way for ordinary people to revoke consent to the presence of dangerous robots.

So, concretely: what if the CEO of a self-driving car company donates money to a government that grants the company immunity for the actions of its robots? Who do we trust in that case?

I still prefer a world where we solve the robot problem early, with clubs and fire.