Reply to Keith Wiley, "Interstellar Might Depict AI Slavery", written here because WordPress

Your article is beautifully written, and your argument is deeply humanist. But like nearly all sci-fi and (weirdly) most philosophy of AI ethics, you are writing about what it means to be human, not about AI. I titled that book chapter "Robots Should Be Slaves" because the previous two AI ethics papers I wrote had been largely ignored. The title got the argument some attention, which is good. But if you read the chapter, you'll know the real title should be "Robots WILL Be Slaves". The point of that chapter is: since we build and own robots, we will never accord them equal status. Today [Ferguson verdict] it is once again clear that we don't even accord most of humanity equal status. Here's one example among much recent science on the subject: we attribute different emotions and motivations to people just because they are "outgroup", even if they are the same race, class, and status -- other people in our own neighbourhood church. When someone stands on the other side of a fence from us on an issue, we treat them as less human.
But that isn't really the point you are trying to get at, nor the main point of my AI ethics papers. We are both really interested in what makes something a moral patient -- something we are obliged to defend. I don't think AI will ever be anywhere near as humanlike as many people seem to expect. AI is here now [pdf, sorry, my most recent paper], and it's very different from us -- no surprise, since so are chimpanzees, and we have far more in common with them. But even if an AI were exactly like us, what would we have to do to make it OK to treat it badly? I suggest two things:
- 1) Back up AI minds. Even if an AI is in some sense unique (it holds the memories of your household, for example), take away its (or our) fear of its death.
- 2) Don't have it care about social status.
But neither trait needs to be present in something we build. So this isn't "lobotomising"; it's not Planet of the Apes, and it's not One Flew Over the Cuckoo's Nest. Those stories were about what you have to remove from a human to stop them from making trouble, and since we evolved to "make trouble", there's actually no clean way to leave that part out of a human brain. This, instead, is about what you have to leave out of a robot so that it is not damaged by the fact that it is owned. I don't think that's consciousness, emotions, or awareness -- those are merely correlated with moral patiency in humans, and correlation is not causation.
There is one final problem raised by your essay -- the Kantian problem. Even if robots aren't actually deserving of moral patiency, if we believe they are (even unconsciously), then treating them badly will make us worse people, more likely to damage other, real moral patients. It sounds like this is what Interstellar got "wrong" (though obviously it makes for better fiction to be tantalised as you describe). That's why my second AI ethics paper [PDF] (one of the ones that was largely ignored, but has since turned up in the EPSRC Principles of Robotics) recommends:
- 3) AI designers need to make it apparent that AI is not a moral patient, as well as making that true.