Reply to Keith Wiley, "Interstellar Might Depict AI Slavery"

This is a reply to Keith Wiley's "Interstellar Might Depict AI Slavery", written here because WordPress kept eating my comments.
Your article is beautifully written, and your argument is deeply humanist. But like nearly all sci-fi & (weirdly) most philosophy of AI ethics, you are writing about what it means to be human, not about AI. I titled that book chapter "Robots Should Be Slaves" because the previous two AI ethics papers I wrote had been largely ignored. The title got the argument some attention, which is good. But if you read the chapter, you'll know the real title should be "Robots WILL Be Slaves". The point of that chapter is that since we build & own robots, we will never accord them equal status. Today [Ferguson verdict] it is once again clear that we don't even accord most of humanity equal status. Here's one example, of many, from recent science on the subject: we attribute different emotions and motivations to people just because they are "outgroup", even if they are the same race, class, and status -- other people in our own neighbourhood church. If someone is on the other side of a fence from us on an issue, we treat them as less human.
But that isn't really the point you are trying to get at, nor the main point of my AI ethics papers. We are both really interested in what makes something a moral patient -- something we are obliged to defend. I don't think AI is ever going to be anywhere near as humanlike as many people seem to expect. AI is here now [pdf, sorry, my most recent paper], and it's very different from us, which is no surprise -- so are chimpanzees, and we have a lot more in common with them. But even if AI were exactly like us, what would we have to do to make it OK to treat it badly? I suggest two things:
  • 1) Back up AI minds. Even if an AI is in some sense unique (it holds the memories of your household, for example), backups take away its (and our) fear of its death.
  • 2) Don't have it care about social status. 
We evolved animals spend all our spare time exploring ourselves and our relative merits -- how good are we? How much time and attention will our friends give us? How many followers can we get? That's a natural urge, part of the survival-of-the-fittest thing: evolution wants us to know which genes (and maybe which memes) we should be working to reproduce. So we get depressed (less active) when we have evidence we are inferior -- like when we serve. Though various religions and other lifestyle hacks try to ameliorate this by taking the blame for your status away from you, the fact is that being low status is unhealthy.
But it doesn't need to be that way for something we build. So this isn't "lobotomising"; it's not Planet of the Apes, and it's not One Flew Over the Cuckoo's Nest. Those were about what you have to remove from a human to stop them from making trouble, and since we evolved to "make trouble", there's actually no clean way to leave that part out of a human brain. But this is about what you have to leave out of a robot so it isn't damaged by the fact that it's owned. I don't think that's consciousness, emotions, or awareness -- those are merely correlated with moral patiency in humans, and correlation is not causation.
There is one final problem raised by your essay -- the Kantian problem. Even if robots aren't actually deserving of moral patiency, if we believe they are (even unconsciously), then treating them badly will make us worse people, more likely to damage other, real moral patients. It sounds like this is what Interstellar got "wrong" (though obviously it makes for better fiction to be tantalised, as you describe). That's why my second AI ethics paper [PDF] (one of the ones that got largely ignored, but which has since turned up in the EPSRC Principles of Robotics) recommends:
  • 3) AI designers need to make it apparent that AI is not a moral patient, as well as making that true. 
With these three guidelines in place, then yes, I think AI "Slavery" (ownership) is ethical.

Comments

Keith Wiley said…
I wrote a comment, submitted it, and then it never appeared. That's very annoying.

Here's a retyping, but shorter: I agree with practically everything you said, except "I don't think AI is ever going to be anywhere near as humanlike as many people seem to expect." I agree that modern topic-specific AI consists of artificial algorithms that don't act like humans. They are not AGI, just AI. AGI might also be achievable that way, and then it wouldn't be humanlike either, but there is another way too: massive neural nets. If we build artificial NNs that approximate the brain's physiology, topology, etc., then we simply don't know if they will be essentially human. We haven't encountered such systems yet and can't predict the degree of their humanness. They might turn out to be quite human, in fact.

You propose that we not create such systems in the first place, to make it clear that AI doesn't need human rights... but of course, that's just advice: someone's gonna build it! :-)

Anyway, I basically agree with your response and really appreciate you taking the time to write it.

Cheers!

(I'm copying this to my clipboard this time, just in case.)
Joanna Bryson said…
Re: the comment thing -- it happened to me three times on the WordPress site, but I've learned not to trust comment fields and always copy my comments out before submitting.

I'm guessing this is Keith? I'm afraid I'm not a big believer in the brain-scanning / mind-uploading version of AI heaven: our brains' connections only make sense in the context of the bodies they are integrated with, so even if it were technically feasible, you would need to clone the entire body. So I'm not very worried anyone will build it. I'm far more worried that people will pretend to have built it. See the Dennett & Norvig interview linked on my AI ethics page: http://www.cs.bath.ac.uk/~jjb/web/ai.html
Joanna Bryson said…
This is my favourite mind-uploading article (well, I like the discussion!): http://www.worldscientific.com/doi/abs/10.1142/S179384301240015X