Yesterday Kathleen Richardson spoke to AmonI. She is an anthropologist concerned with the violence humans do to each other, and she studies our attitudes towards robots as one instance of that. I liked her talk overall, but I disputed one thing near the beginning: she used the same verb, "make", for how we produce both children and robots.
We don't make children in the sense that we make robots. We do produce children from almost nothing, and they would die without our support. We educate and shape them. But we don't author them, as we do robots. We can't determine how many limbs they have, whether they have tentacles or wings or wheels, or what kind of light they can sense. Sure, both humans and robots are constrained by the laws of physics and computation. But within those limits, we can define everything about a robot, and next to nothing about a child. We can perfectly determine a robot's goals and desires, even what it is capable of knowing. This is why building and programming a robot is more like writing a novel or building a house than like having children.
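To make the contrast concrete, here is a minimal sketch (in Python; all the names and fields are my own illustration, not any real robot's specification) of what authorship means: the designer writes down the robot's body plan, its senses, its goals, and even the limits of what it can know.

```python
from dataclasses import dataclass, field

@dataclass
class RobotSpec:
    """A designer's complete, authored description of a robot.

    Every field is chosen by a human author -- unlike a child, whose
    'specification' is fixed by evolution rather than by its parents.
    """
    limbs: int = 4                                   # could just as well be 0, 2 or 12
    locomotion: str = "wheels"                       # or "legs", "tracks", "propellers", ...
    light_senses: tuple = ("visible", "infrared")    # which wavelengths it can detect
    goals: list = field(default_factory=lambda: ["patrol the lab", "report anomalies"])
    knowable: set = field(default_factory=lambda: {"lab map", "battery level"})
    # Nothing outside `knowable` can ever be represented by this robot,
    # unless its author later decides otherwise.

# The author, not nature, fixes every one of these choices:
spec = RobotSpec(limbs=6, goals=["follow the human", "carry parts"])
print(spec)
```

No comparable document exists for a child; the nearest analogue, the genome, is written by evolution, not by the parents.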
A child is shaped a bit by their parents' will, more by their parents' culture, but even more by their parents' phylogeny – their genetic, animal heritage. That's what makes children like us. They are designed by evolution to become and replace us. They will be our equivalents – in fact, on average, slightly superior to us – but mostly, effectively, the same. They will carry on most of our genes, culture and ideas, with a few new innovations of their own. Because they are the same kind of animal as we are, they will share many of our feelings, and therefore understand our experience and value our cultural heritage.
We think robots are like us because they have language and reason, and we've spent the vast majority of our history thinking about how we are not like other animals. Nothing non-human has ever had language and reason before, but now something does. Yet artefacts are not animals: we have total control over their form and function in a way that is not possible for evolved life.
I was talking about this with the Guardian reporter Alex Hern this morning (for an upcoming podcast). He brought up the case in which Jeffry van der Goot's Twitter bot randomly tweeted “I seriously want to kill people”. Hern said that van der Goot thought he could not be responsible for something a bot did randomly. Completely wrong. Van der Goot is not responsible for intending to kill, but he is responsible for making a threat. Whether he made it himself or via a program he wrote is irrelevant, though of course the context in which it was expressed (was the Twitter account clearly a joke or spoof?) is very relevant.
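For readers unfamiliar with how such bots work, here is a hypothetical sketch of a typical Markov-chain Twitter bot (not van der Goot's actual code). Its "random" output is just a recombination of text and rules its author supplied, which is why writing the program rather than the tweet does not dissolve responsibility for what gets posted.

```python
import random
from collections import defaultdict

def build_markov_model(tweets, order=1):
    """Map each word to the words that follow it in the author's own corpus."""
    model = defaultdict(list)
    for tweet in tweets:
        words = tweet.split()
        for i in range(len(words) - order):
            model[words[i]].append(words[i + 1])
    return model

def generate(model, length=8, seed=None):
    """Emit a 'random' tweet -- every word of which came from the source corpus."""
    rng = random.Random(seed)
    word = rng.choice(list(model.keys()))
    out = [word]
    for _ in range(length - 1):
        followers = model.get(word)
        if not followers:
            break
        word = rng.choice(followers)
        out.append(word)
    return " ".join(out)

# The bot's author chose the corpus, the algorithm, and the decision to
# publish its output unreviewed -- which is exactly where responsibility lies.
corpus = ["I seriously enjoyed the conference", "people want better robots"]
model = build_markov_model(corpus)
print(generate(model, seed=42))
```

The point of the sketch is that recombination can produce sentences the author never wrote, yet nothing in the output comes from anywhere but the author's choices.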
Similarly, this afternoon I was attending a meeting of a board that is trying to create an ISO standard for robot ethics, and there was a discussion of whether humans trusting robots and robots trusting humans are symmetric cases – whether "trust" could be a single line in the policy. It rapidly emerged that they are not. With respect to humans trusting robots, it was clear that we were recommending that robots be designed to be trustworthy, so the issues we wished to address were those where a robot's behaviour might appear inconsistent, or where for some other reason it might lose the trust it needs to function well, for example in a collaborative setting. It's not a problem when humans lose trust in a robot that's broken or misbehaving; that's as it should be. With respect to robots trusting humans, we must in contrast assume that humans may, by their nature, sometimes be unreliable, so the issue for robot engineers is detecting and handling that case. We don't need a robot to form a bond of trust with a human; we just need it to have, or form, a model of human behaviour. Such a model not only lets the robot better anticipate and collaborate with a human, it also allows the robot to notice when the human is doing something unexpected or unacceptable.
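A minimal sketch of that design stance (the class, thresholds and task names below are my own invention, not anything from the ISO discussion): the robot keeps a predictive model of its human collaborator and flags deviations, rather than forming anything like a bond of trust.

```python
class HumanBehaviourModel:
    """Toy predictive model of a human collaborator.

    The robot does not need to 'trust' the human; it only needs to predict
    what the human usually does and notice when that prediction fails.
    """

    def __init__(self, tolerance=0.2):
        self.expected_durations = {}   # task name -> running average duration (seconds)
        self.tolerance = tolerance     # fractional deviation treated as 'unexpected'

    def observe(self, task, duration):
        """Update the running average for a task the human just performed."""
        prev = self.expected_durations.get(task)
        self.expected_durations[task] = duration if prev is None else 0.8 * prev + 0.2 * duration

    def is_unexpected(self, task, duration):
        """True if the human's behaviour deviates from the model's prediction."""
        prev = self.expected_durations.get(task)
        if prev is None:
            return False               # no expectation yet, so nothing is violated
        return abs(duration - prev) / prev > self.tolerance

# Usage: the robot updates the model as it works alongside the human,
# and slows down or asks for confirmation when behaviour looks anomalous.
model = HumanBehaviourModel()
for d in (10.0, 10.5, 9.8):
    model.observe("hand over part", d)
print(model.is_unexpected("hand over part", 25.0))   # True: investigate, don't 'distrust'
```

The design choice is that a surprising observation triggers caution or a request for confirmation, not a change in the robot's "attitude" towards the human.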
These asymmetries arise from robots being fully authored by humans. Human beings, in contrast, are both constrained and varied by nature. We are also the ultimate moral subjects – in fact, the entire concept of a moral subject evolved and developed to preserve our societies.
I hope that our experience of robots helps us realise how much we are like other animals. By which I do not mean coming to terms with our bad or irrational side. Non-human animals get a bad rap; they are often cooperative and generous, as well as often being violent and abusive, just like us. Understanding our own nature, and nature more generally, might not only help us help ourselves, it might also help us come up with a better model of exactly how much resource we want to dedicate to artefacts participating in our culture.
Photo of iCub assembly from Giorgio Metta, thanks Giorgio!
Addendum: in October 2015, Lantz Flemming Miller published a formal spelling-out of an argument very like the one above, in a human rights journal: Granting Automata Human Rights: Challenge to a Basis of Full-Rights Privilege. I've been publishing this argument since 1998, most recently as Patiency Is Not a Virtue: AI and the Design of Ethical Systems (2016 AAAI Spring Symposium on Ethical and Moral Considerations in Nonhuman Agents). See also my more recent blog posts:
- What Makes a Person? Five Reasons Not to Other AI (September 2016)
- If robots ever need rights we'll have designed them unjustly (January 2017)
Comments
Static, deterministic behavior has its place, but is that really the goal of AI? Hard-coded systems based on what we know at the time, rather than building a system that actually understands? Intelligent looking behavior over synthetic agency?
I read a lot of your posts last night, and this morning I am still wondering if the general field of AI has been damaged (or stunted) by a kind of research bias. It's AI from the perspective of "insert alternate field of research here" rather than an understanding of thinking machines for the sake of thinking machines. I imagine a sign on the door of many computer science rooms that reads: "What can AI do for you?"
It's a disconnect that has no part in any philosophical debate, and again, if it is the core principle of AI research, what other field better serves my understanding?