I've been arguing for some months now in public talks that AI cannot be a legal person because suffering in well-designed AI is incoherent. This is not actually my own argument; it is due to S. M. Solaiman, and is set out in their brilliant recent article Legal personality of robots, corporations, idols and chimpanzees: a quest for legitimacy.
The great thing about Solaiman's article is that they make it clear why corporations are legal persons but AI and chimpanzees aren't. Basically, the notion of a legal person has been developed to synchronise with our system of justice. Justice, among other things, requires means of redress and coercion. A legal person must know and be able to claim their rights – they must be able to assert themselves as members of a society. This is why non-human animals (and some incapacitated humans) are not legal persons. I'm happy with definitions of "know" and "assert" that would mean intelligent artefacts could do this, and indeed organisations of humans can meet these criteria. However, legal persons must also, like any person, care about the kinds of sanctions that justice can use against them, so that justice can bind society together. Justice is part of the way a society composes itself of individuals; it is part of the glue that turns a set of individuals into a group. To date this has not involved direct "joystick" manipulation, but rather sanctions that individuals find aversive, such as loss of time and/or social status. Non-human animals can definitely suffer; as I said, the only thing that keeps them from being legal persons is their incapacity to understand and argue for their rights. But can AI suffer? Solaiman concludes "not yet", but I would like to go further.
Pain, suffering, and concern for social status are essential to a social species, and as such they are integral to our intelligence. I've read it argued that one of the characteristics of a sociopath is missing this part of what it is to be human, but I'm not an expert on clinical psychology, so I don't know whether this is ever truly possible even in these extreme cases. But safe, well-architected (designed) AI tends to be modular. My students and I build systems of emotions for AI, and I'm happy to have some variable that represents an emotion and on which other behaviour depends. I don't have an issue with saying a robot is excited or depressed if its action expression has been increased or inhibited, respectively.
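To make the "emotion as a variable" point concrete, here is a minimal Python sketch. It is not the architecture my students and I actually use; the names (EmotionState, arousal, action_expression) and the simple linear modulation are illustrative assumptions only.

```python
from dataclasses import dataclass

@dataclass
class EmotionState:
    # Hypothetical single-variable emotion: positive ~ "excited", negative ~ "depressed".
    arousal: float = 0.0

def action_expression(base_intensity: float, emotion: EmotionState) -> float:
    """Scale how strongly an action is expressed according to the emotion variable."""
    gain = 1.0 + emotion.arousal       # simple linear modulation; an assumption, not a claim
    return max(0.0, base_intensity * gain)

robot = EmotionState(arousal=0.5)      # "excited": expression increased
print(action_expression(1.0, robot))   # 1.5
robot.arousal = -0.5                   # "depressed": expression inhibited
print(action_expression(1.0, robot))   # 0.5
```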
But I do have an issue with saying that a well-designed robot is suffering, because suffering is defined to be something sufficiently aversive that you would avoid it. Yet anything we insert into a well-designed AI, we (or conceivably it) could extract and isolate. That isn't the nature of suffering.
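To illustrate the "extract and isolate" point in the same hypothetical style: in a modular design, an aversive signal is just another component feeding action selection, and the designer (or conceivably the system itself) can simply disconnect it. Everything named below (AversiveSignal, choose_action, the toy costs) is an assumption for illustration, not any real system.

```python
class AversiveSignal:
    """Hypothetical module producing a penalty the controller normally avoids."""
    def __init__(self) -> None:
        self.enabled = True

    def value(self, damage: float) -> float:
        return damage if self.enabled else 0.0

def choose_action(costs: dict, signal: AversiveSignal) -> str:
    # Pick the action with the lowest aversive cost.
    return min(costs, key=lambda action: signal.value(costs[action]))

signal = AversiveSignal()
costs = {"touch_flame": 1.0, "back_away": 0.0}
print(choose_action(costs, signal))   # back_away: the signal shapes behaviour
signal.enabled = False                # the "suffering" module is switched off...
print(choose_action(costs, signal))   # touch_flame: ...and no longer constrains anything
```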
I am not saying I cannot conceive of AI that could suffer. This may not be likely or even possible, but if we did actually construct AI by scanning in a human or animal (really, it would have to be the whole thing, not just the brain – brain cells reference muscle cells and suchlike), then no doubt it could suffer. But that would not be well-designed AI; rather, it would be a sort of clone. Owning human clones strikes me as deeply unethical – exactly as unethical as owning humans. More generally, building synthetic clones of animals strikes me as deeply inefficient; we would be better off working with the animals we have than constructing clones of them from materials that aren't as well suited to the purpose as their original biology.
So I find it extremely unlikely that we will ever have suffering AI, but even if we do, what I recommend (in keeping with the British EPSRC Principles of Robotics) is that it should never be a legal product for manufacture and purchase. And therefore any AI that would ever be bought or sold should not be considered a legal person.
Note that there are also other reasons not to make AI a legal person, most importantly that taxing robots lets the corporations, who actually make the decision to use robots rather than humans, off the hook for that decision, displaces those corporations' liability, and also affords the opportunity to hack up robots to minimise the amount of tax paid. Robots and AI are not human; they do not come in discrete, pre-defined units. They are artefacts, and as such they are our own, authored responsibility.
To get back to corporations, Solaiman says these can suffer to the extent that the humans in them suffer, and/or to the extent that losing their assets is equivalent to human suffering. The latter argument is weaker, I think, but it has a lot of historic legal precedent. Nevertheless, the fact that corporations don't really suffer may well be why we have a number of problems with corporations being treated as legal persons. Compounding these problems by declaring AI or robots to be legal persons would almost certainly not be wise.
update, March 2017: If you're wondering how a river can be a legal person given the above, Solaiman explains that too, in the part of their paper about idols. Basically, an idol can also be damaged, and the humans who worship it suffer when it is. So not just any river is a legal person; only a river effectively declared an idol is.
[Image: From Gunshow by KC Green]
update, Oct 2020: Later in March 2017, in response to the EU pondering making robots legal persons, two law colleagues and I wrote Of, for, and by the people: the legal lacuna of synthetic persons.