Robots don't need rights right now.
A surprising number of people I meet online self-identify as robots. Look, I totally get that some of us have more trouble relating to other people than others do, and some of us have emotions that don't seem like what we see other people having on TV, in movies, or in junior-high locker-room talk. But that doesn't mean we aren't human.
Please stop putting pictures of robots with faces on ethics reports.
No existing robot needs rights.
Animals don't have rights; they have welfare.
Animals can suffer just like humans – the ways humans suffer all derive from our being animals. But rights entail responsibilities, and no animals but humans can understand the system of justice that allows them to seek redress. Animals all struggle for their wellbeing in their own way, and within human society our system of justice does designate that we should provide them with welfare. We define ways that we are obliged to relieve animal suffering, and we use our justice system to enforce that. But the animals are incapable of participating in that justice system directly themselves, which means they are not persons. Even some people lose their capacity to defend themselves, which makes them not legal persons, but rather wards of others who are able to care for them. Some people have sought to make animals also wards, but so far no court has seen that as appropriate. Our ethics was constructed by our society, and human society is its core.
If robots were to need rights or welfare it would be our fault.
What makes something an artefact is that it was produced by humans. This means robots are more like novels than children – they are authored, not just raised. Because we are social animals and what we do affects others, when we produce artefacts we have responsibilities concerning how we design them. We can and do design AI such that it is easy to replace all its components, such that everything that is learned is backed up, such that no robot is unique or irreplaceable. If we are designing a robot as a piece of art we may not want to do that. But if we are designing a commercial product we are obliged to do that in a way that doesn't hurt humans. That implies we shouldn't set up robots to require access to our limited resources including our time, love, attention, courts, taxes. Of course, some people enjoy giving their time and affection to houseplants, toys, fictional characters and yes robots, and that's fine. But allowing people to act as they like is not the same as building unnecessary obligations into our economy.
Making robots legal persons would only allow corporations to hurt real people.
There is a long legal history that allows corporations to be treated as legal persons. The reason is that corporations are composed of people, so in theory they not only know and can pursue their own rights, but also suffer from loss of time, social status, goods, money, power, and so on. So in theory, it is simpler to extend our system of justice to corporations as if they are people than to invent a new one for them.
Unfortunately, some corporations have taken advantage of this and of bankruptcy law to get out of their very real obligations to humans. Some corporations and even rich individuals create "shell" corporations, designate them as the holders of some liability, and then allow the new shell organisation to go bankrupt. By the magic of bankruptcy, all the legal and financial liabilities disappear with the only cost being the reputation and standing of the shell corporation. A few people who probably knew they had temporary jobs lose those jobs, the shell corporation is shut down, and the only people who really suffer are those who lost their money because they had contracts with the shell corporation or were supposed to be paid by it or it was supposed to have paid taxes. Donald Trump is famous for doing this kind of thing.
Allowing corporations to automate part of their business process, call it an "electronic person", and make it responsible for taxes and liability is basically creating an empty shell organisation. There would literally be no one who suffers when the robot goes bankrupt except the taxpayers who thought they were somehow going to get money from the fact that human workers had been replaced by robots, and any person who was hurt by the robot's poor design, let's say in a traffic accident, and tries to sue it. Oh gosh, the robot is out of money, too bad! I guess we should dissolve the robot since it's bankrupt.
Needless to say, allowing corporations to evade taxes increases wealth inequality. Some are already doing this with "free" Internet services. We need to get better at denominating the exchanges we are doing with contemporary transnational digital businesses.
Corporations that choose to automate parts of their business process should still be liable for the profits they make and the injuries they cause.
If you haven't guessed, this essay is about the European Parliament's Draft Report with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)). I was very concerned that the final report remove the residual language about "electronic persons." The report is already seriously improved, and I hope the European Commission will be more sensible still when it drafts its legislation. Still, hope is not enough – now we need to contribute to the public consultation.
Update: 13 February 2017
The day after I first wrote this blog post, the final European Parliamentary report on AI was released. The language about electronic persons that originally got me concerned has been both weakened and clarified. See the last line of page 17 and most of page 18 of the report, and compare that to section 31 on page 12 of the original draft. Importantly, this report is not in itself legislation; it is a call for legislation. How the European Commission (who write legislation for the EU) chooses to interpret and address this recommendation will be known later in 2017, so work on this issue is ongoing. Right now, there is an open call for public consultation.
Update: 29 November 2019
More recent, more formal academic publications of the above arguments:
- Joanna J. Bryson, Mihailis E. Diamantis, and Thomas D. Grant (2017), Of, For, and By the People: The Legal Lacuna of Synthetic Persons. Artificial Intelligence and Law 25(3):273–291 [Sep 2017]. Two professors of law and I argue that it would be a terrible, terrible idea to make something strictly AI (in contrast to an organisation also containing humans) a legal person. In fact, the only good thing about this is that it gives us a chance to think about where legal personhood has already been overextended (we give examples).
- Joanna J. Bryson (2018), Patiency Is Not a Virtue: The Design of Intelligent Systems and Systems of Ethics, in Ethics and Information Technology 20(1):15-26. Both AI and Ethics are artefacts, so there is no necessary position for AI artefacts in society. Rather we need to decide what we should build and how we should treat what we build. So why build something to compete for the rights we already struggle to offer 8 billion people?
See also:
- Shorter blogpost with a 2x2 table about how I use the terms rights versus welfare above: Why Robots (and Animals) Never Need Rights
- More blunt blogpost linking to other blogposts with more info: Rights are a devastatingly bad way to protect robots
- Not that this is actually relevant, but some people think it is: an older formal work on AI consciousness, Joanna J. Bryson (2012), A Role for Consciousness in Action Selection, in the International Journal of Machine Consciousness 4(2):471-482.