Robots are owned. Owners are taxed. Internet services cost Information.

As I often recount, I got involved in AI ethics because I was dumbfounded that people attributed moral patiency to (thought I shouldn't unplug) Cog, the humanoid robot, when in fact it wasn't plugged in and didn't work (this was 1993-1994).  The processors of its brain couldn't talk to each other because they weren't properly grounded [earthed].  Just because it was shaped like a human and they'd watched Star Wars, passers-by thought it deserved more ethical consideration than they gave homeless people, who were actually people.

This is not a person.
This picture was taken before Cog's brain was earthed.

I started writing AI ethics papers far more frequently when I thought the administration of George W. Bush was trying to use Ron Arkin to do what Bush had done at Abu Ghraib – blame pawns for despicable policy.  Not that robots could ever be as culpable as privates in the military, but neither is responsible for creating the context of such vast abuse by power.

Now there's a new moral hazard, caused by faulty moral reasoning and transhumanists' desperate quest for power and immortality.  Today, when all my social media icons are showing the EU flag to support the "Remain in the EU" campaign in the UK, the EU parliament has generated headlines about recognising robots as persons in order to tax them.

Because I'd been asked about it by both the UK government and the media, and now in the context of Princeton's Center for Information Technology Policy, I've spent over a year researching whether automation causes economic inequality.  So far I am absolutely unconvinced that there's a significant relationship.  I think inequality comes down to fiscal policy.

Period of low income inequality 1940-1980

The last time we had inequality this great (and its associated political polarization) was just before and after World War I.  AI hadn't even been developed then; Turing was just conceiving of computation and algorithms.  Maybe the horror of the wars, or the economic crash of 1929, or the threat of communism or the unions – something led policy makers to create a context where inequality was kept low and wages kept pace with productivity.  Around 1978 that changed.  I suspect it's because policy makers lost their fear of the USSR, whose amazing economic recovery from Tsarist Russia finally plateaued around then, but I'm the only one saying that (so far).  Anyway, something started allowing the elite to cash in again, and here we are.

Note that all these figures showing how wages no longer track productivity start in 1945.  That's because before 1945 wages didn't track productivity either.  1945-1978 was the exception, though one we should return to.
Period of wages tracking productivity: 1945-1978. Coincidence?

Blaming robots is insane, and taxing the robots themselves is equally insane, because no robot comes spontaneously into being.  Robots are all constructed, and the ones that have an impact on the economy are constructed by the rich.

I blame economists for not keeping up with what wealth even is.  You can't say that Google et al. give everything away for free.  They pay people in services for providing information, and that wage should be taxed.  Or better, the wealth created out of that labour should be taxed.  If economists can't keep up with where the wealth comes from, who cares?  We can see the wealth estimated in a corporation's worth.  Let's just tax that directly, and stop worrying about semantics.

The point of this blogpost is that declaring robots to be persons so you can tax them makes no sense at all.  There's no motivation for it, and it creates a moral hazard: responsibility gets dumped into a pit that you cannot sue or punish.  Yet for some reason someone has put a draft plan to that effect through the European Parliament.  Hopefully we can head this off.

Acknowledgements: the brilliant idea to tax market cap came to me via Rob Calhoun. The realisation that inequality might be causing the even larger problem of political polarisation and social instability came from a talk and the work of Nolan McCarty. We are working on a paper concerning the causal relationship between these, which will be presented at APSA in August. Update, 6 December: the paper with McCarty still isn't finished, but here's the APSA poster we presented, Polarization and Inequality: Towards a Mechanistic Account.  For more on why robots aren't legal persons, see the 3 December blogpost.

Note: this blogpost is from June 2016.  For more recent posts on these topics, click a topic label. I particularly recommend Greater Equality Shouldn't Mean the End of Americans' Dreams. Also, we did succeed in heading off the legal personality thing, partly with this academic article: Of, for, and by the people: the legal lacuna of synthetic persons.


Great work! We are an expert Robot manufacturer in Kuwait. Our vision is to create global presence in power transmission by innovating and developing products of robots services (Restaurant Robot, Delivery Robots, Bank Robot, and Humanoid Robots) to enhance value and satisfaction of our customers.
Joanna Bryson said…
I've published your "comment" even though it is obviously an advertisement. The fact that you have a (drawing of a no doubt fictitious) deeply humanoid and totally white male robot on your web page is very much at odds with your agreement with this post. Robots should not be anthropomorphised: it is a form of deception, and humans should not become desensitised to owning something that appears to be human, even if you could provide robots like the ones you picture.
Unknown said…
Hi, I'm Rafel, a 17-year-old student from Spain! I landed on this page after I read Robot Rights by David J. Gunkel and looked at the references. I'm researching robot rights so I can then write a philosophical "paper" about it (it's not really a paper, it's more of an investigation of the subject). I've been reading your publications (the one I found most appealing was Robots Should Be Slaves), and I became interested in what you have been writing. That's why I decided to do this research about how we should treat robots, and specifically whether we should treat robots as slaves. To do so I'm looking for the pros and cons, different views, and statements from various authors. That's why I wanted to ask you if you could help me a bit by letting me know which are the major papers on this topic, so I can then read them and draw my conclusions. If not, don't worry XD, this message is also to let you know that you've changed the way I look at things and made me interested in this subject. Thanks a lot! Rafel
Joanna Bryson said…
Hi Rafel! You can find my blogposts about "robot rights" tagged with a label; the labels are over on the right, or once you are in an article with labels you can find them at the bottom. Click the ones you are interested in. Or you can see my whole web page on AI ethics. I think the most useful paper is called Patiency is not a virtue: the design of intelligent systems and systems of ethics, which makes the point that when we build an AI system, we get to choose how to build it. So there's no single right way to treat AI; rather, there are right ways to build AI given how we would be best off treating it.
Unknown said…
Hi Joanna! Thank you very much for the information! I read the entire article and found it very interesting! However, I have a doubt about what you are proposing (in general, not in that article specifically). You argue that we shouldn't give robots rights because "We design, manufacture, own and operate robots". We decide how they are created, and since we decide this, we shouldn't give them feelings, or their own objectives, or intelligence... (and if we do give these to them, we determine how they acquire them – more or less, I think that's your position) because that "would be unhealthy and inefficient. More importantly, it invites inappropriate decisions such as misassignations of responsibility or misappropriations of resources".

Ok, up to here I agree and find it pretty appropriate. However, I haven't found any paper where you answer how we should treat robots if they had real feelings, or if they were intelligent and had their own goals... whatever characteristic we choose as necessary for acquiring rights. I think you defend your position by saying that we shouldn't create this type of robot, and that if we do, they would still be our own creation, since we determined how they acquired this type of feelings or knowledge. That they are not like humans, who can get depressed no matter what education you give them (even if you haven't taught them that).

But what if there were robots that jumped that gap and started being intelligent, or sentient, or determining their own goals... by themselves? If they started learning for themselves, as kids do, by observing their environment, if they had real feelings, if they started creating other robots? How should we treat them? We would indeed have created the robot, and we would have told it how to learn new things. But what if it really became intelligent, and found new ways of learning, of feeling... I am not explaining myself very well... I'll put it this way, although I know it's wrong to use this comparison: what if they acted like a person? (And, again, I know it's wrong to use this term, but I wanted to write it because it was very difficult for me to get the message through.) What would we have to do then?
Should we still treat them as slaves (or servants – the terminology is the least of it) because we would still have created them? Or should we give them rights, like we give rights to kids once they grow up and start learning and making decisions for themselves? We also teach kids things, and they observe feelings around them and learn how they work; that may be how they learn to get depressed. At the end of the day (and I know it has been widely debated), if we follow some points of view, we are nothing more than cells. The chemical reactions that happen inside our brains determine how we act, the same way that the chips inside a robot do. They would also learn by watching, the same way that children do.

Having said all that, I thank you a lot if you have read all this. And I apologize if my English is not entirely correct; I'm doing my best XD. I also apologize if you have already answered this question, or this line of argument (as I'm sure you have), but I couldn't find it. Once more, I appreciate what you are doing, as you are making many people interested in this debate, and in philosophy in general. Philosophy doesn't have to be about old, long-dead debates; it can be as new and interesting as this one. Thank you very much for your time, and I hope to hear from you. Thanks a lot! Rafel

PS: By the way, I would love to have your email so we could stay in touch. However, I understand if you'd rather not give it. I think you already have mine – or at least I have published this comment with my email.