As I often recount, I got involved in AI ethics because I was dumbfounded that people attributed moral patiency to (thought I shouldn't unplug) Cog, the humanoid robot, when in fact it wasn't even plugged in, and didn't work (this was 1993-1994). The processors of its brain couldn't talk to each other because they weren't properly grounded [earthed]. Just because it was shaped like a human and they'd watched Star Wars, passers-by thought it deserved more ethical consideration than they gave homeless people, who were actually people.
This is not a person. This picture was taken before Cog's brain was earthed.
I started writing AI ethics papers far more frequently when I thought the administration of George W. Bush was trying to use Ron Arkin to do what Bush had done at Abu Ghraib: blame pawns for despicable policy. Not that robots could ever be as culpable as privates in the military, but neither is responsible for creating the context of such vast abuse by those in power.
Now there's a new moral hazard caused by faulty moral reasoning and transhumanists' desperate quest for power and immortality. Today, when all my social media icons are showing the EU flag to support the "Remain in the EU" campaign in the UK, the EU Parliament has generated headlines about recognising robots as persons in order to tax them.
Because I'd been asked about it by both the UK government and the media, and now in the context of Princeton's Center for Information Technology Policy, I've spent over a year researching whether automation causes economic inequality. So far I am entirely unconvinced that there's a significant relationship. I think inequality comes down to fiscal policy.
Period of low income inequality, 1940-1980.
The last time we had inequality this great (and its associated political polarization) was just before and after World War I. AI hadn't even been developed then; Turing was only just conceiving of computation and algorithms. Maybe it was the horror of the wars, or the economic crash of 1929, or the threat of communism or the unions; something led policy makers to create a context where inequality was kept low and wages kept pace with productivity. Around 1978 that changed. I suspect it's because policy makers lost their fear of the USSR, since the USSR's remarkable economic recovery from Tsarist Russia finally plateaued around then, but I'm the only one saying that (so far). Anyway, something started allowing the elite to cash in again, and here we are.
Note that all these figures showing wages no longer tracking productivity start in 1945. That's because before 1945 wages didn't track productivity either. 1945-1978 was the exception, though one we should return to.
Period of wages tracking productivity: 1945-1978. Coincidence?
Blaming robots is insane, and taxing the robots themselves is just as insane, because no robot comes spontaneously into being. Robots are all constructed, and the ones that have an impact on the economy are constructed by the rich.
I blame economists for not keeping up with what wealth even is. You can't say that Google et al. give everything away for free: they pay people for providing information, and that wage should be taxed. Or better, the wealth created out of that labour should be taxed. If economists can't keep up with where the wealth comes from, who cares? We can see the wealth estimated in a corporation's worth. Let's just tax that directly, and stop worrying about semantics.
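To make that concrete, here is a minimal sketch of what taxing a corporation's observable worth directly might look like. The tax rate and the example figures below are purely hypothetical illustrations, not numbers proposed in this post; the only point is that market capitalisation is visible and the arithmetic is trivial.

```python
# Minimal sketch: a tax levied directly on market capitalisation,
# rather than on profits or on "robots". The rate and all figures
# here are hypothetical illustrations, not policy proposals.

def market_cap_tax(share_price: float, shares_outstanding: float,
                   annual_rate: float = 0.002) -> float:
    """Return the annual tax owed on a firm's market capitalisation.

    market cap = share price * shares outstanding
    tax owed   = market cap * annual_rate
    """
    market_cap = share_price * shares_outstanding
    return market_cap * annual_rate

# Hypothetical firm: 1 billion shares trading at $500 each is a
# $500B market cap; at a 0.2% annual rate the tax owed is $1B.
print(f"${market_cap_tax(500.0, 1e9):,.0f}")
```

The hard questions about such a tax are political, not computational; the sketch only shows that "where the wealth comes from" need not be settled before taxing it.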
The point of this blogpost is that declaring robots to be persons so you can tax them makes no sense at all. There's no motivation for it, and it creates a moral hazard: it dumps responsibility into a pit that you cannot sue or punish. Yet for some reason someone has put a draft plan to that effect through the European Parliament. Hopefully we can head this off.
Acknowledgements: that brilliant idea to tax market cap came to me via Rob Calhoun. The realisation that inequality might be causing the even larger problem of political polarisation and social instability came from a talk by, and the work of, Nolan McCarty. We are working on a paper concerning the causal relationship between these, which will be presented at APSA in August. Update 6 December: the paper with McCarty still isn't finished, but here's the APSA poster we presented, Polarization and Inequality: Towards a Mechanistic Account. For more on why robots aren't legal persons, see the 3 December blogpost.
Note: this blogpost is from June 2016. For more recent posts on these topics, click a topic label. I particularly recommend Greater Equality Shouldn't Mean the End of Americans' Dreams. Also, we did succeed in heading off the legal personality thing, partly with this academic article: Of, for, and by the people: the legal lacuna of synthetic persons.
Comments
OK, up to this point I agree, and I find it quite apt. However, I haven't found any paper where you answer how we should treat robots if they had real feelings, or if they were intelligent and had their own goals... whatever characteristic we choose as necessary for acquiring rights. I think you defend your position by saying that we shouldn't create this type of robot, and that, if we did, they would still be our own creation, since we determine how they acquired that type of feeling or knowledge. That they are not like humans, who, no matter what education you give them, can get depressed (even if you haven't taught them that).
But what if there were robots that jumped that gap and started being intelligent, or sentient, or determining their own goals for themselves? If they started learning for themselves, as kids do, by observing their environment; if they had real feelings; if they started creating other robots? How should we treat them? We would indeed have created the robot, and we would have told it how to learn new things. But what if it really became intelligent, and found new ways of learning, of feeling... I am not explaining myself very well. I'll put it this way, although I know it's wrong to use this comparison: what if they acted like a person? (Again, I know it's wrong to use this term, but it was very difficult for me to get the message across otherwise.) What would we have to do then?
Should we still treat them as slaves (or servants; the terminology matters least here) because we would still have created them? Or should we give them rights, the way we give rights to a kid once it grows up and starts learning and making decisions for itself? We also teach kids things, and they observe the feelings around them and work out how those feelings function; that may be how they learn to get depressed. In the end (and I know it has been widely debated), on some points of view we are nothing more than cells: the chemical reactions that happen inside our brains determine how we act, the same way that the chips inside a robot do. Robots would also learn by watching, the same way that children do.
Having said all that, thank you very much if you have read all this. And I apologize if my English is not entirely correct; I'm doing my best XD. I also apologize if you have already answered this question, or this line of argument (as I'm sure you have), but I couldn't find it. Once more, I appreciate what you are doing, as you are getting many people interested in this debate, and in philosophy in general. Philosophy doesn't have to be about old, long-dead debates; they can be as new and interesting as this one. Thank you very much for your time, and I hope to hear from you. Thanks a lot! Rafel
PS: By the way, I would love to have your email so we could stay in touch. However, I understand if you'd rather not give it out. I think you already have mine; or at least I have published this comment with my email.