No one should trust AI – AND – Presenting robots as people stops us thinking clearly about AI

My office at Bath. Robots via AXA; photo via Rachel Sheer.
I haven't been blogging here much this year because I was trying to get a book written in 2018 (the new target for the first draft is July 2019 :-/ ), and then for the last three months of the year I've been travelling. But also because my writing has been in demand, so when I do manage to write things, they go other places.



These two pieces are probably particularly worth catching. For the United Nations (UN) University Centre for Policy Research, I wrote No one should trust AI. The text at that link should be visible and accessible to everyone, so please click it.

And for New Scientist (a British magazine) I wrote Presenting robots as people stops us thinking clearly about AI. If you don't have access to that (I do when I VPN through my university), you might want to read the author's final version (below), though it's not nearly as good or as tight. That's part of why we pay for publishing – editors can be awesome.


Why robots don't testify

When Saudi Arabia pretended to give citizenship to a robot, obviously it was a joke. No one can be a citizen in Saudi Arabia unless they are Muslim. Yet many people believed, and still do believe, that a robot is now a citizen, as if it makes sense to have citizens you can mass produce, buy and sell. I first began researching AI ethics in 1993, after repeatedly watching PhD students from MIT and Harvard pass by what was essentially a statue made of motors, vaguely shaped like a human, and say “it would be unethical to unplug that.” The robot was under construction and didn’t work at all until some years later; its “brain” proved to have been improperly earthed during the year I worked on it. Other, functional robots shaped like insects lay around but attracted no moral attention.
People desperately want robots and AI to be people. Why? Probably because we still identify most strongly as logical, rational language users, and still haven’t come to terms with the fact that we are also apes. All of our values and motivations derive from the problems a group of apes has staying alive, both as individuals and in coordinating their actions to defend against other organisms far larger and far smaller than themselves. We’ve wiped out nearly all the predators and done a pretty good job on disease too; now our primary problems come down to sharing space and resources with other humans, and maximising the resources our planet provides us, which means worrying about sustainability and the climate.
But this confusion alone doesn’t explain the nihilistic generosity of wanting to treat robots as our children or ourselves. Many people confuse computation with mathematics, and don’t realise that computers break down and “die” far faster than humans do, or that computation requires time, space, and energy. Robots are not our eternal, super-powered offspring. They are extensions of the individuals or corporations that built them: deliberately designed, and uploading our data to mysterious data farms vulnerable to cyberattack.
It’s no longer a cute trick to pretend that a robot can testify in front of a parliament, or to have a human tweeting in the first person for a robot on another planet. For as long as we live in a democracy, the more people who confuse robots with a newly discovered alien species that is surprisingly human-like, the easier it will be for us to be manipulated into situations where corporations can limit their legal and tax liability just by fully automating their business processes. That means we would be encouraging corporations to fire all their human employees, reducing the amount of money they pay to support our infrastructure and the wellbeing of the humans in our society.
Robots aren’t people; they are the ultimate shell company. They cannot be dissuaded by human justice, because they haven’t evolved, as we have, to hate isolation or the loss of social status. They cannot be taxed as people, because they are not countable like people. Trust me, I’m a programmer: whatever legislation is written about how AI or robots will be taxed, we will design and build AI that pays very little. To maintain economic and legal coherence, our society needs to acknowledge that there is a fundamental difference when something is an artefact: it is designed and built, and therefore someone’s responsibility. It is actually easier to trace accountability through AI than through humans, because we can, if we want, log every step of the engineering of the system, everything a robot perceives, and every decision it takes and why. We have done this already for driverless cars, because the automotive industry is well regulated; this is why we know exactly what happened in every death involving a driverless car so far. We can easily extend this sort of accountability into every other kind of intelligent artefact. And we should.
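To make that logging claim a bit more concrete, here is a purely illustrative sketch of my own (not from the article) of what a tamper-evident decision log for an intelligent artefact could look like. Everything here is hypothetical – the names `DecisionLog` and `record`, the `decisions.jsonl` file, the braking example – and real systems, such as automotive event data recorders, are of course far more involved.

```python
import hashlib
import json
import time

class DecisionLog:
    """Append-only audit log for an AI system's decisions.

    Hypothetical sketch: each entry records what the system perceived,
    what it decided, and why. Each entry also carries the hash of the
    previous entry, so the log forms a chain that makes after-the-fact
    tampering detectable.
    """

    def __init__(self, path):
        self.path = path
        self.prev_hash = "0" * 64  # placeholder hash for the first entry

    def record(self, perception, decision, rationale):
        entry = {
            "timestamp": time.time(),
            "perception": perception,   # what the system sensed
            "decision": decision,       # what it chose to do
            "rationale": rationale,     # why (e.g. which rule or score fired)
            "prev_hash": self.prev_hash,
        }
        serialized = json.dumps(entry, sort_keys=True)
        self.prev_hash = hashlib.sha256(serialized.encode()).hexdigest()
        with open(self.path, "a") as f:
            f.write(serialized + "\n")

# Example: logging one braking decision for a (hypothetical) driverless car
log = DecisionLog("decisions.jsonl")
log.record(
    perception={"lidar_object": "pedestrian", "distance_m": 12.4},
    decision="brake",
    rationale="object in path within stopping distance",
)
```

The hash chain is the design point: because each record commits to the one before it, quietly editing or deleting an entry later breaks the chain, which is exactly the property accountability requires.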
