Is AI a risk? Reply to Stephen Hawking

My university has asked me to reply to the BBC reports that Stephen Hawking has said that "The development of full artificial intelligence could spell the end of the human race."  We've all had experience of being quoted out of context, so of course I'd rather see what Prof. Hawking has said in his own full statement.  But I'm happy to address what he said in this video, which I assume he approved.

It's true that using AI is a risk.  AI is by its nature applied in contexts where the answers are not known with certainty, so we have to make best guesses.  AI is the way we use machines to help improve those guesses.
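To make that concrete, here is a minimal sketch of a machine improving a guess under uncertainty.  The weather framing and all the data are invented for this example; the point is only that the machine estimates a probability rather than pretending to a certain answer:

```python
# Illustrative sketch only: a machine's "best guess" under uncertainty.
# It cannot know whether it will rain, so it estimates a probability
# from past, similar days.  All data here are invented.

past_days = [
    {"clouds": True,  "rained": True},
    {"clouds": True,  "rained": True},
    {"clouds": True,  "rained": False},
    {"clouds": False, "rained": False},
    {"clouds": False, "rained": True},
    {"clouds": False, "rained": False},
]

def rain_probability(cloudy_today: bool) -> float:
    """Best guess: the frequency of rain on past days like today."""
    similar = [d for d in past_days if d["clouds"] == cloudy_today]
    return sum(d["rained"] for d in similar) / len(similar)

print(f"P(rain | cloudy) ~ {rain_probability(True):.2f}")   # 0.67
print(f"P(rain | clear)  ~ {rain_probability(False):.2f}")  # 0.33
```

More data, or a better model, improves the guess; it never turns the guess into a certainty.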

Could a technology using AI end the human race?  Possibly, but it's very unlikely.  Humanity, and life in general, is fantastically good at surviving great challenges.  And AI is built by people, for human purposes, and we can regulate the activities of people to try to keep those activities from being damaging.  Of course, sometimes we make a mistake, and many people die because we couldn't stop someone or something from killing them.  But there are more people alive now than ever before, so on the whole we must be doing well.

Still, why should we build a technology that has even the remotest possibility of ending the human race?  Because there are many possible things that might end the human race (notably, as Prof. Mark Riedl points out, something built by physicists), and most of them aren't very likely either.  But as I said earlier, AI is our way of using machines to make ourselves smarter.  So AI has a bigger chance of saving us than of wiping us out, if it helps us understand how to make ourselves safer, for example by governing ourselves better.

If you need to take someone to the hospital right now to save their life, and you have a car, would you drive it?  Driving a car always carries a risk of dying in a car accident.  But every day many people take that risk not only to save someone else's life by taking them to the hospital, but for completely mundane things like getting a shirt of a different colour.
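As a back-of-the-envelope sketch (every probability below is invented purely for illustration, not a real accident statistic), the trade-off looks like this:

```python
# Invented numbers, purely to illustrate the risk trade-off.
p_fatal_crash = 1e-7      # assumed chance of dying on one short drive
p_dies_untreated = 0.5    # assumed chance the patient dies if we wait

expected_deaths_if_drive = p_fatal_crash    # patient reaches the hospital
expected_deaths_if_wait = p_dies_untreated  # patient goes untreated

print(f"drive: {expected_deaths_if_drive:.7f} expected deaths")
print(f"wait:  {expected_deaths_if_wait:.7f} expected deaths")
# The tiny risk of the drive is dwarfed by the benefit of going.
```

The point is not the particular numbers; it is that a small risk is routinely worth taking when the expected benefit is large.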

I think what's worst about the BBC article is the title: "Stephen Hawking warns artificial intelligence could end mankind."  AI could end mankind in the same way that asphyxia could end mankind -- not as an intentional actor, but as a side effect of something else, something generated by our own culture.  The tools that AI offers might save us time and trouble, might even save our lives, but they could also be powerful weapons for predicting and controlling our behaviour.  What we need to do as a culture is worry about who and what we want to be: how we want to regulate governments and companies, whether we want our parents or children to know our every move.  If our employers own our email, they could mine the information there to learn quite a lot about our personal habits and psychological inclinations, as well as how we conduct our jobs.  Scaring people off the technology that makes us smarter is the wrong way to address the problems that are already here today.
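To see how little machinery such mining takes, here is a deliberately naive sketch; the messages and the keyword list are invented for this post, and real systems would use far more sophisticated models:

```python
# Illustrative sketch only: even naive keyword counting over a mailbox
# hints at personal habits.  Messages and keywords are invented.
from collections import Counter

mailbox = [
    "Running late again, gym session ran over",
    "Can we move the meeting? Doctor appointment at 3",
    "Late night again, will reply tomorrow",
]

habit_words = {"late", "gym", "doctor", "tomorrow"}

counts = Counter(
    word
    for message in mailbox
    for word in message.lower().replace(",", " ").replace("?", " ").split()
    if word in habit_words
)
print(counts)  # frequencies hint at schedule, health, and sleep habits
```

A real employer's tools would be far more capable than this, which is exactly why who owns and regulates such data matters.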

[Photo caption: Calculators don't take over your desk.  Phones don't take over your pocket.  Cats take over, but they don't write songs.  (photo: Becca Bird.)]
I've said several times before that just because something is intelligent doesn't mean it's motivated like a human.  Calculators don't take over your desk.  Cats don't write songs.  But the steps and protections we are taking now to stop AI-augmented humans from setting off enough nuclear bombs to kill mammalian life on this planet would also stop a wandering or even intentionally destructive AI from doing the same thing.  Tens of thousands of people are employed in making complex systems safer.  Many of us study systems engineering, including intelligent systems engineering.  So while I think the ways that AI is augmenting our society need to be studied, understood and regulated, it's not because I think AI might destroy humanity or the world.  It's because we are already changing the world, including society.  We were before AI, and we are changing it faster now.  We need to notice.  But not to stop.


For an hour-long talk from me on this subject at Nick Bostrom's Future of Humanity Institute (Oxford), plus a 30-minute discussion with his colleagues after the talk, see the YouTube video "Containing the intelligence explosion: the role of transparency" by Dr Joanna Bryson.

Comments

sd marlow said…
The kinds of people who read (or even write) a news headline like that without knowing how the hamburger is made are also the kinds of folks who are complacent about information gathering and blasé about targeted messaging.