IJCAI 2013 Panel: The Future of AI: What If We Succeed?

The panelists before the panel started.  Photo: Sophie Tourret

Stuart Russell put together and chaired a panel at IJCAI in Beijing earlier this month.  I would call the topic of the panel AI ethics, but Stuart resisted that.  It was one of many conversations I had that week which altered my thinking about AI & robot ethics (a topic I've been writing about since 1996, due to my experiences with the humanoid robot Cog).  The panel was also one week to the day (but not the hour) after a fantastic full-day meeting Tony Prescott organised in London on Societal Impacts of Living Machines.

There should eventually be a page on the IJCAI website about the panel, but it's been taking some time, so I decided to go ahead and upload the talks here, since they were run off my laptop anyway and all the speakers said they were happy to have them on the Internet.  In speaking order, then, here are the talks (comments introducing the authors are from Stuart's slides):
  • Stuart Russell, Berkeley (chair) slides
  • Henry Kautz, U. of Rochester (President of AAAI, 2010-12) slides
  • Joanna Bryson, U. of Bath (Co-author, EPSRC Principles of Robotics) slides
  • Anders Sandberg, U. of Oxford (Fellow, Future of Humanity Institute) slides
  • Sebastian Thrun, Udacity/Google/Stanford (developer of Google driverless car, etc.) slides
The panel was 90 minutes long and each set of slides was at most 10 minutes, so we had about 45 minutes of real discussion, which I remember as being intense but cordial.  Unfortunately, it's exactly because I'm already starting to forget the details that I decided to go ahead and blog now rather than wait for IJCAI to get the slides up.  I remember we argued about whether humanity really does ever face existential threats. People often talk in apocalyptic terms (cf. the paranoid style in US politics), and they certainly do this more often than mass destruction happens, but on the other hand entire cultures, races, languages and species have been wiped out.  I remember one of the optimists on the panel saying "Yes, but only some people, a group, not everyone," to which I answered "Groups are getting bigger." I thought it was an odd thing to have to argue, sufficiently odd that I'm not identifying which panellist said that, though he's free to out himself.

I also remember a question from the audience about what we were specifically recommending.  I believe Stuart was recommending we think hard and get our act together as the people working on biological weapons had (under Nixon! though the Wikipedia page doesn't currently document the role of the scientists in petitioning for that decision) and do better than the physicists did.  I summarised my own main recommendation like this: that we shouldn't make personified AI sensitive to social status, as all evolved social animals must be.  AI exists now, but most of it is not feigning personal agency. However, in applications where it does, the AI / robot should neither suffer from being subordinate nor desire to be superordinate.  I'm not sure the other three had a specific recommendation, except that Sebastian perhaps wanted us to not be afraid – he was extremely optimistic.

Several of us wondered what kind of turnout we'd get, particularly since it was Friday afternoon and there were no more contributed talks after our panel – maybe everyone would have gone to see Beijing.  But in fact the hall was entirely full, fuller than for many plenaries (including the ones after our panel).  Clearly the attendees of IJCAI, although largely on the engineering end of AI, really care about this topic.

Addendum (5 September):  I found the notes for a blog posting on Science vs the Humanities that I'd been sitting on, which reminded me that another topic discussed in Q&A was the difference between is and ought, and the importance of choosing futures – and whether that can (or cannot) be separated from doing science and engineering.
