Chess wasn't killed by AI: The gestalt of being human

The BBC World Service phoned me this morning to ask me to join a discussion on their show "World Have Your Say" about Google's AlphaGo victory and what it means for AI versus humans.  I was on the show a couple of hours later with Anil Seth, Nick Bostrom, someone who actually knew Go, and others.  The audio is here:  http://www.bbc.co.uk/programmes/p03l73cr.

A few highlights and critical points:
  • AI hasn't killed chess, and if anything ever does (doubtful), it might be the increasing interest in go that AI has generated, at least in the West.  Similarly, we didn't stop doing math when we got calculators.  It doesn't matter whether machines can do things better than we can; we often want or need to do them anyway.
  • AI is not killing employment either.  Everyone was in a panic two years ago, but now both US & UK unemployment rates are low.  Employment tends to track the economy, not technology.
  • What AI does change is who can do what.  Google is now the most valuable company in the world; it has passed Apple, which also uses a lot of AI.
Farming is a big deal but…
Look, very few of our jobs are about making sure we get enough food, shelter or security.  We employ people when we have money, and we will keep doing that because we like to work and to have people work for us.  We enjoy the social and competitive aspects of employment.

And there is no single magic part of being human.  It's not emotions or creativity or wisdom or morality or dexterity.  We can study any of those scientifically, and then we can construct models of them.  We aren't just cracking games now because they're easier; we've also been working on them longer.  Robotics is coming along too, and those other capacities could as well if they were worth the investment and effort.

But even if every aspect of being human is something we could model in AI, that doesn't mean AI would itself necessarily become human.  Putting everything it is to be human into one machine, and balancing that machine's motivations to be as apelike as ours, would be a titanic, ultimately useless, and probably unethical job.  That effort would be better spent on big questions like income inequality, political polarisation, climate change, migration, and integration.

Making machines that expand our talents is no doubt useful.  But no one talent produces the gestalt that makes humans moral subjects.

Update:  For more on how inequality, not AI, is the real job killer, see the posts under the inequality label, particularly "Greater Equality Shouldn't Mean the End of Americans' Dreams".