The Observer phoned me the night I appeared on Channel 4, then emailed to ask if my quote was OK. This is what I sent back to them. It's a bit darker than what they've just printed under the title "Artificial intelligence: can scientists stop ‘negative’ outcomes?" I wrote this in the back of a taxi, so I can see now that some of what I said was a bit unclear... [thus the interpolations]
What I don’t like is when people say artificial intelligence itself is going to take over. As humanity gets smarter and smarter, we do keep creating dangers: climate change, the global extinction of biodiversity, nuclear weapons. Nuclear weapons are the only thing we've made so far that could really wipe us out, which is why [physicists] criticising AI is ironic. Artificial intelligence is just us making ourselves smarter: it doesn't have to have goals or plans unless we build those in, and if we do, then we're responsible, not [the AI]. But even if we don't, AI is [still] one of many tools we've developed that makes our changes come even faster. So the question is: is it possible for us to keep regulating ourselves, including artificial intelligence, so that we don’t do serious damage? So far we’re doing pretty well at this. We are able to build safe systems, but we do sometimes make mistakes. The mistakes I'm worried about right now are loss of privacy and [gross] income inequality. Both of these could give very few people too much power over the rest of us. We need to regulate the applications of AI and the use of our personal data. We need to build tools and systems that let ordinary people benefit from AI and make good choices, particularly when they vote.