Slightly darker full Observer "quote"

The Observer phoned me the night I appeared on Channel 4, and then asked by email whether my quote was OK.  This is what I sent back to them.  It's a bit darker than what they've just printed under the title "Artificial intelligence: can scientists stop ‘negative’ outcomes?"  I was writing this in the back of a taxi, so I can see now that some of what I said was a bit unclear... [thus the interpolations]
What I don’t like is when people say artificial intelligence itself is going to take over. As humanity gets smarter and smarter, we do keep creating dangers. Like climate change, like the global extinctions of biodiversity, like nuclear weapons.  Nuclear weapons are the only thing we've made so far that could really wipe us out, which is why [physicists] criticising AI is ironic. Artificial intelligence is just us making ourselves smarter: it doesn't have to have goals or plans unless we build those in, and if we do then we're responsible, not [the AI].  But even if we don't, AI is [still] one of many tools we've developed that makes our changes come even faster.  So the question is: is it possible for us to keep regulating ourselves, including artificial intelligence, so that we don’t do serious damage? So far we’re doing pretty well at this. We are able to build safe systems, but we do sometimes make mistakes. The mistakes I'm worried about right now are loss of privacy, and [gross] income inequality.  Both of these could give very few people too much power over the rest of us.  We need to regulate the applications of AI and the use of our personal data. We need to build tools and systems that let ordinary people benefit from AI and make good choices, particularly when they vote.


sd marlow said…
I don't get the "making us smarter" argument. Smarter tools help us do more, but it's a bit like Forbidden Planet, and the Krell. Expanding our minds does not equate to being better or safer, it just exposes how we really are. And we are a predatory species.

As to the idea of machines taking over (in a bad way), it seems perfectly valid to suggest that a natural outcome of open access to AI tools is governance by AI itself. That might be as simple as establishing a common legal framework for all states, or even removing the need for individual statehood, or it could mean federal government done at a regional level with no need for a central elected figure.

It's after that point that we may calmly slip into the trope of AI managing us for our own good. Willing enslavement? If most Americans are content with day-to-day life, with no disruption in amenities, does it even matter if "the ruling party" has a heartbeat?