Note: this post is from 2013. I think I write better blog posts now :-) But I'm thinking of writing a book with a similar title.
I just watched Daniel Dewey's TEDx Vienna talk, The Long-Term Future of AI, and it drove me crazy. On the one hand, I agree with his description of the phenomenon that he and others call The Intelligence Explosion. But on the other, it was constantly grating to hear him describe it as something coming in the future. It is a perfect description of human cumulative culture, of how it differs from the cultures of other species, and of why it is dangerous. And there is no question that human culture is both wonderful and dangerous – we kill millions of people, and extinguish other species and our own languages, with our weapons, our pollution and our sheer competence at expanding winning strategies.
You could attribute this all to AI if you like. I've realised lately that the essential problem I have with arguing with people about AI ethics is that they confound intelligence with sentience. Of course this is a matter of semantics, but I much prefer to think of intelligence as any form of plastic, adaptive capacity to change behaviour in response to perceived changes in the environment. This means even plants are intelligent. AI already exists, even though it just plans, sorts, or searches without motivation other than that provided by its programmer. And if you are willing to accept that, then you might accept that the first AI, which triggered the Intelligence Explosion, was writing. Writing provided out-of-mind memory, allowing humans to safely become more innovative because their old ideas wouldn't be lost forever if their present ideas got corrupted.
Why do I want to adopt such a weird definition of intelligence? Partly to avoid redundancy – "sentience" already means sentience, so why should "intelligence" have to mean it too? But more importantly, to communicate why AI isn't going to take over the world by itself. If anyone takes over the world with AI, it will be people. People are the only moral / responsible agents in our culture; they are the ones whose behaviour we should be working to control. Waiting around to declare some machine sentient and only then worrying is a bad plan.
And partly to point out that the threats of AI are not in the future. They are in the present. The world has changed since we lost privacy by anonymity. We need privacy by legislation, or we will lose our democracy and our civil society. The instability of the financial system and our capacity to build nuclear weapons also come as part of our exploitation of computer-based intelligence.
But this is not to say we need to start panicking. Another thing Dewey misses (and I do like him, sorry Dan :-) is that there's been some very good work on how to handle errors in AI. Originally, in the 1950s, AI researchers thought that machines, being unemotional, could make perfect plans and do everything optimally, so bugs could be banished. But by the 1980s we understood that some problems are just too hard to solve (we computer scientists call this "computationally intractable"), and that there are good reasons that evolved, natural intelligence takes all the shortcuts it does, including emotions. One of the things the brain does is recognise and attend to errors after they are produced. Erann Gat (now Ron Garret), in his brilliant PhD dissertation, showed how cognizant failures could be used in reactive (dynamic, new, cognitive, pick your adjective) AI for autonomous systems.
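To make "cognizant failure" a little more concrete, here is a minimal sketch in Python of a reactive control loop that monitors its own progress and reports how it failed rather than blundering on. This is only an illustration under my own assumptions: the functions and thresholds (move_toward, navigate, the stall count) are invented for the example, not Gat's actual architecture or any real robot controller.

```python
# Minimal sketch of "cognizant failure" in a reactive control loop.
# Illustrative only: the names and numbers here are invented for this example.

import random


def move_toward(position, goal):
    """Primitive behaviour: take one noisy step toward the goal."""
    step = 1 if goal > position else -1
    # Simulate an unreliable actuator: sometimes the step has no effect.
    return position + (step if random.random() > 0.3 else 0)


def navigate(position, goal, max_steps=20):
    """Reactive layer: act, monitor progress, and notice when it is failing.

    Instead of assuming the plan is perfect, the loop checks its own progress
    and returns an explicit failure signal that a higher layer can act on.
    """
    stalled = 0
    for _ in range(max_steps):
        if position == goal:
            return position, None              # success
        new_position = move_toward(position, goal)
        if new_position == position:
            stalled += 1
            if stalled >= 3:
                return position, "stalled"     # cognizant failure: no progress
        else:
            stalled = 0
        position = new_position
    return position, "timeout"                 # cognizant failure: ran out of steps


if __name__ == "__main__":
    pos, failure = navigate(position=0, goal=10)
    if failure is None:
        print("reached goal at", pos)
    else:
        # The system knows that it failed and how, so a higher layer
        # (or a human) can pick a recovery strategy.
        print("failed:", failure, "at position", pos)
```

The point of the sketch is just the shape of the idea: the system doesn't try to be infallible, it tries to detect and name its own failures so that something else can respond to them.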
I still think cognizant failure is a critical component of any truly autonomous system, whether it's a robot or our society. The question I'm personally most agitated over is why it is so hard for society to become cognizant of its own dangerous failures and to unify sufficient support behind correcting them. Like, for example, our current problem with privacy. But we have done a pretty good job in the past, for example in damping the threat of nuclear and chemical weapons, so hopefully we'll continue to get on top of this. But AI as some weird autonomous, sentient thing to worry about in our future isn't really, in my opinion, the most helpful set of concepts to promote.
This post got turned into a talk: Containing the intelligence explosion: the role of transparency. Slides & video are both available there.
Comments
The future is difficult to predict, and it is becoming increasingly more so as we approach the "singularity", i.e. a point of inflexion where AI growth becomes exponentially visible; most humans think linearly.
One consideration you need to address in the above arguments is: in what form will humans / sentient beings exist?
E.g. will humans evolve to have fully android bodies (as I believe, especially if we want to colonize space)? Will they exist in software, with "robots" doing our work, i.e. exploring our environment / universe and providing us with energy (think The Matrix)?
Or will we evolve into two species (think The Time Machine, with a twist): organic but genetically enhanced vs. synthetic (my preferred prediction at present)?
What are the repercussions of two intelligent humanoid species with vastly different capabilities?