We often hear that AI is the new electricity and data the new oil. The former at least is kind of true. AI
is already pervasive in all of our digital artefacts and interactions. And to be fair, so is the latter. Data is dangerous to store, prone to leaking, and might lead to regulatory capture and political instability.
But I want to focus on a different analogy today, because I'm tired of people thinking that the problems of AI regulation can be solved by "AI 4 Good" projects. Look, of course it's good to be good. And since AI is now a pervasive technology, of course you can use AI to help you be good. And of course we frequently use regulation to improve our societies. But...
1. No amount of "good" applications counterbalances "bad" applications.
Let's pretend for this first point that electrical energy from nuclear power plants is entirely clean, with no bad outcomes, no nuclear waste, etc. (foreshadowing: I won't make this assumption in the second point.) No matter how much clean, planet-saving electricity we could get from a world full of fully safe nuclear power plants, we would still need to regulate nuclear weapons and do everything we can to avoid nuclear war. Similarly, all the good things we can and will do using AI don't mean we don't have to worry about regulatory capture by technology companies, loss of privacy or freedom of thought, the elimination of the capacity to build viable political opposition parties, and so forth.
2. Even "good" applications create hazardous byproducts.
Image: The USA helpfully training their troops by giving them direct experiences of atomic blasts (source: Wikipedia).
Similarly, I don't know whether all the entertainment, education, security, and economic benefits of AI can outweigh the dangers of having the capacity to identify who is going to vote how, who is most likely to join a military or a militia or create an opposition party or blow a whistle. I'm pretty sure, though, that these are problems we need to guard against anyway – a lot of regimes just throw all opposition (and often all academics) into jail, or kill them. They haven't historically needed AI for that.
So this is why I'm Professor of Ethics and Technology at the Hertie School now: to work on ensuring that the world we live in has the capacity to prevent bad applications of AI and to head off its unintentional bad outcomes, as well as those of other (particularly digital) technologies. It's great that people want to do good things with AI – in fact, I personally use AI to build models to try to head off bad outcomes. But limiting, mitigating, and hopefully eliminating bad outcomes absolutely must be an essential component of AI regulation and governance. No amount of other good projects can outweigh that.
Update, October 2023: I now have an article on this, Going Nuclear? Precedents and Options for the Transnational Governance of Artificial Intelligence (with David Backovsky, for Horizons). I also wrote a whole blogpost on global AI governance.
Update, 28 April 2021: There has been a somewhat predictable debate in the comments, which we hashed out on Twitter. Both solar and wind power are now insanely cheap, and the technology is something like 95% reusable after the lifetime of the product, so truly sustainable. Sorry, nuke fans! Just look at this report from Lazard.
Image from Lazard report (USA data).
Update, 29 April 2021: Luke Stark made a similar point a couple of years ago (2019): Facial recognition is the plutonium of AI. I don't remember having read it, but I've got a (solicited) article in the same issue, so I may have. Such are brains – we don't remember all of the giants on whose shoulders we stand, and of course sometimes people just independently have similar ideas. Partly because there's no way to tell those two cases apart, in scientific writing you have to cite everyone who's done related work. But the main reason you do that (in science) is not attribution but completeness – for the benefit of the reader.
One of the most annoying parts of being a PhD supervisor is spending a couple of years trying to get someone to work on a problem, and then finally they decide to work on it but think it's their own idea. Which means you were failing to communicate, and what they remember is the moment they finally filled in the holes, which felt like insight to them. At least, I hope it means that, and not that PhD students are mean. :-)
Anyway, this is one reason I freak out when people have actual university students read my blogposts rather than my formal publications. Publications go through peer review, so not only I but also all my reviewers have had a chance to make sure that I'm tying into the right literatures. I think of blogs as more conversational and maybe pop-scientific. I sometimes put research ideas here that I know I'll never get around to working on, in the hope someone else will. But in most of my posts I'm just trying to spell something out and communicate clearly to some particular audience that I think presently has the wrong end of the stick.
Comments
Perhaps it helps to consider that more people died from radiation while creating the Manhattan Project than were killed by the bombs it produced. That's not a story often told, but it helps put to rest these odd notions that nuclear is safest and that people just aren't being scientific enough when it comes to risk, casualties, and total harms.