Two ways AI technology is like Nuclear technology

We often hear that AI is the new electricity and data the new oil. The former at least is kind of true. AI is already pervasive in all of our digital artefacts and interactions. And to be fair, so is the latter. Data is dangerous to store, prone to leaking, and might lead to regulatory capture and political instability.

But I want to focus on a different analogy today, because I'm tired of people thinking that the problems of AI regulation can be solved by "AI 4 Good" projects. Look, of course it's good to be good. And since AI is now a pervasive technology, of course you can use AI to help you be good. And of course we frequently use regulation to improve our societies. But...

1. No amount of "good" applications outweighs "bad" applications.

Let's pretend for this first point that electrical energy from nuclear power plants is entirely clean, with no bad outcomes, no nuclear waste, etc. (Foreshadowing: I won't make this assumption in the second point.) No matter how much clean, planet-saving electricity we could get from a world full of fully safe nuclear power plants, we would still need to regulate nuclear weapons and do everything we can to avoid nuclear war. Similarly, all the good things we can and will do using AI don't mean we can stop worrying about regulatory capture by technology companies, loss of privacy or freedom of thought, the elimination of the capacity to build viable political opposition parties, and so forth.

2. Even "good" applications create hazardous byproducts.

[Image: US military personnel watching a nearby nuclear explosion in a desert. Caption: The USA helpfully training their troops by giving them direct experiences of atomic blasts. Source: Wikipedia.]
The truth, however, is that nuclear power plants can be used to create material for nuclear weapons, pose enormous ecological risks, as we've seen at Chernobyl and Fukushima, and produce extremely dangerous waste. It's been suggested that if you take into account the costs of construction and decommissioning, no nuclear power plant has ever made money. Germany has been shutting down its nuclear power plants. I honestly don't know which is the bigger threat to our wellbeing: coal-powered electricity or nuclear electricity. I'm pretty sure wind and solar are better than either. If there's no way to sustainably generate electricity, ultimately we will have to stop using it – that's what "sustainably" means; we won't have a choice. In the near term we have to work on reducing how much electricity we use, on reducing how much damage we do when we produce the electricity we do use, and on counteracting that damage.

Similarly, I don't know whether all the entertainment, education, security, and economic benefits of AI can outweigh the dangers of having the capacity to identify who is going to vote how, who is most likely to join a military or a militia or create an opposition party or blow a whistle. I'm pretty sure though that these are problems we need to guard against anyway – a lot of regimes just throw all opposition (and often all academics) into jail, or kill them. They haven't historically needed AI for that. 

So this is why I'm now Professor of Ethics and Technology at the Hertie School: to work on ensuring that the world we live in has the capacity to prevent bad applications of AI and to head off its unintentional bad outcomes, and to do the same for other (particularly digital) technologies. It's great that people want to do good things with AI – in fact, I personally use AI to build models to try to head off bad outcomes. But limiting, mitigating, and hopefully eliminating bad outcomes absolutely must be an essential component of AI regulation and governance. No amount of other good projects can substitute for that.

Update, October 2023  I now have an article on this: Going Nuclear? Precedents and Options for the Transnational Governance of Artificial Intelligence (with David Backovsky, for Horizons). I also wrote a whole blogpost on global AI governance.

Update, 28 April 2021  There has been a somewhat predictable debate in the comments, which we hashed out on Twitter. Both solar and wind power are now insanely cheap, and the technology is something like 95% reusable after the lifetime of the product, so truly sustainable. Sorry, nuke fans! Just look at this report from Lazard.

[Image from the Lazard report (USA data).]

Update, 29 April  Luke Stark made a similar point a couple of years ago (2019): "Facial recognition is the plutonium of AI". I don't remember having read it, but I've got a (solicited) article in the same issue, so I may have. Such are brains – we don't remember all of the giants whose shoulders we stand on, and of course sometimes people just independently have similar ideas. Partly because there's no way to tell those two cases apart, in scientific writing you have to cite everyone who's done related research. But the main reason you do that (in science) is not attribution but completeness – for the benefit of the reader.

One of the most annoying parts of being a PhD supervisor is spending a couple of years trying to get someone to work on a problem, and then when they finally decide to work on it, they think it's their own idea. Which means you were failing to communicate, and what they remember is the moment they finally filled in the holes, which felt like insight to them. At least, I hope it means that and not that PhD students are mean. :-)

Anyway, this is one reason I freak out when people have actual university students read my blogposts rather than my formal publications. Publications go through peer review, so not only I but also all my reviewers have had a chance to make sure that I'm tying into the right literatures. I think of blogs as more conversational and maybe pop-scientific. I sometimes put research ideas here that I know I'll never get around to working on, in the hope someone else will. But in most of my posts I'm just trying to spell something out and communicate clearly to some particular audience that I think presently has the wrong end of the stick.

Comments

Valdis said…
I don't know how dangerous AI could be, but nuclear power is the safest form of energy production, even considering probable cases of additional cancers from increased radioactivity (according to the linear no-threshold model, https://en.wikipedia.org/wiki/Linear_no-threshold_model, which is highly doubted by the scientific community). Please learn more at https://ourworldindata.org/nuclear-energy, https://whatisnuclear.com/, http://www.unscear.org/unscear/en/chernobyl.html and http://www.unscear.org/unscear/en/fukushima.html
Joanna Bryson said…
Thanks Valdis. I do know that coal power is really horrific, and as a technophile I really hope that both nuclear power and AI can be used to help us make a sustainable future. I also know that solar power at least also has harmful chemical byproducts. I hope better transparency for informed decisions will be a positive outcome of the information age; that's one of the things we're fighting for (cf. https://joanna-bryson.blogspot.com/2016/06/knowledge-is-power-truth-in-information.html). The question is what living in society will feel like once we've achieved that. I believe it's likely to vary a lot by jurisdiction and political approach.
Anonymous said…
Saying nuclear power is the safest seems wildly inaccurate and misleading. It's always based on wild assumptions about quality control, which end up forming a logical fallacy (a tautology). If absolutely everything is done perfectly by the nuclear industry, then nuclear is safest, sure. Except that's so unrealistic as to be fantasy talk -- the many nuclear accidents are the obvious proof and counterpoint. And whether you go high or low on the casualty counts (high being well over 1000X the low numbers), the point is that these nuclear safety counts are extremely messy and imprecise. You can't claim to be safe on the basis of absolutely precise methods and then generate wildly varying estimates of harm. And the harms have been common: instead of one or fewer core damage events (based on nuclear industry projections), in reality there have been at least eleven. Three Mile Island and Chernobyl were both a function of human error, and the risk models have failed spectacularly to account for human error.

Perhaps it helps to consider that more people died from radiation while creating the Manhattan Project's bombs than were killed by the bombs themselves. That's not a story often told, but it helps put to rest these odd notions that nuclear is safest just because people aren't being scientific enough when it comes to risk, casualties, and total harms.