We often hear that AI is the new electricity and data the new oil. The former at least is kind of true: AI is already pervasive in all of our digital artefacts and interactions. And to be fair, the latter holds up too: data is dangerous to store, prone to leaking, and can fuel regulatory capture and political instability.
But I want to focus on a different analogy today, because I'm tired of people thinking that the problems AI regulation is meant to address can be solved by "AI 4 Good" projects instead. Look, of course it's good to be good. And since AI is now a pervasive technology, of course you can use AI to help you be good. And of course we frequently use regulation to improve our societies. But...
1. No amount of "good" applications counterbalances the "bad" ones.
2. Even "good" applications create hazardous byproducts.
[Image: The USA helpfully training their troops by giving them direct experiences of atomic blasts. Source: Wikipedia.]
[Image from Lazard report (USA data).]
Update 29 April: Luke Stark made a similar point a couple of years ago (2019) in "Facial recognition is the plutonium of AI". I don't remember having read it, but I've got a (solicited) article in the same issue, so I may have. Such are brains – we don't remember all of the giants on whose shoulders we stand, and of course sometimes people just independently have similar ideas. Partly because there's no way to tell those two cases apart, in scientific writing you have to cite everyone who's done related research. But the main reason you do that (in science) is not attribution but completeness – for the benefit of the reader.