When Demis Hassabis said he would join Google only if they didn't work with the US military, I told BBC Newsnight that this was a red herring. "Murder kills five times more people than war" was a short way of saying there's a lot more to ethics than just avoiding the military; an obvious example is avoiding selling arms to paramilitaries (or schoolchildren). In fact, many military officers really are major advocates for peace and stability, including policies like reducing developing-world diseases and poverty, because disease and poverty contribute to instability.
But by far the worst and most disturbing thing I heard in the many AI policy meetings I was invited to attend last year was in the only one in the US. That was the Artificial Intelligence and Global Security Summit on Nov 1, 2017 in Washington, DC, hosted by the Center for a New American Security. (Note: the CNAS have videos and transcripts of all the talks and discussions linked on that page. Edit: Here's my question and Eric Schmidt's answer, thanks, Bjørn Braum.)
Despite it supposedly being a global security summit, no one communicated any awareness of the wider world of interacting nations, all of them deeply concerned with the many aspects of AI policy that have implications for security. This was in marked contrast to the other meetings on AI policy I attended, hosted by the likes of the OECD, Montreal, the UN, the Red Cross and Chatham House, even the commercial World Summit AI. In the DC meeting the only other country really mentioned was China, and then not as a government or a people or a culture or a set of needs and desires, but as some kind of abstract singularity-vortex notion of a competitor who might develop some aspect of AI before the USA.
But even more disturbing – the most frightening thing I witnessed that year – was a small number of immensely powerful people saying that we need to replace people with AI. The first person to say that was Eric Schmidt, who said we should replace the military's "valuable eyes" with an algorithm. According to the transcript, I challenged that in the very first question in the Q&A:
Audience Question: Hi, thank you. Thanks a lot for your talk. Joanna Bryson. I loved most of it, but of course I’m going to pick on the one piece that worried me. You were talking about replacing large numbers of military people with an AI vision system, could that be a single point of failure? Could that be something that could be hacked or manipulated? Whereas, you know, thousands of soldiers would presumably whistle-blow if something weird was going on.

I remembered I'd challenged him, but I hadn't remembered I'd gotten that whole point out until researching this blog post. I've been lying awake nights thinking I should find time to blog about this discussion ever since the meeting. But Schmidt was not the only person saying this, nor the scariest. One of the several US generals in the room loudly and repeatedly said that we just can't trust people, that they are erratic and unreliable, and that we need fewer of them. This is a spectacularly dangerous thing to say.
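As an aside, the redundancy point in my question can be made quantitative. Here's a minimal sketch with purely hypothetical numbers, assuming each observer independently has some small chance of reporting manipulation: silence becomes exponentially unlikely as independent observers multiply, while one compromised system stays silent forever.

```python
# Back-of-the-envelope model of the redundancy argument (all numbers
# hypothetical): if each of n independent observers has probability
# p_report of noticing and reporting manipulation, the chance that it
# goes entirely unreported is (1 - p_report) ** n. A single compromised
# algorithm is equivalent to one observer with p_report = 0.

def prob_unreported(n_observers: int, p_report: float) -> float:
    """Probability that manipulation goes entirely unreported."""
    return (1.0 - p_report) ** n_observers

# One hacked vision system that reports nothing: detection never happens.
print(prob_unreported(1, 0.0))       # 1.0

# Even if any one soldier has only a 1% chance of blowing the whistle,
# a thousand independent pairs of eyes make silence vanishingly unlikely.
print(prob_unreported(1000, 0.01))   # ~4.3e-05
```

Independence is of course an idealisation, but the direction of the effect is the whole point: many eyes fail very differently from one.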
I still don't really have time to blog, but there are four things I need to say today:
- The awesome James Vincent at The Verge wrote an article confirming Gizmodo's account that Google is in fact directly helping the US military replace their valuable, unhackable, and possibly whistle-blowing eyes.*
- I had heard that Schmidt was out of Google, so I had hoped they had excluded him for his new military bent. I see he stepped down from both the Google and Alphabet boards, the latter a month after I saw him, to "focus on philanthropy" – and technical advising. Actually, he's the chair of the Pentagon's Defense Innovation Advisory Board.
- The original version of the incredible Carole Cadwalladr exposé, The great British Brexit robbery: how our democracy was hijacked (which is also about Trump), links Schmidt to Cambridge Analytica through his daughter. I've heard that the Guardian keeps shrinking the online version of that article because they can't afford to fight lawsuits, but you can find the original in your library or on the Wayback Machine, or just search for Schmidt on this blog page.
- Facebook went back to lots of real human editors in December 2016. As I wrote over a month before I saw Schmidt, every organisation should have a core of diverse humans who talk directly to each other, creating a chain from the general public to their chief executives. Otherwise we are at risk.
To end on a positive note: all the other policy events I attended in 2017, besides the DC event, were great and showed significant improvement over 2016. One of the best things I saw in 2017 was the OECD's work on AI as part of their Going Digital project. There are a lot of AI hype machines, run by under-qualified people unfortunately often associated with leading universities, that get picked up by the press (and tech billionaires). Don't get confused. The OECD and other organisations are actually giving governments good advice, and progress is being made.
* James Vincent is awesome not only because of his writing on AI ethics, but because he also forced me to acknowledge Gizmodo for their story, which he links to in his post anyway. Again, it's easy to get down on journalists or government workers, but most of them are good people making the world work.