Don't be evil. (Don't be brittle.)

When Demis Hassabis said he would join Google only if they didn't work with the US military, I told BBC Newsnight that this was a red herring.  "Murder kills five times more people than war" was a short way of saying there's a lot more to ethics than just avoiding the military, an obvious example being avoiding selling arms to paramilitaries (or school children).  In fact, many military officers really are major advocates for peace and stability, supporting policies like reducing developing-world disease and poverty because disease and poverty contribute to instability.

But by far the worst and most disturbing thing I heard in the many AI policy meetings I was invited to attend last year came at the only one held in the US.  That was the Artificial Intelligence and Global Security Summit on Nov 1, 2017 in Washington, DC, hosted by the Center for a New American Security.  Note that CNAS has videos and transcripts of all the talks and discussions linked on that page.

Despite it supposedly being a global security summit, no one communicated any awareness of the wider world of interacting nations that are deeply concerned with the many aspects of AI policy that have implications for security. This was in marked contrast to the other meetings on AI policy I attended, hosted by the likes of the OECD, Montreal, the UN, the Red Cross, and Chatham House, and even the commercial World Summit AI.  In the DC meeting the only other country really mentioned was China, and then not as a government or people or culture or set of needs and desires, but more as some kind of abstract singularity-vortex notion of a competitor who might develop some aspect of AI before the USA.

But even more disturbing – the most frightening thing I witnessed that year – was a small number of immensely powerful people saying that we need to replace people with AI.  The first person to say that was Eric Schmidt, who said we should replace the military's "valuable eyes" with an algorithm.  According to the transcript, I challenged that in the very first question in the Q&A:
Audience Question: Hi, thank you. Thanks a lot for your talk. Joanna Bryson. I loved most of it, but of course I’m going to pick on the one piece that worried me. You were talking about replacing large numbers of military people with an AI vision system, could that be a single point of failure? Could that be something that could be hacked or manipulated? Whereas, you know, thousands of soldiers would presumably whistle blow if something weird was going on.
I remembered I'd challenged him, but I hadn't remembered that I'd gotten that whole point out until researching this blogpost. Ever since the meeting, I've been lying awake nights thinking I should find time to blog about this discussion. But Schmidt was not the only person saying this, nor the scariest. One of the several US generals in the room loudly and repeatedly said that we just can't trust people because they are erratic and unreliable, and that we need fewer of them. This is a spectacularly dangerous thing to say.

I still don't really have time to blog, but there are four things I need to say today:
The fourth point I need to make is the most important – it's the brittleness point I made to Schmidt above.  If you replace a lot of people with one algorithm, that algorithm becomes a single point of failure. It can be deliberately altered by a corrupt organisation, it can be compromised by outside hackers, or it can simply be worked around to generate fake news.  That's what happened when the Republicans convinced Facebook to replace their human editors with an algorithm before the Trump election. Note that I've deliberately chosen to link (from many available) a story from two months after the change – which immediately brought fake news and stories about fake news – and a month before the election. Facebook should have known and could have fixed this; they are absolutely complicit in the election outcome.  And that election has led to the US dismantling the infrastructure that defends the security of the US and of the rest of the world.  That link is the scariest thing I read (rather than personally witnessed) in 2017.
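The brittleness argument can be made concrete with a toy sketch (my own illustration, not anything presented at the summit; the "reviewers" and "stories" here are purely hypothetical). One compromised algorithm taints every decision that flows through it, whereas a panel of independent reviewers deciding by majority vote tolerates a compromised minority:

```python
def single_reviewer(items, reviewer):
    # Every decision flows through one algorithm: a single point of failure.
    return [reviewer(x) for x in items]

def diverse_panel(items, reviewers):
    # Majority vote among independent reviewers tolerates a minority of
    # compromised or faulty members.
    return [sum(r(x) for r in reviewers) > len(reviewers) // 2 for x in items]

honest = lambda x: x >= 0     # toy "is this story legitimate?" check
compromised = lambda x: True  # a hacked reviewer approves everything

stories = [3, -1, 7, -5, 2]   # negative values stand in for fake news

# The one hacked algorithm approves all the fake news.
print(single_reviewer(stories, compromised))

# A panel of three, one of them compromised, still gets every call right.
print(diverse_panel(stories, [honest, honest, compromised]))
```

This is of course a cartoon: real redundancy comes from people with diverse perspectives who can notice and report when something is wrong, not just from running more copies of the same check. But it shows why concentrating judgment in one place is structurally riskier than distributing it.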

Facebook went back to lots of real human editors in December 2016. As I wrote over a month before I saw Schmidt, every organisation should have a core of diverse humans who talk directly to each other, creating a chain from the general public to their chief executives.  Otherwise we are at risk.

To end on a positive note, all the other policy events I attended in 2017 besides the DC event were great and showed significant improvement over 2016. One of the best things I saw in 2017 was the OECD's work on AI as part of their Going Digital project.  There are a lot of hype machines talking about AI that get picked up by the press (and tech billionaires), run by under-qualified people who are, unfortunately, often associated with leading universities. Don't get confused. The OECD and other organisations are actually giving governments good advice, and progress is being made.

* James Vincent is awesome not only because of his writing on AI ethics, but because he also forced me to acknowledge Gizmodo for their story, which he links to in his post anyway. Again, it's easy to get down on journalists or government workers, but most of them are good people making the world work.