What we lost when we lost Google ATEAC

This picture links to an excellent article on facial surveillance, which is used in Chinese classrooms, including Uighur "reeducation" camps.
In a few weeks, the Advanced Technology External Advisory Council (ATEAC) was scheduled to come together for its first meeting. At that meeting, we were expected to "stress test" a proposed face recognition technology policy. "We were going to dedicate an entire day to it" (at least a quarter of the time Google expected to get out of us). The people I talked to at Google seemed profoundly disturbed by what face recognition could do. It's not the first time I've heard that kind of deep concern – I've also heard it in completely unrelated one-on-one settings from a very diverse set of academics whose only commonality was working at the interface of machine learning and human-computer interaction (HCI). And it isn't just face recognition. It's body posture, the acoustics of speech and laughter, the way a pen is used on a tablet, and (famously) text. Privacy isn't over, but it will never again be present in society without serious, deliberate, coordinated defense.

I wanted to ask whether Google was really only thinking about face recognition, or whether their policy covered these other means of learning about individuals. Of course, I also wanted to know what, if anything, new Google had discovered. But I only had so much time, so I asked questions that would quickly let me assess whether the loss of ATEAC was really a loss. Those questions were basically "Who do you think I am?" and "What do you think I could have done?" I asked them because I wanted to assess how much Google knew and how realistic their expectations were, and I know more about myself than about any other metric I could think of.

For Google (and therefore all of us, since we're all affected by Google), part of the serious, deliberate, coordinated defense against the negative impacts of AI is their own internal policy. The tech giants in general realize that while governments have a critical role in determining and enforcing what's legal, the companies have their own responsibilities, challenges, and affordances as immensely powerful transnational forces. As Brad Smith said at Aspen in 2017 (and probably in a lot of other places; I'm just paraphrasing here), "The government is important and we respect that, but they have to recognize that we are the battlefield. The war is being fought on us." In other words, tech has a responsibility to itself and to the rest of us to act immediately on the information it acquires. Tech must comply with the law when the law comes, but it can't just sit and wait for the law to come.

What Google wanted from ATEAC was to "stress test" the policy they'd come up with internally. They said they chose their external advisory council on the basis of several factors:

  • Knowing things that Google doesn't know or do in-house.
  • Diversity, sampling across broad spectrums. (The Googlers I knew said the company doesn't believe in binaries, so no one was meant to represent a particular class).
  • Being extremely likely to be forceful, clearly articulated, and critical – to say exactly what they thought regardless of implications, political correctness, etc.
  • Yet also, being the kind of people who could sit down at a table and listen, who cared enough about being right to update their positions when they learned new things, and who were sufficiently respectful and cordial that all voices would be heard.

I have to admit that I hadn't paid a lot of attention when the email came that finally revealed to us the other council members – it was important, but as usual I was busy, so I put off looking into it until immediately before the meeting. When things started blowing up on social media, I spoke with everyone on the committee I didn't already know – either directly, or about them with someone who worked in their sector. I'd guessed before talking to my contacts that Google had chosen people who were outspoken, controversial, and articulate. As far as I can tell, Google put together exactly the advisory council they wanted, and there would have been excellent, well-informed stress testing of their policy. They might possibly even have gotten a consensus to act on. We'll never know.

Google presumably still has its un-stress-tested policy, and they will probably think of another way forward on getting external advice. But I believe everyone – especially members of the sort of disadvantaged communities that were used as the reason to attack ATEAC – will be less safe next month than they would have been if the council had met.



Things to note: It was an advanced technology external advisory council, not an ethics board. There was no reason for everyone to know about ethics, or for everyone to know about ML or policy or law or philosophy. In fact, the point was for us to know as many different things as possible, including having lived experiences, with respect to our individual identities, that were as different as possible. There was never meant to be executive control of Google by the council. The council was meant to think differently than Google – to be people Google would never have in house.

Related recent post: Bullying and shunning are problems, not solutions.


Addendum (19 April 2019): a new Nature Human Behaviour paper (not by me!), The wisdom of polarized crowds.
