Invisibility Is Not Transparent

I'm attending (by invitation) the Applied Machine Learning Days at EPFL, and I've just heard my Princeton colleague Olga Russakovsky give her amazing talk about the many human issues of machine learning. One of her turns of phrase finally crystallised something that's been bothering me: the concept of establishing trust in AI.

Trust is a relationship between two people who do not have certain knowledge of or control over each other. It is the belief that someone who is effectively your equal or even your superior is playing by the long-negotiated rules our culture has established, rules that allow us to work in coherent groups. Knowing for certain how someone will behave actually reduces trust in human populations, because there is no longer any motivation to maintain the costly behaviours associated with true trust.

Transparent AI is AI that people can understand. Open source code for the AI algorithm is neither necessary nor sufficient for transparency in this sense. Being comprehensible doesn't mean showing the connections of your synapses; it means giving honest but abstract indications of your current status.

What Olga was talking about is that when you get an error from an AI system, and it's an error a human would never make (e.g. mistaking a person for a cat), you lose trust in the system. I know she has real, practical concerns about when her data annotators give up on their project. But addressing this problem by making the AI appear more human is the opposite of transparency. Transparency is letting people know that AI is an artefact, a system that's part of a business process or an academic project. AI can have perfect memory, and it may be connected to the cloud. If so, the data you give it becomes a business asset that can be bought by anyone, particularly if the company that owns it goes bankrupt.

The AI that monitors and processes our behaviour when we use Facebook, "intelligent speakers" (really, microphones), or even just a search engine is not human; it is in many ways superhuman. Making this invisible is not making AI transparent; it is making AI a potential hazard. If we want people to use AI safely, we should make it transparent, which includes making it apparent. If we ask people to trust AI, we are anthropomorphising AI. Anthropomorphising AI is not transparent; it's just wrong. No AI is a living ape; it's not a human being. An AI system's overall capacities will always be different from an individual human's capacities, however many individual human capacities we are able to replicate and include in a single system.

Olga Russakovsky answers questions in the EPFL Conference Center during the 2018 Applied Machine Learning Days.
