Should we let someone use AI to delete human bias? Would we know what we were saying?

Update: a draft of our paper is now available too (as of 24 August); see the related brief blogpost, Semantics derived automatically from language corpora necessarily contain human biases.  This work is now published in Science as of 14 April 2017; see links below.
As some of you will know, my colleagues and I got somewhat scooped.  Aylin Caliskan, Arvind Narayanan and I are sitting on a ton of results showing that standard natural language tools have the same biases as humans.  This is a huge deal for ethics and cognitive science, because it means that children could also learn bias just by learning language, though with children we can also give explicit instructions to consider every human to be just as important as they are.  Hopefully a draft of our paper will be available soon on arXiv.

However, even my computer-scientist friends on Facebook are sharing an article Tech Review wrote about the phenomenon as it was revealed by some awesome work by Microsoft & BU.  We had heard about that effort a couple of months ago – those guys have been working on this for years.  Of course, so have I: I've been giving talks about this model of semantics since 2001, and have published a few papers about it, notably Embodiment vs. Memetics (pdf, from Mind & Society, 7(1):77-94, June 2008), but also some work on how it relates to human biases that I've done with undergraduates.  Aylin and Arvind have moved that work way further than I could have on my own, even with the amazing students we get at Bath, and I look forward to sharing what we've done soon.

Figure from Embodiment vs Memetics (my 2008 paper): we learn what words mean by how they are used, then link some to concepts we've acquired from experience.

But I'm blogging now because Bolukbasi &al. were less interested in the cognitive science or the ethics of the bias in AI, and more interested in fixing it.  They have used machine learning techniques to automatically suppress the links that our culture has attached to historical regularities, like which industries were willing to employ women.
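For the curious, here is a minimal sketch of the kind of projection-based debiasing Bolukbasi &al. describe: estimate a bias direction from a definitional word pair and strip each vector's component along it.  The embedding dictionary and the word choices here are purely illustrative assumptions, not their actual pipeline.

```python
# Sketch of projection-based debiasing of word embeddings (illustrative only;
# the embedding dict `emb` and the word pair are hypothetical examples).
import numpy as np

def bias_direction(v_a: np.ndarray, v_b: np.ndarray) -> np.ndarray:
    """Estimate a bias direction from a definitional pair, e.g. 'he' - 'she'."""
    d = v_a - v_b
    return d / np.linalg.norm(d)

def neutralize(v: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Remove the component of v that lies along the bias direction."""
    return v - np.dot(v, direction) * direction

# Hypothetical usage, given emb: a dict mapping words to numpy vectors.
# g = bias_direction(emb["he"], emb["she"])
# emb["programmer"] = neutralize(emb["programmer"], g)
```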

This is a fascinating can of worms to open, and one that Aylin, Arvind & I had been discussing too.  If AI generates language using meanings that do not come directly from any human culture, then on the one hand that might bring about positive change.  We are language-absorbing machines; enough examples of "good" usage might change the way we talk and, to some extent, think.  But who do we want picking what our language gets changed into?  Traditionally, language change has been effected mostly implicitly, through evolution-like processes driven by both fashion and necessity, but also explicitly by influential academics, textbook and newspaper editors and publishers, and even government bodies like the Académie française.  Do we want new norms set now by technology companies?  Would such interventions be regulated by law?

Another problem: if AI tools don't use language the way we do, that will necessarily make AI more alien and harder to understand.  Maybe it's a good thing for a machine to be conspicuously a machine; that is a form of transparency.  But if the technology is being used to communicate in critical situations, it might be better to make it as comprehensible as possible.

So far, I think my coauthors and I have been more focussed on using technology to encourage human culture to change itself deliberately – using AI as a tool to provide people with the metrics to do that.  What Bolukbasi &al. are suggesting instead is that AI should be a source of injected, subliminal cultural change.
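To make the contrast concrete, here is a minimal sketch of the kind of metric I mean: a measure of how much more strongly a word's vector associates with one attribute set than another, via cosine similarity.  The word lists and the embedding dictionary are hypothetical assumptions for illustration, not our published test.

```python
# Sketch of a bias metric over word embeddings (illustrative only; the
# embedding dict `emb` and the attribute word lists are hypothetical).
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(target: np.ndarray, attrs_a: list, attrs_b: list) -> float:
    """Mean cosine similarity to set A minus mean similarity to set B."""
    return float(np.mean([cosine(target, a) for a in attrs_a])
                 - np.mean([cosine(target, b) for b in attrs_b]))

# Hypothetical usage: a positive score means "nurse" sits closer to the
# first attribute set than the second in this embedding.
# score = association(emb["nurse"],
#                     [emb[w] for w in ["she", "woman", "her"]],
#                     [emb[w] for w in ["he", "man", "his"]])
```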

That's a big difference.  I look forward to the debate!  But right now I'm getting back to the papers I'm writing.

Related publications:

Update April 2017: The MS/BU paper and ours hit arXiv about the same time last year.  The MS/BU one ultimately got into NIPS.  Our paper is now in Science.  I just found out about another related paper that came out in Cognition a few weeks ago.

Other AiNI blogposts on this work:
