Is There a Human-AI Relationship?

I was asked to make a "short intervention" kicking off a workshop that was part of the Tilburg Institute for Law, Technology, and Society (TILT)'s 25th Anniversary celebration; specifically, it was part of their September celebration, which was entitled:
    TILT 25 - Reconfiguring Human-AI relationships 

    Mine was the first intervention. Here's a slightly elaborated version of my notes for that talk.
    Tilburg University – spot the ambient AI
    • I forgot to say earlier [when we'd been asked as part of introductions] why I came to this workshop / TILT. Actually, I said "no" the first time I was emailed, maybe the first few times they asked, but I also (like Andreas Theodorou) love the NL, and Europe more generally, and TILT (especially Aviva de Groot) were very persuasive that they needed me for the projects they had in mind. Also, I was looking forward to working with John Danaher; he's someone I can't really work well with just on Twitter in little snips, but I think we could make good progress if we worked together in person. I wish him the very best with the activity that took him away from this meeting.
    • I'm inclined to say that there's only one real relationship between AI and humans: that of moral agents to their artefacts, the relationship of authorship. As we heard in Merel Noorman's excellent introduction, there are enormous problems with those pretending to treat artefacts as equals; it makes us wonder whether they really consider people of other genders or races as equals at all. This is what I've always said in the past, but I realise now that I've been wrong. I'll come back to how I've been wrong, but first let me elaborate on why I'm still inclined to say that kind of thing.
    • Cooperation is ordinarily a relationship between equals. You don't cooperate with your dog, and you don't really cooperate with small children. You tell children to cooperate with you, and you work around them, but you are the planner. Do the people who pretend AI is an equal even realise that perhaps they are also pretending they take other people as seriously as they take themselves? 
    • But our relationship to AI is nothing like our relationship to dogs and children. We author AI fundamentally, so there is a difference in responsibility; but also, when we create machine intelligence we have the potential to create a completely different set of goals, representations, even abstractions. Even if machines learn for themselves in exactly the contexts we do, they create totally different models with totally different blind spots and gaffes that we cannot actually access from common sense or "empathy". CMU scientists have been running around saying that their staged machine-vision hacks with "adversarial glasses" demonstrate that there is something weird and unpredictably fragile about AI and ML. This is nearly true; in fact, all intelligence and learning is like this, it all has odd nonlinearities. The difference is that we've had millions of years to get used to, and evolve hacks around, the mistakes organisms like ourselves make. We therefore really cannot reason about machines as we do about each other, by trying to imagine how we would feel.
    • I get why people want relationships with AI. People feel safer with those they can empathise with, and people are strongly motivated to create new life, to persist, and to project into the future. (This is incidentally why empathy is actually a terrible source of ethical intuitions: it biases you towards those more like yourself.) People want powerful allies and super-successful children. We are programmed by biology to want that, and so we are very likely to project it onto machines, particularly when marketers are exploiting these desires.
    • Here I want to come back to my initial inclination, to say it is entirely authorship. That inclination shows my complete failure of empathy for the vast majority of humanity. Most people are not authoring their AI at all. They are encountering it as an other, or else having it pervade their space without even realising it is altering their capacities and the capacities of others. In fact, the vast majority of AI is "transparent" in the HCI sense, which means invisible, which I have argued is not transparent at all. And anyway, the vast majority of the AI any of us use, we didn't write ourselves; that's true even of developers at Google or Microsoft. Ambient AI is pervading and altering all of our lives.
    • Nevertheless, the words "relationship" and "companion" have no place in discussing how technology alters our lives. I got famous in AI ethics for the title "Robots Should Be Slaves". That title was written for an article because the title of the book was "Artificial Companions", and I thought that was very wrong. Nearly the first sentences of the article say: look, 'companions' is the wrong metaphor for things we buy and sell. [Of course, 'slave' is the wrong word for something that's not a person. I just thought we'd all agreed that people should never be owned / enslaved, but I was wrong that that agreement changed the word's meaning.] In my mind, companionship and moral relations are negotiated equilibria established between equals, or at least near-equals. [In the Q&A, we discussed that companionship may extend to subordinates who are sufficiently near-equal / powerful to require negotiation rather than direct control. So, for example, a family may think of their dog as a companion and negotiate with it, but the city or state only negotiates with the adult human parts of a household.]
    • But this may be another way I was wrong: in allowing corporations to hide their efficacy through AI behind our desire to anthropomorphise, we may be relegating humans down to the level of machines. We are making ourselves ever tinier and more regular cogs in enormous institutional engines, as Frischmann & Selinger recently described. This is something we should combat by regulating for the accountability of corporations, governments, and owner/operators, and for the transparency of the technology that supports that accountability. If this is true, then we are actually becoming more equal to AI; but on the other hand, we are also losing our capacity to really freely negotiate, so that may still not make us companions to it.
    • With respect to anthropomorphism, my old concern was that it prevents people from understanding AI. I was hoping that as we learned tricks for increasing transparency, we could worry less about anthropomorphism and embrace these metaphors with no more hazard than gendered languages carry for sexism. My new concern, since coming to TILT and talking with people like Olya Kudina and Silvia De Conca for a week and a half, is that we are programmed by biology such that we will necessarily be affected implicitly by anthropomorphised AI. Even NLP (particularly speech generation) may be convenient in the moment, but deadly to democracy and human flourishing.
    • Possibly there's no alternative. Maybe we are doomed by biology to become ever more like cells and bacteria running around in the "bodies" of nations, religions, corporations, and so forth. I hope not. I hope we can harness technology to defend our liberty and extend our capacities and autonomy. Even natural selection has found ways to defend diversity, because diversity generates agility, which is useful for robustness.
    • So I have several new realisations:
      1. that most humans do not author AI, 
      2. that we may be becoming more peer to AI, and 
      3. that we are altered by the anthropomorphic implicitly and inevitably. 
      Nevertheless, I am now even more certain that we should never talk about trust, cooperation, or responsibility in the context of interacting with intelligent artefacts. But I also think we need to be working very, very hard on how to accommodate, as well as correct, the fact that these technologies are presently being presented as companions, and are altering what it is to be human.