Concerns about Empathy and AI

I'm serving on the IEEE policy board that's trying to put together recommendations for Artificial Intelligence Ethics.  Here are two paragraphs that are at least draft outputs of the two subgroups I'm chairing – subgroups of the Affect group.  I'm very rushed now (flying to The Hague for ECAI) but thought it was worth posting these even without enough comment.  I should say the seeds of the autonomy concern came from notes during my visit to Miles Brundage at ASU just before AAAI last January – I've been meaning to blog more fully about that ever since; I still hope to.

Concerning the Feelings of AI:

Deliberately constructed emotions are designed to create empathy between humans and artefacts, which may be useful or even essential for human-AI collaboration.  However, this could lead humans to falsely identify with the AI, and therefore fail to realise that – unlike in evolved intelligence – synthetic emotions can be compartmentalised and even entirely removed.  Potential consequences are over-bonding, guilt, and above all misplaced trust.  Because there is no coherent sense in which AI can be made to suffer (that is, made to permanently alter its behaviour due to aversive affective experiences), AI cannot be allocated moral agency or responsibility in the senses designed for human sociality.  We recommend that AI not be legally considered or marketed as a responsible agent, and that its intelligence, including its emotional systems, be made transparent – not necessarily in real time, but such that their workings are available for inspection by any concerned and responsible parties.


Concerning Human Flourishing and Autonomy:

Our greatest current concern is a potential catastrophic loss of individual human autonomy.  As humans are replaced by AI, corporations, governments, etc. can eliminate the possibility of employees and customers discovering new equilibria outside the scope of what the organisations' executives foresaw.  This can be seen conspicuously if you imagine someone yelling "call the police!!" at a ticket machine or ATM.  It is also fairly obvious in the case of whistleblowing.  But we wish to emphasise that even in ordinary everyday work this disadvantages not only liberty but also corporations and governments in their primary business, by eliminating opportunities for useful innovation.  Collaboration requires sufficient commonality between collaborating intelligences to create empathy – the capacity to model the other's goals based on your own.  Although there will be many cases where AI is less expensive, more predictable, and easier to control than human employees, we recommend maintaining a core number of human employees at every level of management with easy access to each other, and using these to interface with customers and other organisations.


See further on this latter concern about autonomy:
