Concerning the Feelings of AI:
Deliberately constructed emotions are designed to create empathy between humans and artefacts, which may be useful or even essential for human-AI collaboration. However, this could lead humans to identify falsely with the AI, and therefore to fail to realise that, unlike in evolved intelligence, synthetic emotions can be compartmentalised or even entirely removed. Potential consequences are over-bonding, guilt, and above all misplaced trust. Because there is no coherent sense in which AI can be made to suffer (that is, made to permanently alter its behaviour due to aversive affective experiences), AI cannot be allocated moral agency or responsibility in the senses designed for human sociality. We recommend that AI not be legally considered or marketed as a responsible agent, and that its intelligence, including its emotional systems, be made transparent: not necessarily in real time, but with their workings available for inspection by any concerned and responsible parties.
Concerning Human Flourishing and Autonomy:
Our greatest current concern is a potential catastrophic loss of individual human autonomy. As humans are replaced by AI, corporations, governments, and other organisations can eliminate the possibility of employees and customers discovering new equilibria outside the scope of what the organisations' executives foresaw. This is conspicuous if you imagine someone yelling "call the police!" at a ticket machine or ATM. It is also fairly obvious in the case of whistleblowing. But we wish to emphasise that even in ordinary, everyday operations this disadvantages not only liberty but also corporations and governments in their primary business, by eliminating opportunities for useful innovation. Collaboration requires sufficient commonality between collaborating intelligences to create empathy: the capacity to model the other's goals based on your own. Although there will be many cases where AI is less expensive, more predictable, and easier to control than human employees, we recommend maintaining a core number of human employees at every level of management, with easy access to each other, and using these employees to interface with customers and other organisations.
For more on this latter concern about autonomy, see:
- Artificial Intelligence and Pro-Social Behaviour, a book chapter in the October 2015 Springer volume Collective Agency and Cooperation in Natural and Artificial Systems: Explanation, Implementation and Simulation, edited by Catrin Misselhorn. Open access: here's the post-review submitted version from September 2014, or email me for the corrected final.