Definitions of some terms used variously in my field. My intent is to provide the definitions most useful for interpreting the five OECD Principles on AI, since that is the soft law with the broadest international support: 42 governments signed on initially (more since), plus the G20.
- Responsibility is a property assigned by a society to individuals for their actions (including inactions where action was the expected norm). Actors bearing responsibility are technically termed moral agents in philosophy. Which actors are held responsible varies by society. For example, a family may hold a child or a dog responsible for knowing where it is appropriate to pee, but a government will only hold legal persons responsible. In the case of a family, those will be the adult humans in the household.
- Accountability is the capacity to assign responsibility to the correct agency. The purpose of accountability and responsibility is to maintain social order, that is, to maintain the society. Therefore responsibility is ordinarily assigned to those who can be held to account. The purpose of holding people to account is to persuade them, and others like them, to perform what a society considers to be their responsibilities for maintaining that society. Sometimes responsibility is determined to fall outside the control of any agency; in the USA, for example, weather events are frequently termed "acts of God."
- Transparency is the property of a system whereby it is possible to trace accountability and allocate responsibility. Where there is transparency, there is no need for blind trust. Formally, trust is the expectation of good behaviour afforded (by a truster) to others (trustees) where the trustee's behaviour is in fact unknown and where the trustee has genuine autonomy with respect to the truster. Transparency may create a sensation of trust, but it renders the formal state of trust unnecessary. Transparency is something that can and should be designed into an intelligent system.
There are a couple of important related concepts that I don't think have yet come together into accepted terms. One, which I call brittleness but have seen others call fragility, is the breakdown of the clear links between humans that are necessary for accountability. The other is explanation, which is a form of transparency. I have argued (see below) that the way you know you have enough of an explanation, or enough transparency, is when you can assign accountability.
Here are a few of my (or my collaborators') more formal academic writings on the above topics:
- Wortham, R. H., Theodorou, A., & Bryson, J. J. (2017). Improving robot transparency: real-time visualisation of robot AI substantially improves understanding in naive observers. In 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (pp. 1424-1431).
- Theodorou, A. (2019). AI Governance Through a Transparency Lens. Ph.D. thesis, University of Bath.
- IEEE P7001 - Transparency of Autonomous Systems (IEEE Standards Association).
- Bryson, J., & Winfield, A. (2017). Standardizing ethical design for artificial intelligence and autonomous systems. IEEE Computer, 50(5), 116-119.
- Bryson, J. J., & Theodorou, A. (2019). How Society Can Maintain Human-Centric Artificial Intelligence. In Human-centered digitalization and services (pp. 305-323). Springer, Singapore.
- Bryson, J. J., Diamantis, M. E., & Grant, T. D. (2017). Of, for, and by the people: the legal lacuna of synthetic persons. Artificial Intelligence and Law, 25(3), 273-291.
- Rauwolf, P., & Bryson, J. J. (2018). Expectations of Fairness and Trust Co-Evolve in Environments of Partial Information. Dynamic Games and Applications, 8(4), 891-917.
- Bryson, J. (2018). No one should trust artificial intelligence. United Nations University, Science & Technology: Innovation, Governance, Technology, 11, 14.
- It's (still) not about trust: No one should buy AI if governments won't enforce liability, blog from 23 November 2020.
- Responsibility and moral agents:
- Bryson, J. J. (2018). Patiency is not a virtue: the design of intelligent systems and systems of ethics. Ethics and Information Technology, 20(1), 15-26.