Responsible AI

I actually hate the term "responsible AI" because, to most people, it obviously sounds as though the AI itself is responsible. But if you're going to use the phrase anyway...

Responsible AI is not a subset of AI. Responsible AI is a set of practices that ensure we can maintain human responsibility for our endeavours despite having introduced technologies that might otherwise allow us to obfuscate lines of accountability. Transparency must be a characteristic of all digital systems. Developers and operators of AI systems must ensure adequate means to trace accountability through their systems, so that any problem can be identified as the result of unfortunate accident, negligence, or malicious intent, and so that responsibility can be traced to the accountable entity, whether developer, owner/operator, or criminal entity. The burdens of providing transparency should be proportionate to the economic impact of the system.
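
To make "tracing accountability through a system" slightly more concrete, here is a minimal sketch, purely illustrative and not anything prescribed above, of how an operator might stamp each automated decision with the entities accountable for it. The class, function, field, and entity names are all hypothetical assumptions.

```python
# A minimal, illustrative sketch (not from the text above) of one way an
# operator might attach accountability metadata to each automated decision,
# so that problems can later be traced back to an accountable entity.
# All names and fields here are assumptions, not a prescribed standard.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AccountabilityRecord:
    decision_id: str     # identifier of the system output being logged
    model_version: str   # which deployed component produced the output
    developer: str       # entity accountable for building the component
    operator: str        # entity accountable for running the system
    inputs_digest: str   # hash of the inputs, for later reconstruction
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )


def log_decision(audit_log: list, record: AccountabilityRecord) -> None:
    """Append the record to an append-only audit log (here, a plain list)."""
    audit_log.append(record)


# Usage: every automated decision is stamped with the accountable parties.
audit_log: list[AccountabilityRecord] = []
log_decision(
    audit_log,
    AccountabilityRecord(
        decision_id="loan-2024-0001",
        model_version="credit-scorer-1.3",
        developer="Example Models Ltd",
        operator="Example Bank plc",
        inputs_digest="sha256:<digest-of-inputs>",
    ),
)
```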

See also Responsibility, Accountability, Transparency (trust, brittleness, explanation) for detailed definitions of these terms.
