There are at least three broad categories of useful forms of explanation for AI:
- Explaining human actions that led to the system being released and sold as a product and / or operated as a service. Since humans are the only beings our law can hold accountable, this is a very useful form of explanation for establishing why things happened in terms of liabilities (legal or tax), praise, and a general emotionally-gratifying "how could this happen?" kind of answer. It can also tell you how you could be better using a system, or what to request in terms of changes to a system if you own or license it. All commercial AI should have this. So should the rest of commercial software. See further my blog on governance of AI or several of my 2019 AI ethics publications.
- Explaining what inputs resulted in what outputs. Actually, this describes two kinds of explanation:
- Even if a system is entirely the opaque kind of "black box", you can still try putting a bunch of related inputs in and find out what makes the output change. This allows you to check, for example, whether you would have gotten a loan if you had been a little taller or younger and so forth. This is sometimes called "digital forensics" (there's a small sketch of this kind of probing after this list). Probably all AI that interacts with people who didn't build it should have this.
- Or for robots like driverless cars, you can keep the airplane kind of "black box" that records logs of inputs and decisions taken by the system for later debugging. For robots at least, because such a black box is likely to have incidental personal data, this kind of black box should overwrite its old data regularly (see the logging sketch after this list). For other kinds of systems, e.g. tax or loan services, it might be worth keeping such logs around for some years, though obviously cybersecured. Probably all AI that affects decisions concerning people who didn't build it should have this.
- Seeing exactly how the system works. This is what people often mean by "explanation", but it's not necessarily more useful than the previous two broad categories (or even as useful), depending on your use case. But if you as a developer can actually understand all the components of your working system, then not only can you explain such details to (at least incredibly well-educated) users, you may also be better able to understand, maintain, and debug your own system. This broad category of explanation also has a couple of sorts:
- Encoding the AI in the kind of representations that humans can read, for example production rules, logic, decision trees, etc. Note that sometimes such systems will be so large and complex that they may not actually be very understandable anyway. Very few people can look at an orchestra score and know exactly how it sounds; maybe no one can really experience the qualia of polyphonic music that way. Gazillions of lines of computer code aren't exactly like an orchestra score, but hopefully you see my point.
- Fitting more transparent models to less transparent ones, where the AI is generated by models learned by a less transparent system, e.g. a deep neural network (DNN) learning system (there's a toy sketch of this after this list). Actually, this is how human intelligence works. We often don't really know why we do something, and we certainly don't and couldn't consciously determine every gesture, thought, or word choice. But we learn to guess why we are doing things based on models we acquire partly from our own experience, but quite a lot from being taught. So why you think you do what you do will depend a lot on what you've been told about yourself and about other human beings. If you're lucky (and you care enough), you'll keep learning more about how your own mind works for your entire life. Anyway, back to machine learning: work on this can be found going back decades, including recent work by Zoubin Ghahramani and Murray Shanahan.
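To make the input-probing idea concrete, here's a minimal sketch in Python. The ToyLoanModel and its fields are entirely made up for illustration; in reality you would only be able to query the opaque system, not read its decision rule.

```python
# A toy opaque model: pretend we can only call predict(), not see the rule.
class ToyLoanModel:
    def predict(self, applicant):
        return "approve" if applicant["income"] > 2.5 * applicant["debt"] else "deny"

def probe(model, applicant, feature, values):
    """Vary one feature and report the values at which the decision changes."""
    baseline = model.predict(applicant)
    flips = [v for v in values
             if model.predict({**applicant, feature: v}) != baseline]
    return baseline, flips

baseline, flips = probe(ToyLoanModel(),
                        {"income": 30000, "debt": 15000, "age": 47},
                        "income", range(20000, 60001, 1000))
print(baseline, flips[:3])  # 'deny', then the first incomes at which it would flip
```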
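And here's a minimal sketch of the flight-recorder kind of black box, using Python's standard log rotation so that old records are overwritten automatically. The retention period and field names are just placeholders; what's appropriate depends on the system and the applicable data protection law.

```python
import json, logging, time
from logging.handlers import TimedRotatingFileHandler

recorder = logging.getLogger("decision_recorder")
recorder.setLevel(logging.INFO)
recorder.addHandler(TimedRotatingFileHandler(
    "decisions.log", when="D", interval=1, backupCount=7))  # keep about a week

def record(sensor_inputs, decision):
    # one JSON line per decision, with a timestamp, for later debugging
    recorder.info(json.dumps(
        {"t": time.time(), "inputs": sensor_inputs, "decision": decision}))

record({"lidar_range_m": 12.3, "speed_kph": 48}, "brake")
```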
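Finally, a toy sketch of fitting a more transparent model to a less transparent one: a small decision tree is trained to mimic a neural network's predictions and then printed as human-readable rules. This uses scikit-learn on synthetic data; the surrogate is only an approximation of the original model, which is exactly why you check its fidelity.

```python
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic data standing in for whatever the opaque system was trained on.
X, y = make_classification(n_samples=1000, n_features=4, random_state=0)

opaque = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000,
                       random_state=0).fit(X, y)

# Fit the surrogate to the opaque model's *predictions*, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, opaque.predict(X))

# Human-readable rules, plus how faithfully they mimic the opaque model.
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(4)]))
print("fidelity:", surrogate.score(X, opaque.predict(X)))
```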
Tallinn, Estonia – gratuitous picture for the blog index :-)
The panel that drove the tweet that drove this blogpost. From left to right: Ben Cerveny, Steve Hsu, Nanjira Sambuli
But maybe because Steve had been attempting to establish AI authority by wielding his physics degrees, my new response in a tweet was that worrying about "deep explanation" was like worrying about whether the molecules of a table were going to hold together or let a plate fall through them. We don't worry about molecules when we set a table, and we don't need to worry about exact DNN weights for regulating or even programming AI. It's just not the right level of abstraction.
Microsoft (or at least its engineers, I don't know if it was coordinated) used to try to run this kind of "deep explanation" interference (though they didn't call it that) against the regulation of AI. They stopped two years ago, because they realised that ethics is the flip side of liability. Given the success of accountability and transparency in soft law like the OECD/G20 Principles of AI, I'd say that we are getting this kind of message across pretty well. But unfortunately there are still a bunch of actors who rail against authority and order, even when it is a big part of what protects them and their precious power and money. They come up with narratives about how damaging the EU or other governments are. Sure, everything can and must constantly be improved, but some disruptions are actually a very bad idea. For example, most war.
Anyway, to recap:
- Everyone who writes software, with or without AI in it, using or not using AI techniques (including machine learning) to write the software or run the final system, should keep decent records of what they've done. This is good practice for their own ability to maintain the code and run their organisation, and it's essential practice for demonstrating due diligence.
- Most AI systems that are commercially released should have processes in place so that you can do forensics to check how and why they work, whether by feeding in large ranges of parameters to check when a decision would change, or logging inputs and outputs of the system, or both.
- If vendors are held accountable for the outcomes of using their software / intelligent system, they may choose also to get extra transparency by using more readily-comprehensible representations of their AI, but they may choose not to. They may also choose to build other, simpler models of how their software runs to provide more forms of explanation, for themselves or for others. I think it's OK to leave this up to them, and their lawyers.
Comments
Perhaps a better requirement would be that operators of AI systems should on demand provide justifications for their decisions. That is something that humans can be expected to do also; and it provides a basis for challenges, empirical or logical.
Joanna J. Bryson, "The Artificial Intelligence of the Ethics of Artificial Intelligence", solicited and reviewed for M. Dubber, F. Pasquale, & S. Das (Eds.), The Oxford Handbook of Ethics of Artificial Intelligence, Oxford University Press.
See https://joanna-bryson.squarespace.com/s/the-artificial-intelligence-of-the-ethics-of-artificial-intelligence.pdf for a green open-access version.