How do we hold AI itself accountable? We can't.

I had two really important AI ethics papers come out last year. The Science paper on semantics and prejudice I blogged about at least four times, but I just realised that I've never really blogged about the paper on legal personhood that I wrote with two leading scholars of legal personality, Tom Grant of Cambridge and Mihailis E. Diamantis of Iowa. Since they had far more influence on the paper than I did, I can sincerely and humbly say that it is just a great paper and everyone should read it: Of, for, and by the people: the legal lacuna of synthetic persons. Long term, I think it may have more impact than the Science paper, to be honest.

Update: I've now written a more extended academic book chapter with the same title as this blogpost.

I just had an email about that paper, and it offered me a chance to write about it more succinctly. I reproduce the correspondence here.

Anonymised initial email: "I'm writing to you in view of your article 'Of, for, and by the people: the legal lacuna of synthetic persons'. What are some of the mitigation measures that should be in place to ensure synthetic persons are legally accountable for their acts in case they are granted electronic personhood?"

Here is my response:

Thank you for your interest in our work. I trust you have read the paper to which you refer? Since it is open access I hope you have no trouble getting a copy, but if you do have trouble let me know; I can even send a hard copy if necessary. The reason I ask is that the point of our article is that there is no way to ensure that a synthetic person can be held legally accountable. Although many people think the purpose of the law is to compensate, its real function is to maintain order by dissuading people from wrongdoing, by making the costs of transgression clear. However, none of the costs that courts can impose will matter to an AI system. While we can easily write a program that says "Don't put me in jail!", the fully systemic aversion that a human has to losing social status and years of a short life cannot be programmed into a synthetic device.

Law was invented to hold humans accountable, so only humans can be held accountable with it. Even the extension of legal personality to corporations only works to the extent that the real humans who control those corporations suffer when the corporation does wrong. Similarly, if you build an AI system and allow it to operate autonomously, it is essential that the person who chooses to let it operate autonomously is the one who goes to jail, pays the fine, and so on if the system transgresses the law. There is no way to make the AI system itself accountable.

Having said that, it is quite easy to make the people who use AI accountable, easier in fact than within ordinary human organisations. What I recommend is requiring that the way a system is built, and, if it uses machine learning, the way it is trained, be fully documented, and that the documentation be encrypted and its integrity guaranteed. Further, many of the system's operations, both its decisions and what it perceived when it made them, can be recorded, a process called logging. This makes the system accountable in the sense that you can do accounting with it, just as you can use books to hold a company accountable for its finances. But it is the company's executives, not its books, who must be held responsible, and likewise it is the humans behind an AI who must answer for the evidence gathered by these methods, whether conventional books or digital logs.
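To make the logging idea concrete, here is a minimal sketch of what tamper-evident decision logging could look like. This is purely illustrative and not from the paper: the names (DecisionLog, log_decision) are hypothetical, and a real deployment would need far more, such as secure timestamping and off-site storage. The core idea is that each entry includes a hash of the previous one, so altering the record after the fact is detectable, much like a well-kept ledger.

```python
import hashlib
import json
import time

class DecisionLog:
    """Append-only log in which each entry is chained to its predecessor
    by a SHA-256 hash, so any later tampering breaks the chain."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # sentinel hash for the first entry

    def log_decision(self, perceived, decision):
        entry = {
            "timestamp": time.time(),
            "perceived": perceived,   # what the system observed
            "decision": decision,     # what it decided to do
            "prev_hash": self._last_hash,
        }
        # Hash the canonical JSON form of the entry to chain it to its predecessor.
        serialized = json.dumps(entry, sort_keys=True)
        self._last_hash = hashlib.sha256(serialized.encode()).hexdigest()
        self.entries.append({**entry, "hash": self._last_hash})

    def verify(self):
        """Recompute the whole chain; returns False if any entry was altered."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            serialized = json.dumps(body, sort_keys=True)
            if hashlib.sha256(serialized.encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

# Hypothetical usage: record what the system perceived and what it decided.
log = DecisionLog()
log.log_decision({"obstacle_ahead": True}, "brake")
log.log_decision({"obstacle_ahead": False}, "proceed")
assert log.verify()  # chain intact; editing any past entry would fail this
```

The point of such a record is not to punish the AI system, which as argued above is impossible, but to give courts and regulators reliable evidence about what the humans responsible chose to deploy.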


Here are two previous blog posts where I mentioned the above paper, though I did not really talk about its content as I just have.
Some previous posts more on the topic of the paper (the ones that brought me to write it):

Tom also bought me fish & chips.
(We haven't met Mihailis IRL yet.)
