We're making progress in robot policy

In many recent discussions, notably on the last day of the AAAI Spring Symposium on Ethical and Moral Considerations in Non-Human Agents, but also around the table on kickoff day for new fellows at the Princeton Center for Information Technology Policy, we have talked about not only the difficulty of helping to form policy, but even of being sure what policy is.

In particular, I was confused about the status of the EPSRC Principles of Robotics.  They're a web page, and an agreement reached by a set of experts, but are they policy?  They were meant to correct and replace Asimov's Laws, which the general public often takes to be the solution to AI ethics, but surely those are just fiction, not policy?

As I blogged at length recently, I've come to realise that anything that guides action is policy, so the EPSRC Principles are policy exactly to the extent that governments, reviewers, and corporations take them to be necessary guidelines.

I'm excited to say that the Principles have now helped guide some action that will hopefully guide further action. The new (April 2016) British Standard BS 8611:2016, Robots and robotic devices: Guide to the ethical design and application of robots and robotic systems, has been heavily influenced by the Principles, for example by including deception (intentional or unintentional) and anthropomorphism as ethical hazards.
I was a little involved, attending two or three meetings as a "participating observer", but I left the UK for my sabbatical before the standard was finalised.  Much of the credit for keeping the Principles in the public mind, and therefore making them policy, goes to Alan Winfield, who was (like me) one of the coauthors of the original Principles, and also a part of the BSI working group.
