The meaning of the EPSRC principles of robotics

Joanna J. Bryson, University of Bath, and Princeton Center for Information Technology Policy
This was presented at the AISB Workshop on Principles of Robotics, 4 April 2016, Sheffield, UK; a lightly revised version has been accepted for publication in Connection Science in 2017.

Update 27 April:  Deception (intentional or unintentional) and anthropomorphism are both listed as hazards in the new (April 2016) British Standard 8611:2016, Robots and robotic devices. Guide to the ethical design and application of robots and robotic systems.


In revisiting the principles of robotics, it is important to carefully consider their full meaning.  Here I briefly visit first the meaning of the document as a whole, then that of its constituent parts.
The EPSRC principles of robotics were generated as a deliverable by a group assembled with little guidance and no deliverable required.  The original intention of the EPSRC robotics event seems to have been only the discussion itself, or perhaps even only the fact of the meeting. The academics present wanted something to show for their time, and as a result much of the final day for all those present went into the creation of the three versions of the principles and their documentation.  Some of the documentation was extended, again by consensus, after the meeting.
It is right and fitting that there should be a way to examine and even update or maintain the document.  Even national constitutions have means for maintenance. However, it is critical to the efficacy of policy documents that they are not easy to change.  They should provide a rudder to prevent dithering, and as such are ordinarily more difficult to alter than they were to instantiate in the first place.  Note that some countries and other political unions have not found it easy to create even their initial constitutions for this very reason.  Therefore it's important to think carefully about the meaning of the principles.

The principles as policy

Technology policy, and policy more generally, is a surprisingly amorphous thing.  Like other aspects of natural intelligence, policy is not always found resident in the law or even in governance. Much of policy is unwritten, and some is not even explicitly known.  The UK is actually outstanding in its innovation of the common law, which acknowledges this and the importance of culture and precedent.  Nonetheless, in the cold light of a committee working on REF impact cases, we have to ask: are the principles policy?  I think the answer is "yes".  They are a set of guidelines agreed by a substantial if perhaps arbitrary fraction of the community they affect, and they are published on government web pages.
All policy has three components: allocative, distributive, and stabilising.  The allocative component is the process of determining which problems are worth spending time and other resources on.  In the case of the principles, this was instigated by the EPSRC (or some organisation above them) out of concern that the British public might reject robotics as they had genetically modified food.  We were told the rejection of robotics was seen as a severe threat to the British economy.  Note that each of the participants (at least those not specifically paid to attend) also made individual investments, allocating time to the problem of robot ethics, though for many this was confounded with an opportunity to become better known to their primary funding organisation.
The stabilising component is the one that ensures that the policy, once set, is incorporated into society in such a way that it is unlikely either to be quickly undone or to become much of a liability or matter of controversy.  In the case of the principles this has evidently been achieved at least to some level, since we are celebrating their fifth anniversary.  From talking to other authors, I know of none entirely enamoured with the final product, but all respect the (admittedly representative) democratic process by which they were achieved, and the importance of their colleagues' mutual commitment to the final product.  I for one would love to see the principles further reified into policy or even law, but I have yet to discover the process by which this might be accomplished.  However, they have been, and continue to be, drawn to the attention of various standards boards and parliamentary enquiries, as well as of the press and other academics.
I leave for last the most controversial aspect of policy: the distributive.  At its base, all policy is about action selection, and that implies the allocation or rather reallocation of resources.  Politics tries to brush over this, since it necessarily goes against the grain of those from whom the resources are reallocated, even in the cases where those individuals stand to gain net benefit.  We hate to lose control, but policies are for control.  "Tries to brush over" is in fact an understatement; making redistribution palatable may be the core project of politicians.
In this case, the government had very specific concerns about individuals who had been in the media promoting fear of robots, and were very clear in their desire to find ways to shift media attention and public impressions towards the safety of robotics.  In contrast, it was really the participants who brought up the other major shifts from sensationalism to pragmatism: the assertion that robots are not responsible parties under the law, and that users should not be deceived about their capacities.  The council representatives knew this redistribution of power would anger some of their outstanding funding recipients, and the participants knew the same about some of their colleagues.  Nevertheless, there was striking unanimity amongst the academics that the greatest moral hazard of robots was their charismatic nature and the incredible eagerness many people have to invest their own identity in machines, leading to the striking confusion about their nature that all of us had witnessed.  This charisma and confusion left the door open for all kinds of manipulation by corporations and governments, in which robots could be set up as responsible for, or even as surrogates for, human lives or values.

The principle of killing

Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security.

The first three principles were intended as corrections of Asimov's laws.  Robots are not responsible parties, so they cannot 'kill'; rather, robots should not be usable as tools for killing.  This simple rule made the transfer of moral subjectivity clear, and simultaneously met the pacifist desires of most present.  However, pragmatically, robots were already being used as weapons of war.  Laws that are unenforceable are generally considered to be of questionable or even negative utility.  We were persuaded that leading with a principle known to be false would significantly decrease our chances of cultural impact.  The meaning of the first principle might therefore seem neutralised by the compromise of the exception, but that robots are not to be weapons in civil society is still an important social point.  Beyond this, the fact that practical policy has to take into account the needs of the government to address both security and industry (as of 2014, the UK was the world's sixth-largest arms-dealing nation) also has meaning.  However purely academic some of us may wish our discipline to be, the fact that many of its products have immediate utility means that we cannot avoid impact on our world.

The principle of compliance

Humans, not robots, are responsible agents. Robots should be designed & operated as far as is practicable to comply with existing laws & fundamental rights & freedoms, including privacy.

The second Asimov law has to do with following instructions, but even the notion of obeying implies moral agency.  The original meaning of this principle was that robots are ordinary technology and must conform to ordinary standards and laws.  In the shaping of the principles as a suite, the second came to be the one that communicated further some of the peril of AI in general, and of AI mistaken for a moral subject in particular.  The emphasis on privacy reflects the special concern raised by a perceiving, intelligent physical agent occupying the exact same space as a human family.  A robot is fundamentally immersed in the human umwelt, more than any previous technology or pet, perhaps even more than some humans in a household, such as young children.  It has access to written and spoken language, social information, observed schedules, and so on.  Further, it may be mistaken for a pet or other trusted family member, its special capacity for perfect communication to the outside world temporarily forgotten, along with its abilities to learn regularities and classify stimuli.  In such cases, private information may be unintentionally stored in a public cloud, or even in a supposedly private cloud susceptible to hacking.  Forcing such a novel, human-like technology into compliance with standard legal norms of privacy and safety is a non-trivial task.

The principle of commoditisation

Robots are products. They should be designed using processes which assure their safety and security.

The final Asimov law concerns self-protection, but robots have no selves.  Instead this principle focussed on protecting humans from robots at the level of the robot's basic soundness.  The principle again brings us into awareness of the non-special, manufactured nature of the robot, in an attempt to head off the avoidance of legal liability through claims that robots have a unique nature.  The manufacturer of a robot should have exactly as much responsibility for the machinery working to specification as the manufacturer of a car or a power tool.  In fact, robots might be cars or power tools, but if so they should be more, rather than less, safe than the conventional variety of either.

The principle of transparency

Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent.

The first three principles established the legal framework for the manufacture and sale of robots as identical to that for other products.  The last two are intended to ensure that this status is also communicated to the user.  The principle of transparency seeks to ensure that individuals do not overinvest in their technology, for example by hiring a house sitter to keep the robot from being lonely.  Some roboticists object to this principle because deception is necessary for the efficacy of their intended application, such as making people feel less lonely so that they are less depressed.  Others contend that this principle denies the possibility that robots should be more than ordinary machines.
The first argument is empirical, open to experiment.  First it needs to be established that there is no way to trigger emotional engagement without deception, which seems unlikely given the extent of emotional engagement that people establish with fictional characters and clearly non-cognizant objects.  If a requirement for deception is experimentally established, then the tradeoff between the costs and benefits of deception can be debated.  The second argument, however, is incontrovertible.  The authorship we have over artefacts is a fundamental part of their machine nature: AI is definitionally an artefact. To some extent, we might even argue that this principle is self-limiting. If AI really were able to alter what it means to be a machine, then communicating this modified machine nature would still meet the principle.

The principle of legal responsibility

The person with legal responsibility for a robot should be attributed.

Finally, the fifth principle communicates the robots' status as artefacts in the most fundamental way possible.  They are owned, and that ownership must be legally attributed.  The fact that robots are constructed and owned is the reason I have previously argued that we are ethically obliged not to design or construct them to be psychological or moral persons, because owning persons is unquestionably unethical.  The argument is not that there exist person-like robots that we should demote in legal status, but rather that the necessarily demoted legal status means that we should not cause person-likeness to be a feature of any legally manufactured robot.
However, the principles of robotics do not go to this extreme of futurism. As I said earlier, they focus on communicating the present reality to a population so eager to own and identify with the superhuman that they might easily be led to believe that a robot badly manufactured or operated is itself to blame for the damage inflicted with it.  If you hear a horrible noise and find a car smashed into your house, you can quickly and easily identify the owner of the car, even if the car is presently empty, simply through its number plates or, in the worst case, through serial numbers.  The idea is that the same should be true if you find a robot embedded in your property, as the sketch below illustrates.  The participants in the robotics retreat accurately predicted a problem now already present in our society because of drones, and one that is being addressed in some nations with mandatory licensing of the sort the group recommended.
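To make the number-plate analogy concrete, here is a minimal illustrative sketch, entirely my own and not part of the principles: attribution could be as simple as a public registry mapping a robot's serial number to its legally responsible party, just as vehicle registration maps a number plate to a registered keeper.  All names and fields below are hypothetical.

    # Hypothetical sketch of an attribution registry: a lookup from a robot's
    # serial number to the person or organisation legally responsible for it,
    # analogous to a vehicle registration database.
    from dataclasses import dataclass

    @dataclass
    class ResponsibleParty:
        name: str     # the legal person or organisation attributed
        contact: str  # how to reach them, e.g. for a damage claim

    class RobotRegistry:
        """Maps robot serial numbers to legally responsible parties."""

        def __init__(self) -> None:
            self._records: dict[str, ResponsibleParty] = {}

        def register(self, serial: str, party: ResponsibleParty) -> None:
            self._records[serial] = party

        def responsible_party(self, serial: str) -> ResponsibleParty:
            # A robot found embedded in your property should always
            # resolve to an accountable human or organisation.
            if serial not in self._records:
                raise LookupError(f"no responsible party attributed for {serial!r}")
            return self._records[serial]

    # Usage: attribute first, then resolve, as with number plates.
    registry = RobotRegistry()
    registry.register("SN-0042", ResponsibleParty("Acme Robotics Ltd", "legal@acme.example"))
    print(registry.responsible_party("SN-0042").name)  # -> Acme Robotics Ltd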


To summarise, the EPSRC principles are of value because they represent a policy constructed at significant taxpayer and personal cost.  While no policy is perfect, ideally they should only be replaced by a new policy reflecting an equivalently high or higher level of investment by both government and domain experts.  Their purpose is to provide consumer and citizen confidence in robotics as a trustworthy technology fit to become pervasive in our society.  The individual principles each represent substantial concerns of the experts and stakeholders, though sometimes that representation is itself not perfectly transparent.  The overall goal was to communicate clearly that responsibility for the safe and reliable manufacture and operation of robots is no different from that for any other object manufactured and sold in the UK, and that the existing laws of the land should therefore be adequate to cover both consumers and manufacturers.
It is important to realise that this is not the case for all conceivable robots.  It is easy to conceive of unique works of art that qualify as robots and are not like commoditised products, or of robots that are simply built in an unsafe or irresponsible manner.  What people have more trouble conceptualising is that there may be cognitive properties, such as suffering, that might be feasible to incorporate into a robot, but that doing so would be as unethical as putting faulty brakes on a car.  The principles of robotics do not seek to determine what is possible; they seek to communicate advisable practices for integrating autonomous robotics into the law of the land.

Update (3 April 2016)  The above is about the EPSRC Principles of Robotics.  In case you need a whirlwind tour, a short version of my own position on AI Ethics is:
  • Authorship yields responsibility.  We are obliged to make only AI to which we are not ethically obliged.
    • e.g. backed-up minds, mass-produced bodies. 
  • Intelligence is not necessarily human-like, in fact it’s unlikely to be stably human-like without physical human phylogeny.
  • The main threats of AI are empowerment of government & corporations; erasure of privacy, liberty, variation & therefore robustness.