Bot authoring is not tool building

There has been a lot of discussion about Microsoft's errors with Tay.  I hadn't felt it necessary to comment beyond tweets, because although Microsoft certainly should have known better – the established state of the art was of a higher standard – they took ownership of their mistake.

This is in striking contrast to Tony Veale's presentation at the AAAI Spring Symposium on Ethical and Moral Considerations in Non-Human Agents, where he insisted that anyone who thought he was responsible for his Twitter bots didn't understand agency.  As my own presentation explained (which sadly Veale did not attend – he only Skyped in for his own talk), moral agency is a strict and minuscule subset of agency.  As I've argued frequently in this blog (most recently in a post on the UK's Principles of Robotics), it's not that we can't conceive of a legal or moral system in which AI is a responsible agent; it's that keeping AI a human or corporate responsibility is the best option, both from the perspective of human society and from that of any potential (so far unbuilt) AI that might suffer due to its unequal relationship with its creators.  We are obliged to make AI we are not obliged to, and while individuals may violate that obligation, governments and societies can refuse to condone that, so that such systems would never be legal products.

What's made me blog is a tweet by an AI colleague I greatly respect, Alex J. Champandard, founder of http://aigamedev.com/. Alex said:
Blaming bot authors for those interactions is a surefire way to halt progress in Creative AI. You don't blame Photoshop for what users make!
This is interesting, because I believe Alex is confusing an authored artefact (the bot) with a tool for authoring artefacts (Photoshop).  And I can see why this confusion might be made, since one way to phrase my own claim in the second paragraph above is that AI should be seen as a tool.  But I don't think "tool" is the right metaphor for AI artefacts.  AI itself is a tool, but we use it to create intelligent prosthetics which proactively pursue goals we have determined for them (either directly, or by determining how they will determine their goals).

An AI artefact is indeed an agent; it changes the world.  Intelligence transforms perception into action – that's how it's defined – and agency is the capacity to change an environment.  Moral agency is being responsible for that transformation.  Chemical agents are never morally responsible; 2-year-old children are responsible only in limited circumstances.  I firmly recommend that AI bots should likewise never be responsible, though ensuring this requires an effort of policy.  It also requires powerful companies and individual trendsetters to show leadership.

Therefore (unusually) today I am very happy to support Microsoft, because they stepped up and said:
Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time. We will take this lesson forward as well as those from our experiences in China, Japan and the U.S. Right now, we are hard at work addressing the specific vulnerability that was exposed by the attack on Tay.
And I must respectfully but strongly disagree with my friend Alex.
  1. It doesn't limit creativity meaningfully to take responsibility for your creations.
  2. Doing so with twitter bots is within the established state of the art.
  3. Artefacts we create with AI embedded are not the same as products we sell to allow others to make such creations.  Had Tay been a bot-creation tool and had 4% of people created abusive bots, then we might condemn those 4% of people, but still talk about regulating the tool.  But Tay was a single bot many people were encouraged to interact with, and as such strictly Microsoft's creation and responsibility.