It's (still) not about trust: No one should buy AI if governments won't enforce liability

Yet another government agency has asked me how to get people to trust AI. No one should trust AI. AI is not a peer whose occasional mistakes we have to tolerate. AI is a corporate product. Corporations that make understandable mistakes can be let off the hook, but corporations that, whether through negligence or intent, produce faulty products that do harm must be held to account.

[Image: a coffee house with the title text on a sign in the corner]
"No we don't have wifi – talk to each other"
(Atopia Kaffeehaus, Berlin)
Hipsters help the economy too.
What governments really want is for people to buy AI, or to use it in government services – they think that would be good for the economy. But what is the economy? What we need to care about is our national security and the wellbeing of our citizens. Breaking the legal system is not going to benefit the economy. Here are two things that would break the legal system:

  1. Allowing corporations off the hook because people over-identify with AI and think it's a person. 
  2. Saying there is no product liability for AI. AI is a product; it's an artefact. Even if it's being used by people as a tool to perform services, that doesn't magically make the AI itself a service.

I was asked "What if a consumer reads a newspaper story about a fully autonomous lawnmower that learns something new, then goes into a neighbour's yard and burns down the shed? What if the neighbour has trouble getting compensation? Would that inhibit uptake of AI?" Well, yes, obviously, but this is not about trusting robots. This is in no way different from your dog knocking over the neighbour's barbecue and causing a fire, or something from your roof falling on your neighbour's car. Such an event would most likely indicate a clear defect in the lawnmower. If corporations that make and sell such defective products aren't held to account by the government for the damage, then people shouldn't buy that AI.

Or maybe the problem is that the owner didn't train the lawnmower properly, and the owner will be held liable for the damage. In that case, someone reading that story might think "I'd better not buy that lawnmower; it might be too complicated for me, too." And they'd be right!

The goal of the government should not be to bankrupt citizens in order to have more AI around than we can safely handle. If a government wants us to "trust" AI, it can start by making it far more apparent when it does succeed in protecting our interests. It should also protect those interests more vigorously.

And yes, I'm still angry that the UK found Facebook liable for damaging our elections, and then only fined it £2,000,000. A government can ask me how to convince people to buy AI after that government has demonstrated that it can protect its citizens' interests.

Marisa Tschopp blogged about trust today too (the same day!). Maybe we both went through the same interview... Anyway, nice piece: Three wrong questions about trust and AI.