Two things happened yesterday that seem worthy of putting together and clarifying.
First, I published a tweet correcting an article in tech review. Here's the article:
Listen: Many devices with AI are agents: that is, they alter things in the real world. Alarm clocks are also agents, and so are most chemicals. They effect change. Legal persons have responsibilities and face sanctions under human justice; human justice has no impact on an algorithm. Moral agents are things we hold responsible for their actions. AI agents, like alarm clocks, should not be held responsible as humans are; rather, they are the responsibility of their owners, operators, and manufacturers, just like any other artefact.
My tweet had a typo I correct here:
We should *correct* algorithms & *hold those employing them* responsible. #AI agents are not legal persons nor moral agents.
In case you are wondering about my reasoning there, here are a few sources to hold you while I work on my next paper on the topic:
- A recent article by S. M. Solaiman, "Legal personality of robots, corporations, idols and chimpanzees: a quest for legitimacy", lays out why humans, corporations, and even idols can be legal persons, but chimpanzees and AI cannot. Basically, chimps and AI each have only half of what legal agency requires: a capacity to suffer (chimps) and a capacity to understand legal obligations (AI).
- The above is one of five reasons I give for not othering AI in the linked blogpost (and my upcoming paper)
- A more complete blogpost on holding employers accountable at least for taxes: Robots are owned. Owners are taxed. Internet services cost Information.
- And one on the responsibility our authorship gives us over AI: Robots are more like novels than children
Second, I read this very sweet blogpost by a smart person at the meeting I'm attending: Will AI Need Art.
The desire to have AI share in our aesthetic is a desire to have it share in our identity. This suddenly shifts my entire understanding of the problem of over-identification with AI. I had always seen it as a mistaken but well-intentioned projection onto a passive recipient of the gift of rights. But now I wonder if the ultimate cause of this tendency (note: not Will's deliberate point!) is a drive to embrace perceived powerful actors into our own community. In other words, the tendency to identify with AI is actually an attempt to draw a perceived powerful actor into the ingroup. That would make it not a benign misperception, but rather an active groping for security.
AI is not your friend. It is something owned, designed and manipulated by other people or corporations. In fact, AI is best seen as extending the corporations that own it into your own home, business, and phone – into your own life.
That's my bottom line. We are empowered by AI, but so is everyone else, particularly those who own it. We are extended by ICT more generally, and so our autonomy as individual humans becomes blurred, and everything from privacy to responsibility to government revenue becomes confused.