Autonomous robot weapons in warfare


Also in the news in the last few days -- my colleague Noel Sharkey has made headlines warning about autonomous weapons. Obviously robots are open to the same accountability problems we saw in prosecuting the crimes at Abu Ghraib, where blame fell on those at the bottom rather than on the policymakers. The people who set the policies that result in robot use must be held responsible for their effects, not the robots or the technicians who run them.

But my larger concern with automation is that it spares the lives of our own soldiers at the cost of civilians in the countries we invade. Voters in both the US & UK seem to care far more about the very few military casualties we've suffered than about the hundreds of thousands of civilians who have died, more than would have died even under the sanctions regime previously considered inhumane. So as we reduce our own casualties further, what will stop us from inflicting more damage in the world?

Some people claim that robots will make fewer mistakes than humans since they aren't given to fear and rage. I consider this unlikely, since robots are prone to sensor error and faulty programming. But my previous argument holds regardless of whether individual robots cause more or less damage. The real question is whether we will be more or less likely to start &/or prolong wars if we have robot soldiers. I'm sure we will be more likely. I'm sure the response to Vietnam kept us out of a lot of such wars for thirty years, but we've been gradually ramping up again since Reagan was president, whose administration pursued an explicit strategy of "desensitizing" the public to war, starting with Panama & Grenada.

I know war can sometimes be necessary. But I also know that pressures like the financial interests of the military-industrial complex can make war occur more frequently than it otherwise would, which probably means it occurs more often than necessary.
