Are robot weapons less responsible than child soldiers?

This post is literally a letter home.  My dad asked me whether instructing an autonomous weapon was really that different from training soldiers, especially child soldiers.  I'm not sure, but here's my best guess.  Sorry for the lack of attribution; I may get to that later.

That's an awesome question -- I hardly ever get a question that good in talks!
There are two answers:  one assuming the technology works, and the other assuming it doesn't.  I'll take the second one first, because it's easy and probably temporary.  Right now, instructing a robot is nothing like training a person -- it doesn't see, think or act anything like a person, so the people giving it instructions can't treat the process as anything like human training.  With current robots it's therefore relatively easy to keep their "instruction" more along the lines of learning to use a machine, like a sophisticated car or phone.
However, we can imagine that robots become more directly instructable in the future.  What I love about your question is that it is really not about robots so much as about people.  As I understand it, only about 1 in 5 American soldiers actually fired their weapons at the enemy in WWI & WWII.  They couldn't bring themselves to do it, or were afraid -- I don't know.  Apparently we are somehow fairly sure that they didn't fire their guns.  (I learned this some time ago; you may know more given your involvement in the peace fellowship.)
As I understand it, after the Korean War and the tactics used there against our soldiers, the American (and probably other countries') military became fascinated by the idea of brainwashing.  Could you make it more likely that someone would do what you want?   Could you make the behaviour of a unit more reliable and predictable, and thus easier to plan around?  That's what basic training is about.  It's not just about becoming fit; it's about becoming very likely to follow orders.  You are indoctrinated intellectually and also physically coerced to be very responsive to your colleagues and your commanders.

However, in general we still want soldiers to think, and there is still a notion of responsibility for the soldier as well as their commanding officers. A lot of the military philosophy about autonomous robot weapons deals with how we can make robots responsible, particularly when there's no real way to punish them.
My own answer to that is that we shouldn't -- we should leave responsibility in the hands of the operators and the commanders, and as "operators" increasingly become commanders (if operating gets easier due to localised/limited autonomy), that doesn't change anything except that the commanders have fewer scapegoats for their actions.  Other writers hold this opinion too.
So, what about child soldiers?  I think that, depending on their age, they also cannot be seen as responsible for their actions.  The same goes for any soldier whose freedom of action is limited by punishment that might be meted out against them or their family.  I read about Serbian soldiers who refused to fight in Kosovo, asking their parents' permission first.  The parents gave their permission and accepted that their sons were returned to them in body bags rather than have them commit war crimes.  Obviously those people are heroes, but it's harder to blame their colleagues for not doing the same than it would be to blame a "rogue" soldier who commits a crime on their own.  Yet even there, much of the behaviour at Abu Ghraib, while obviously morally wrong at the individual level, was also a consequence of people being put in positions for which they had no or inappropriate training and/or supervision.  So the people who created those situations were at least as culpable as the people who performed the actions, though that is in no way to say the people who performed the actions were not culpable.  More than one person can be at fault for the same action, and in different ways.
Which is all to say that absolutely, the deployment of weapons and of soldiers is the responsibility of commanding officers, and in that way robots are like any kind of soldier.  But since Nuremberg, and probably a lot earlier, we have also held adult humans responsible for recognising and responding when orders are wrong.  Robots may even be able to assist with this, but my point is that ethically the buck should never stop with them.
One other interesting thing -- some ethicists worry that precisely because you can, and in fact must, expect robots to obey rules and laws exactly, they may create hazards, e.g. in driving, by showing less judgement or "common sense" than a human would: stopping suddenly to avoid hitting something rather than swerving slightly out of the lane even when there's no one coming the other direction (see the toy sketch below).  Even if we could program the common sense in (which might not be hard), the question is how the law and liability would respond if something went wrong while a robot car was not perfectly within the law.  I suspect that robots will stay within the law, that humans will learn to adjust their expectations accordingly, and that autonomously-driving cars will be clearly identified, just as learner cars are now, to make this all easier.
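To make that contrast concrete, here is a minimal sketch in Python.  It is purely hypothetical -- not drawn from any real driving system -- and the policy names, inputs and actions are all my own illustrative assumptions.

```python
# Toy illustration only: hypothetical names, inputs and actions,
# not code from any real autonomous-driving system.

def strict_policy(obstacle_ahead: bool, oncoming_traffic: bool) -> str:
    """Obeys the law exactly: never crosses the lane line,
    so the only evasive action available is braking.
    (Ignores whether the opposite lane is clear.)"""
    if obstacle_ahead:
        return "brake hard"   # legal, but may surprise the driver behind
    return "continue"

def common_sense_policy(obstacle_ahead: bool, oncoming_traffic: bool) -> str:
    """Human-like judgement: edges over the lane line
    when the opposite lane is clearly empty."""
    if obstacle_ahead and not oncoming_traffic:
        return "swerve slightly"   # technically breaks the lane rule
    if obstacle_ahead:
        return "brake hard"
    return "continue"

# With an obstacle ahead and an empty opposite lane, the two
# policies diverge: the strict one brakes, the human-like one swerves.
print(strict_policy(True, False))        # -> brake hard
print(common_sense_policy(True, False))  # -> swerve slightly
```

The liability question is exactly about that divergence: if the "swerve slightly" branch ever goes wrong, the car was demonstrably outside the law at the moment of the accident.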
Joanna

They talked about the military use of robots that would be programmed to decide who or what to shoot without human interaction. Is that very far removed from basic training for soldiers, especially in countries that use child soldiers?
Dad


--
Joanna Bryson
http://www.cs.bath.ac.uk/~jjb/

Comments

Patricia Lowe said…
The problem for me is after the events. A robot can be decommissioned and will not attack without orders. A child soldier has his life affected at every level, with ramifications reaching far beyond his immediate actions -- e.g. in making relationships -- and far into the future.
Joanna Bryson said…
Hi, yes, I agree: it's completely unethical to use child soldiers. Using children in war is an abuse of the child, but it's impossible to abuse a robot in that same sense; a robot has no human rights. It is possible that a child may be more capable than a robot of avoiding committing war crimes themselves (particularly if the child is relatively mature), so it's *possible* that the other people involved would be worse off if a robot were used instead. But I think that is very unlikely. In any case, my post is not about either of those two points. The point of my post is that neither robots nor children should be held responsible for their actions in war.