I came into AI ethics via observing the over-attribution of human characteristics to intelligent systems – principally where those systems happened to be in some way humanoid, e.g. a robot with a human-like form. In decades of trying to help people get clarity about AI, I've come to two conclusions concerning ethics and justice.
The first is that the best way to think about ethics is as the entire set of behaviours that help a society perpetuate a form of itself into the future. By this account, security is essentially part of ethics, as are identity-signalling matters like dress and language, since identity too is key to a society's existence. Some ethicists used to criticise this account strongly as necessarily morally relativistic, but it is not: you can still say your society is more ethical than another's; you just have to specify by what metric. Nowadays this definition has been widely adopted. In fact I now often hear people say, "if a company is talking about ethics they are trying to avoid legal oversight, because everyone knows ethics varies by society. If you really want to be moral you would be talking about human rights."
The second is that systems of justice only work if there is relative equity. In fact, we know that economies work best not with perfect equity, but with a Gini coefficient of about 0.27 (0 means everyone has the same amount of money; 1 means one person has it all). The reason we need to be nearly equal is that this is the only way to harness our mutual strength and ensure justice is enforced; see further the theorising of Paul Gowder. The reason we need a little inequity is probably our need for motivation, or to channel extra resources to those providing the most useful contributions at the moment.
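To make the Gini figure concrete, here is a minimal sketch of how the coefficient can be computed from a list of incomes, using the standard formulation over sorted values (the function name and example numbers are mine, for illustration only):

```python
def gini(incomes):
    """Gini coefficient of a list of non-negative incomes.

    0 means everyone has the same amount; values approaching 1 mean
    one person has nearly everything. Uses the standard closed form
    over sorted values, equivalent to the mean absolute difference
    between all pairs divided by twice the mean.
    """
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    # Weight each income by its rank (1-based) in the sorted order.
    weighted = sum(rank * x for rank, x in enumerate(xs, start=1))
    return (2 * weighted) / (n * total) - (n + 1) / n

# Perfect equality gives 0; extreme concentration approaches 1.
print(gini([10, 10, 10, 10]))   # 0.0
print(gini([0, 0, 0, 100]))     # 0.75 (the maximum for four people)
```

A fully equal four-person economy scores 0, while one person holding everything scores 0.75 (the maximum for n = 4 is (n − 1)/n); the 0.27 figure in the text sits, as the argument requires, much closer to the equal end.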
From these principles we can conclude that the only way to ensure the ethical or moral application of lethal autonomous weapon systems (LAWS) is to ensure that these systems are transparent and are used transparently. This does not mean we need to forbid the use of complex learning algorithms. The extent to which "self-learning" takes place is hugely exaggerated: we do not actually have AI forming entirely novel and incommunicable concepts, particularly not while in flight. At any rate, the use of machine learning is a well-understood procedure of systems engineering, and we can track whether best practice and due diligence are applied to it as to any such process. If anything, the digital arena makes it easier to capture and analyse information about development, deployment, and operation.
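The claim that the digital arena makes capture and analysis easier can be illustrated with a sketch of the kind of provenance record a development process might log at each training run. Every name here (the record fields, the model and approver strings) is hypothetical, chosen only to show how little machinery such due-diligence logging requires:

```python
from dataclasses import dataclass, asdict
import datetime
import hashlib
import json

@dataclass
class TrainingRecord:
    """Illustrative audit record for one training run of a learning system."""
    model_name: str
    dataset_hash: str   # fingerprint of the training data actually used
    code_version: str   # e.g. a version-control commit id
    trained_at: str     # timestamp of the run
    approved_by: str    # who signed off on this run

def fingerprint(data: bytes) -> str:
    """Short, stable fingerprint of a blob of training data."""
    return hashlib.sha256(data).hexdigest()[:12]

# A hypothetical run, serialised so it can be stored and audited later.
record = TrainingRecord(
    model_name="example-classifier",
    dataset_hash=fingerprint(b"...training corpus bytes..."),
    code_version="a1b2c3d",
    trained_at=datetime.datetime.now(datetime.timezone.utc).isoformat(),
    approved_by="review-board",
)
print(json.dumps(asdict(record), indent=2))
```

The point is not this particular schema but that development, deployment, and operation all leave machine-readable traces that can be kept and analysed as in any systems-engineering process.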
Therefore it is essential that we think of LAWS, and indeed all AI, as extensions of the individual or corporate entities that own and operate them, and also, in other (quite conventional) ways, of those that initially develop them.
I want to close this initial round with an example of the kind of deception being played out in this space. I had two different individuals from the same leading political party of a NATO partner nation tell me what was clearly a talking point they had been served – that we should worry because our American partners were unable to access the best AI available, due either to data restrictions, population numbers, or activists refusing military work. I have heard and refuted such stories many times before, but what was striking in this case was a claim that Chinese facial recognition was so good that a drone or other missile could, from the face alone, compute the location of the heart and pierce it precisely.
The level of face-recognition skill needed to guesstimate the location of a heart has existed for decades, and such a system could probably be assembled in a high school today. But as Heather Roff says, military actors do not actually look another individual in the face, acknowledge them as a mutual moral agent, and then shoot them through the heart in some perversely dignified way. If that was ever what war was about – which is unlikely, since even chimpanzees "dechimpify" their opponents in sectarian conflict – it has not been for hundreds of years.