Notes on Security, Terrorism, and AI / Digital Technology


 I've been asked to speak to the European Commission's Radicalisation Awareness Network (RAN) on "Ethical challenges for governments in using new technologies in preventive efforts" (I'm just one person on a panel of three talking for 10 minutes). I needed to write out my notes somewhere I could read them, and thought you (dear reader) might be interested too.

  1. Just to introduce myself, I'm an academic, with expertise in both natural and artificial intelligence, particularly systems AI. I'm not trying to sell anything, though I am working to understand how the increased transparency the digital revolution affords can be used to increase accountability rather than confusion.
  2. I want to start from the very basics. AI and blockchain are not magic; they do not ensure or secure truth or fairness. They are technologies built and used by humans. What they record is what we put into them, where 'we' can include hackers, developers, funders, even society at large and our implicit biases if machine learning is used in the course of development. AI can still help – we increase our certainty about events as we get more independent sources of knowledge about them (there's a toy sketch of this arithmetic after the list), and AI and digital technologies can be used to help "connect the dots", not only by doing AI search, but also simply by making it easier to keep track of things. But all of this holds only IF a good system is designed, put in place, and cybersecured from the ground up. Systems engineering, and cybersecured records of systems engineering, are critical to the ethical and legal application of AI.
  3. Another baseline I need is the difference between war and policing. I'm sure there are many, but the one I'm most aware of as a roboticist and ethicist is that only in the context of war are we allowed to kill, unless our own lives are in danger. For this reason, no robot can be used to kill outside the context of war, because it has no life to endanger: data in a digital system can always be backed up and/or transmitted. The portability and near-perfect replicability of data is precisely why the digital age is such a challenge to governance. The reason I wanted to mention this baseline here, though, is that my understanding is that fighting terrorism in Europe is either policing, or possibly a very brief declaration of war during which ordinary law is very temporarily suspended – a state of emergency.
  4. This leads me to the first point I want to make about using digital technologies against terrorism in the context of Europe. I view it as absolutely essential that the EEA continues to be regulated as a distinct, harmonised region. Although digital communication unites the world in ways we haven't previously seen, so did the horse. We have to adapt our governance, but not abandon it. We have different possibilities and different threats because of our geography, our economies, our cultures, our history, and our institutions. It is critical to our security, and it may well benefit the rest of the world's, that we continue to stabilise our high standards of defending fundamental rights. Many of these lead to a population with high levels of wealth, education, and social mobility, from which we can derive economic and strategic strength. And the nature of these strengths is that they can benefit the world, as we've seen not only with the GDPR but with our COVID vaccination policy: we are the only global region with roughly equal expenditure on exporting vaccinations–including to healthcare workers in poor countries–and vaccinating our own citizens. We should strive to be a peer not only to larger nations, but also to other collections of small-to-moderately-sized countries that might find economic and regulatory power through coordination and harmonisation in their own regions.
  5. Given the importance of our dedication to fundamental rights, I'm worried that the biometric technology "bans" in the AI Regulation include exceptions for both kidnapping and terrorism. I can believe that, just as with COVID, there can be times when we should all stay inside and perhaps hide our faces because of an ongoing security threat. But we cannot allow such exceptions to fundamental rights to be made for anything but an extreme emergency, when automatically revocable emergency actions can and should be taken. It needs to be clear that losing an election, for example, is not such a state of emergency. Unfortunately, neither is a kidnapping, however 'high profile' the case. Kidnappings occur every day, mostly by parents unwilling to accept custody decisions. We need to clarify what an emergency is, and how extraordinary these are. It's possible that the ban should be made complete.
  6. Related to the political and practical likelihood of such complete bans on technology use, we need both regulators and technology dedicated to recognising the misapplication of AI. We need to proactively "sniff" for organisations that are using illegally retained personal data or other proscribed technologies, and then act when violations are detected (a toy sketch of such an audit check also follows the list). This needs to become part of our repertoire of state tools for defending our citizens' rights, just as environmental and consumer agencies are. What is illegal–such as personal data retention proscribed by the GDPR–is still going to happen. We need to prosecute those who act on information they should not have.
  7. This leads to my final point. The digital can (as we say in America) turn on a dime. You can delete digital data in an instant, unless it is well backed up in a thorough and distributed manner. This can break justice, as well as damage science, culture, and history. You can also suddenly come to own others' data through a successful cyber assault. Or an election. A new government can get elected–or indeed invade–that does not have the same views on fundamental rights as the one that was entrusted with the data. As I understand it, a big reason that so few Jewish Dutch people survived Nazi occupation compared to Jewish Belgians is that the Dutch had better-organised data. Note that–as that implies–this is not only a digital problem. As I understand it, ISIS was also remarkably thorough about documenting who was in power–and who those powerful people cared about personally–in every village it invaded. These records were kept entirely physically, on paper, and were used to great effect. The difference with the digital is that you can gather data very quickly, and it is easier for it to leak undetected. May I also point out that Johnson & Trump inherited what the WHO ranked as the number two and number one pandemic preparedness plans, and threw them out the window. That's democracy, or it can be.
  8. One solution that hadn't come up yet [by the time it was my turn to speak] to the problem of the big tech giants is antitrust. Two questions that weren't answered by the GAFAM speaker yesterday are:
    1. Who has oversight over efforts to alter behaviour? Isn't doing so without consent about to be illegal in the EU under the new AI Act?
    2. How do we ensure that if we learn to modify "radical" behaviour successfully on social media through these techniques, the same techniques aren't applied in other domains?
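
A technical aside on point 2, for readers who want it: the claim that independent sources increase our certainty is just Bayes' rule applied repeatedly. Below is a minimal Python sketch; every number in it (the prior, the sources' reliability rates) is invented purely for illustration.

    # Toy illustration: independent sources increase certainty about an event.
    # Naive Bayes updating - each independent report shifts the posterior
    # probability that the event really happened. All numbers are invented.

    def update(prior: float, true_positive: float, false_positive: float) -> float:
        """Posterior P(event | one more source reports it), via Bayes' rule."""
        numerator = true_positive * prior
        return numerator / (numerator + false_positive * (1.0 - prior))

    prior = 0.10  # initial belief that the event occurred
    # (true positive rate, false positive rate) of three independent sources
    sources = [(0.8, 0.2), (0.7, 0.3), (0.9, 0.1)]

    belief = prior
    for tp, fp in sources:
        belief = update(belief, tp, fp)
        print(f"after source (tp={tp}, fp={fp}): P(event) = {belief:.3f}")
    # With these made-up rates the posterior climbs from 0.10 to roughly
    # 0.31, then 0.51, then 0.90 as each independent confirmation arrives.

The catch, of course, is that this only works if the sources really are independent. Correlated sources–or one compromised upstream feed copied by many–double-count evidence, which is exactly why provenance and cybersecured records matter.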
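And a sketch for point 6: regulators obviously can't search other organisations' databases at will, but given audited access to (or mandated disclosure of) processing records, checking retention against declared purposes can be mechanical. This is hypothetical–the record fields, purposes, and retention limits below are all made up, not drawn from the GDPR's text.

    # Hypothetical audit sketch: flag personal-data records held beyond a
    # declared retention period. Field names and limits are invented; a real
    # audit would be driven by the controller's own documented retention policy.
    from dataclasses import dataclass
    from datetime import date, timedelta

    @dataclass
    class Record:
        subject_id: str
        purpose: str     # declared purpose of processing
        collected: date  # when the data was collected

    # Declared retention limits per purpose (invented numbers).
    RETENTION_LIMITS = {
        "billing": timedelta(days=365 * 7),  # e.g. tax law may require years
        "marketing": timedelta(days=365),    # consent-based, shorter-lived
    }

    def overretained(records: list[Record], today: date) -> list[Record]:
        """Return records held longer than their purpose's declared limit."""
        flagged = []
        for r in records:
            limit = RETENTION_LIMITS.get(r.purpose)
            # An undeclared purpose has no stated legal basis: flag it too.
            if limit is None or today - r.collected > limit:
                flagged.append(r)
        return flagged

    today = date(2021, 6, 1)
    sample = [
        Record("a17", "marketing", date(2019, 5, 1)),  # held past its limit
        Record("b42", "billing", date(2020, 1, 1)),    # within its limit
        Record("c03", "profiling", date(2021, 1, 1)),  # undeclared purpose
    ]
    for r in overretained(sample, today):
        print(f"flag: {r.subject_id} ({r.purpose}, collected {r.collected})")

The hard part is not this loop but getting trustworthy access to honest records in the first place–again a question of systems engineering and audit infrastructure rather than of AI.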
Other blogposts you might be interested in:
feet and foot prints
