AI and Human Rights – My Own Concerns, and Comments on a Confidential Document

My Concerns about AI and Human Rights

I've been thinking about this for some time – particularly since learning I was going to be consulted about it. Here are my own concerns; afterwards I give some feedback on a document I can't share, but I think the points below are important and stand alone.

Is AI directly associated with autocracy? If so, why?

Is it possible to have so much information about people as individuals – their predispositions and capacities – so widely available, and still create a governance framework that advantages collaboration and justice rather than power concentration and manipulation? Does digital technology fundamentally alter the landscape of society into something that favours genocide, or any other type of silencing?

There are three alternative answers to this that I presently see: 
  1. Power concentration happens whenever we develop new technologies that reduce the cost of distance; this always leads to volatility and destruction, then eventually to the innovation of a new governance framework, e.g. Westphalia, democracy, antitrust regulation. IMO, we now need to be working on transnational antitrust regulation, because digital (and aerospace) technologies are so easily global.
  2. Any kind of disruption (not limited to technological innovation) is like a landslip. The first things that grow on the exposed earth are simple plants, often of a single species – weeds with shallow roots. They stabilise the earth, allowing a more complex and resilient ecosystem to follow.
  3. There really is something special about information transmission, and we will eventually (soon?) once again not have digital devices in our houses.
These are not mutually exclusive alternatives.

Is digital technology too much of a hazard?

Israel was famous for its use of AI to restrict civilian casualties despite facing opponents that used human shields. Now it is using those same skills to maximise civilian casualties. Could the same happen to any ethical use of AI? Will robot minesweepers become smart, walking bombs? Will surveillance systems well controlled by a judiciary – used only to spot terrorists and kidnapped children – be deployed in parallel to track all human movement?

I hope that there is a means of controlling how such power is deployed – this is a matter of cybersecurity, encryption, governance, and trusted monitoring. We need to know that we can disable such infrastructure, just as we can blow up bridges, if an enemy is abusing it against us. We haven't eliminated police and militaries; we have learned to regulate them. I hope AI is the same, but we must be willing to expend resources on these solutions.

But again, it's possible that there is something special about digital technology: it may simply be too fast and too easy to upgrade and to compromise. In that case, it cannot be attached to our critical infrastructure.

Valuing Humans

Finally, the most basic concern I've long had is that misconceptions about AI make people more likely to think that they don't need other people. People in power evidently have long held this opinion – that it's a better idea to lose a bunch of their young men than to deal with their neighbours, or that it will simplify their consolidation of power to target and eliminate a minority (or even, sometimes, a majority). The idea that robots will take all the jobs is ignorant. Jobs are relationships between people; automation tends to increase productivity and employment, but to disrupt wages, which can again lead to political instability.

The best thing we can do to ensure that we value people is to ensure that we give people value. That is, we need to ensure adequate redistribution of the outcomes of our productivity, both through fundamental services and through competitive wages. Humans love to compete; we are programmed by our biology to want to find a social role. I just blogged about that too (in the context of "basic income").

Comments on a Human Rights Framing 

There must be an emphasis not only on what is inappropriate for the use of AI, but also on how to prevent it. For example, if we have cameras that perform face recognition only on those approved by court order, how do we prevent that network from being usurped? Can we? A second example: Israel was famous for developing AI that allowed it to target only terrorists hiding under e.g. schools, yet a new administration very quickly reversed that to create “Where’s Daddy?” – software for maximising civilian casualties (a crime).

I disagree on two aspects of the confidential document's description of the AI Action Summit – it was the first (not the third) focussed on concrete action by governments and civil society, rather than on “safety” produced by AI itself or by commerce. Also, the main consensuses I saw were the need for regulation and for diversity – by "diversity" they meant addressing market concentration and digital sovereignty. I feel this progress happened because

  1. Trump and Musk had moved so quickly to dismantle the US state, demonstrating the danger of excess power concentration, and
  2. China had allowed DeepSeek’s release only a week or so earlier. 
These two facts also allowed people to finally notice the number of solid, established AI companies from other economies – “the scales fell from their eyes.” I agree that it was problematic that the US and UK didn’t sign (nor did ⅓ of the other attendees, though less loudly). But remember the context: less than two weeks later, the US backed Russia and the UK backed Ukraine in the UN, and while we were in Paris, the papers were reporting that the UK’s supposed nuclear deterrent was dependent on the US, and that the French nuclear submarines were being observed by Russian undersea drones. My experience of the British at the main summit was that they were terrified; perhaps they’d already heard Trump would back Putin at the UN. They still are terrified: their Conservative government had already DOGEd their state, and Labour seem to have no idea which levers of government are still attached to any mechanism, nor how to replace the stolen or demolished mechanisms of government, despite their enormous mandate. [Note: I am British; I studied 5 years in Scotland, worked 17 years in England, and took a UK passport in 2007 in order to have an EU one.]

I share your strong support of the UNESCO document; cf. my own blogpost on AI (global) governance that I put together for the UN.

Although I put a lot of time into the EU’s AI Act, it is only one part of the important suite of EU AI legislation, and perhaps the least important. Cf. my article Human Experience and AI Regulation, written in Spring 2024 despite its nominal date on Weizenbaum’s centenary.

You are too generous to AGI. Even Sam Altman, at the AI Summit, said the term has lost all meaning. Humans routinely build machines that exceed our own capacities, including governments, universities, and airplanes. I’m working on a new article and book on these concerns, but in the meantime, search for AGI in my blog [really, read the label "superintelligence"].

AI doesn’t “continue to evolve.” We continue to develop it. Given the confusion people have about intelligence and moral agency, it’s important not to speak metaphorically about AI. We need people to understand that AI is a technology we engineer, not a discovery of science.

I strongly support everything you wrote about your first theme.

I strongly support nearly everything you wrote about your second theme. With respect to governance, I believe it is essential that diverse and legitimate governments are the ultimate entities that exercise AI regulatory enforcement.

  1. People must be able to express and coordinate themselves, and doing so along geographic lines makes enormous sense both pragmatically (saving time, increasing communication) and in terms of shared interests (similar threats and opportunities born of location and history). I discuss this further below.
  2. Diversity and complexity are essential to avoid regulatory capture.

With respect to bias, do realise that bias is really just regularities – that is, information about the world. The real "problems" (actually, solutions!) are stereotypes and prejudice. Stereotypes are solutions because they are biases (regularities) we've decided collectively we don't want to continue – stereotypes are a record of society's decision to improve itself. And prejudice is acting in a way that perpetuates those stereotypes, so that is problematic; but identifying which behaviour is problematic is also part of the solution. See my explanation from when I published my 2017 paper in Science on AI "bias".
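To make "bias as regularities" concrete, here is a minimal sketch in the spirit of the word-embedding association tests behind that paper. This is not code from the paper: the three-dimensional vectors and the association() function below are made-up toy stand-ins for real embeddings, which have hundreds of dimensions learned from word co-occurrence in large corpora.

  import numpy as np

  def cosine(a, b):
      # Cosine similarity: the standard measure of association
      # between two embedding vectors.
      return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

  # Hypothetical 3-d "embeddings" – toy stand-ins for vectors that
  # would really be learned from statistical regularities in text.
  vec = {
      "flower":     np.array([0.9, 0.1, 0.0]),
      "insect":     np.array([0.1, 0.9, 0.0]),
      "pleasant":   np.array([0.8, 0.2, 0.1]),
      "unpleasant": np.array([0.2, 0.8, 0.1]),
  }

  def association(word):
      # WEAT-style score: how much more strongly a word associates
      # with "pleasant" than with "unpleasant" in the embedding space.
      return cosine(vec[word], vec["pleasant"]) - cosine(vec[word], vec["unpleasant"])

  print(association("flower"))  # positive: "flower" patterns with pleasantness
  print(association("insect"))  # negative: "insect" patterns with unpleasantness

The point of the toy: the "bias" such a score reveals is nothing more than a regularity of the training text. Whether we should act on that regularity is the separate, human question of stereotype and prejudice.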

Opacity is a design decision, not a necessity for good-quality AI. In fact, choosing a strategy of opacity undermines a company's own capacity to improve its AI. I expect the AI Act to benefit corporations worldwide, as the GDPR did. That claim is not yet published, but the data that backs it is available; see below. The GDPR apparently did this by harmonising European digital markets – the access to those markets was the benefit; compliance has both costs and benefits.

EU digital regulation is not a shakedown of the US and China. Rather, it's the construction of a market competitive with America's and China's, one that benefits anyone who does business in the EU.



Two other closely related writings of mine, plus a few more referenced in the discussion above:

Hello, Paris! I worry about autocracy! And about genocides!
(Actually, a photo from February; I should get out and take a picture in springtime.)

