My Concerns about AI and Human Rights
Is AI directly associated with autocracy? If so, why?
- Power concentration happens whenever we develop new technologies that reduce the cost of distance; this always leads to volatility and destruction, then eventually to the innovation of a new governance framework, e.g. Westphalia, democracy, antitrust regulation. IMO, we now need to be working on transnational antitrust regulation, because digital (and aerospace) technologies are so easily global.
- Any kind of disruption (not limited to technological innovation) is like a landslip. The first things that grow on the exposed earth are simple plants, often of a single species – weeds with shallow roots. They stabilise the earth, allowing a more complex and resilient ecosystem to follow.
- There really is something special about information transmission, and we will eventually (soon?) once again not have digital devices in our houses.
Is digital technology too much of a hazard?
Valuing Humans
Comments on a Human Rights Framing
There must be an emphasis not only on what is not appropriate for the use of AI, but also on how to prevent it. For example, if we have face-recognition cameras that identify only individuals approved by court order, how do we prevent that network being usurped? Can we? A second example: Israel was famous for developing AI that allowed it to target only terrorists under e.g. schools, yet a new administration very quickly reversed that to create “Where’s Daddy?” – software for maximising civilian casualties (a crime).
I disagree with two aspects of the confidential document's description of the AI Action Summit – it was the first (not third) summit focussed on concrete action by governments and civil society, rather than on “safety” produced by AI itself or by commerce. Also, the main consensuses I saw were the need for regulation and for diversity – by "diversity" they meant addressing market concentration and digital sovereignty. I feel this progress happened because
- Trump and Musk had moved so quickly to dismantle the US state, demonstrating the danger of excess power concentration, and
- China had allowed DeepSeek’s release only a week or so earlier.
I share your strong support of the UNESCO document; cf. my own blogpost on AI (global) governance, which I put together for the UN.
Although I put a lot of time into the EU’s AI Act, it is only one part of the important suite of EU AI legislation, and perhaps the least important part. Cf. my article Human Experience and AI Regulation, written in Spring 2024 despite its nominal date on Weizenbaum’s centenary.
You are too generous to AGI. Even Sam Altman, at the AI Summit, has said the term has lost all meaning. Humans routinely build machines that exceed our own capacities, including governments, universities, and airplanes. I’m working on a new article and a book on these concerns; in the meantime, search for AGI on my blog [really, read the label "superintelligence"].
AI doesn’t “continue to evolve.” We continue to develop it. Given the confusion people have about intelligence and moral agency, it’s important not to speak metaphorically about AI. We need people to understand that AI is a technology we engineer, not a discovery of science.
I strongly support everything you wrote about your first theme.
I strongly support nearly everything you wrote about your second theme. With respect to governance, I believe it is essential that diverse and legitimate governments are the ultimate entities enforcing AI regulation.
- People must be able to express and coordinate themselves, and doing so along geographic lines makes enormous sense both pragmatically (saving time, increasing communication) and in terms of shared interests (similar threats and opportunities born of location and history). I discuss this further below.
- Diversity and complexity are essential to avoid regulatory capture.
With respect to bias, do realise that bias is really just regularities, that is, information about the world. The real "problems" (actually, solutions!) are stereotypes and prejudice. Stereotypes are solutions because they are biases (regularities) we've collectively decided we don't want to continue – stereotypes are a record of society's decision to improve itself. Prejudice is acting in a way that perpetuates those stereotypes; that is problematic, but identifying which behaviour is problematic is also part of the solution. See my explanation from when I published my 2017 paper in Science on AI "bias".
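For intuition, here is a minimal sketch in the spirit of the word-embedding association tests (WEAT) behind that Science paper. The tiny 2-d vectors and word lists below are invented purely for illustration; the real tests use pretrained embeddings (e.g. GloVe) and the published word lists.

```python
# Minimal WEAT-style sketch (cf. Caliskan, Bryson & Narayanan, Science 2017).
# The 2-d "embeddings" here are toy values invented for illustration only.
import numpy as np

vec = {
    "flower": np.array([0.9, 0.1]), "insect": np.array([0.1, 0.9]),
    "rose":   np.array([0.8, 0.2]), "ant":    np.array([0.2, 0.8]),
    "pleasant": np.array([1.0, 0.0]), "unpleasant": np.array([0.0, 1.0]),
}

def cos(a, b):
    """Cosine similarity between two vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def assoc(w, A, B):
    """How much more word w associates with attribute set A than with B."""
    return (np.mean([cos(vec[w], vec[a]) for a in A])
            - np.mean([cos(vec[w], vec[b]) for b in B]))

def weat_effect(X, Y, A, B):
    """Standardised difference of association between target sets X and Y."""
    s = [assoc(w, A, B) for w in X + Y]
    return (np.mean(s[:len(X)]) - np.mean(s[len(X):])) / np.std(s)

# Flowers associate with "pleasant", insects with "unpleasant":
print(weat_effect(["flower", "rose"], ["insect", "ant"],
                  ["pleasant"], ["unpleasant"]))
```

The statistic simply measures a regularity present in the vectors; whether that regularity is a harmless fact about the world or a stereotype we've decided to leave behind is a separate, normative judgement.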
Opacity is a design decision, not a necessity for good-quality AI. In fact, choosing a strategy of opacity undermines a company's own capacity to improve its AI. I expect the AI Act to benefit corporations worldwide, as the GDPR did. That claim is not yet published, but the data that back it are available; see below. The GDPR apparently did this by harmonising European digital markets – access to those markets was the benefit; compliance itself has both costs and benefits.
EU digital regulation is not a shakedown of the US and China. Rather, it's the construction of a market competitive with America's and China's, and benefiting anyone who does business in the EU.
- Generative AI use and human agency – a 12-bullet blogpost about both law and higher education, from late February 2025.
- One Day, AI Will Seem as Human as Anyone. What Then? – a Wired article from late June 2022.
- Replika, and why AI ethics is a feminist issue – a blogpost from April 2023 about owning (or trading data for) products you believe are friends, though I've been writing about AI "friends" or "companions" for decades.
- Any claims about regulation harming innovation should be backed up by data. Here are the data behind my claims that the GDPR has in fact helped (unpublished) and that the EU is comparable to China on innovation (published).
- GDPR (unpublished): https://github.com/Vyhuyen/gdpr_firm_analysis/blob/main/Vo_thesis_GDPR%20impact.pdf – note that despite the filename, the code and instructions for running the analysis are also in the left margin of that webpage.
- Data on one class of AI patents defended worldwide through WIPO, and the market capitalisation aggregated across all such patent holders: https://zenodo.org/records/5070275 This is from a 2021 publication, also linked at the bottom of that page. We also have unpublished data for more recent years, and they only look better for the EU (except just after COVID), but that's not been put online yet, sorry.
- IRDT Annual International Conference: Artificial Intelligence and Fundamental Rights – Impact of the new European Union’s AI Act https://irdt.uni-trier.de/events/artificial-intelligence-and-fundamental-rights/?lang=en Sadly I haven't found videos online yet, but I know a proceedings is in the works, and I have asked.
- Yet more on why AGI is a bad way to think about people taking inadequate responsibility for the AI they develop:
- Of, for, and by the people: the legal lacuna of synthetic persons (the other two authors are actual legal experts on legal personality)
- Do We Collaborate With What We Design? (the other two authors are actual moral philosophers of technology)