AI Is Not a Unitary Actor: My Response to the UN Interim Report Consultation


This is my response to the UN's AI Advisory Body's open consultation for their Interim Report: Governing AI for Humanity. As with my response to the 2020 EU AI Whitepaper Consultation, I actually have three forms of response: 
  1. the document we're commenting on, with my annotations;
  2. the way I filled in their form (below); and
  3. Barbora Bromová and I are working on a longer article which will summarise the content points (and will not be so much about this specific document).
Barbora is one of my master's students at Hertie School, where she is presently visiting from Sciences Po. She is attending my course Governance and Politics of Artificial Intelligence.

Authorship: all the annotations in the document linked above are mine, as is most of the text below. But this wouldn't have happened in time for the consultation without Barbora's help: she assisted with the editing, and she authored some good text which helped frame my thinking about the submission below (we were initially working in parallel) and which will appear in the forthcoming article.

 

Executive Summary

  1. AI is not a unitary actor. It is not unitary, and it does not act. It is a set of software engineering techniques and digital services. Thus it is meaningless to discuss what AI will do, or to look for singular solutions for how to govern it. 
  2. Probably a more profitable enquiry for the UN's AI Advisory Body is this: How have digital service and product innovations altered what governments can do, and therefore governments' landscape of obligations? And how can the UN help governments through the transitions to best serving these new obligations? Note that every government will have different capacities and different threat and opportunity landscapes, so one size by no means fits all.

Submission to the consultation

Note: the subheadings were determined by the consultation, and each could take only a 3,000-character response. Our responses are somewhat redundant because I expect they will be parcelled out to various subcommittees, since the document we're commenting on reads as if it had fragmented authorship.

After reviewing the Interim Report, please provide your feedback on the following sections:

(If you have no input for a specific question below, you may leave it blank)

Opportunities and Enablers

(Maximum 3,000 characters)


First, thank you for doing this work. These comments focus on what should be revised, but we appreciate too what has been achieved.


A primary concern for the document as a whole is consistency in correct attribution of legal and moral agency to those who control AI, through development, deployment, or use. Such attribution is essential for the perpetual iterative improvement of our societies that is the primary goal of all governance. Attribution concerns impact the opportunities of AI just as much as the risks. Maintaining a correct epistemology throughout the report is essential for ensuring moral and legal clarity, including avoidance of “ethics washing.”


We celebrate Box 1; it is a superb example of how AI can be used to create transparency in the most complex system: our ecosystem, including its interactions with our economies and governance. Yet AI itself is presented as unknowable and almost ungovernable. Please apply the exact same Box 1 critical thinking to improving the transparency of governance itself – including of AI – as a second box.


Unfortunately, the present Box 2 lacks consistency and coherence in its framing of AI. To be honest, Box 2 could be deleted, but here are suggestions for improvement in case you do not. 


The heading ‘People-assistive AI’ is not only awkward phrasing, but also mishandles AI agency. Please use ‘Individual opportunities’ to be more parallel with the rest of the section.


At times this section (and others in the report) appears to reduce AI to ‘generative AI’, which presently remains an economically unproven and relatively unimportant subpart of all presently-active intelligent automation. See your own Box 1 for a nice example of a fuller range of AI being used for good.


The subsection on UN opportunities sets the UN apart as weirdly different from the other public sector institutions in the previous subsection. This unnecessary distinction confuses the message and creates redundancy. Particularly given the claims to universality of the principles elaborated, the UN should be treated as exceptional only where that is clearly merited.


Similarly, AI itself is not so exceptional given our centuries of experience with automation. We know from this history that automation does not in itself undermine the dignity of human labour. Rather, automation alters the value of human skills. Rapidly undermining the value of some skills results in a need for active government support, such as for welfare, (re)training, and sectoral innovation. Note there is already evidence that the correct governance framework leads the same corporations that invest in AI also to invest in retraining and retaining their own employees (Battisti, Dustmann, & Schönberg 2023).


Economies may be slower at sufficiently recognising and rewarding which skills have become newly valuable. AI itself might be used to rapidly identify and adjust both wages and educational opportunities, ensuring adequate redistribution through wages. [2965 chars]



Risks and Challenges

(Maximum 3,000 characters)


Portions of this report use the term “AI” as if describing a unitary actor. This is wrong. AI is a disparate suite of technologies allowing us to improve logistics, management, and indeed governance wherever humans apply reasoning. Very few of the problems this report attributes to AI are unique to AI. Failures to address market concentration, lack of transparency in the expression of power, indeed all the forms of bias we train into machine learning – all are pervasive in our data because they are pervasive in our society. The lack of uniqueness to AI of the challenges elaborated draws into question the report's entire framing. For example, does AI threaten language diversity more than the BBC, Le Monde, RT, or Hollywood?


Some bullet categories in this section are overly abstract and clearly redundant. Yet the report largely neglects (though see Box 3's excellent "Individuals" list) known specific issues and challenges, such as declining local community identity and trust, loss of language or culture, and corrosion of moral, ethical, or legal traditions and institutions. Instead, purported universal values AI allegedly threatens are used to motivate protection under a ‘universal’ governance regime. Given the existence of significant jurisdictional diversity, the known importance of diversity to resilience and innovation, and the proven potential for firewalls to exclude services determined to be locally illegal, global consensus on many of the values enumerated seems an unnecessary violence against sovereignty.


The seemingly-important yet contested concern of autonomous weapon systems (AWS) is described only abstractly, making no progress over decades of speculation. Given ongoing conflicts involving highly technically competent nations, by this point concerns about AWS should be motivated by specific, well-documented cases of deployment.


Please don't use social media as a punching bag. It has likely provided more value than harm to date through facilitating communication and government transparency. cf. many papers based on controlled studies. But also realise that wherever social media includes recommendation or search, it deploys AI algorithms. Social media are an important application of AI, not an external example. The EU's Digital Services Act is an important example of good AI governance.


Please delete the sentence including "the science of AI is at an early stage". This misinformation masks decades of study and excellent work. AI is a product; the transparency we need for governance concerns how it is created and tested, as illustrated in the EU's AI Act. Explainability of AI products can reduce other reporting obligations, but not eliminate them. Similarly, claims of unclear liability are false. Responsibility for harms lies with whoever sold the product, unless they can prove it was used inappropriately. Deployers are free to pass liability on to suppliers, but are liable to their customers regardless of whether they can claim damages from the suppliers.


(3000 (may be trouble if form counts differently))


Guiding Principles to guide the formation of new global governance institutions for AI

(Maximum 3,000 characters)


As mentioned in the section on harms, problems both there and in these guidelines show the limits of the almost supernaturalist framing of AI as a unitary actor rather than a suite of techniques. More useful for the report overall would be a framing of AI as an extension of our human capacities of reasoning, action, and perception – one that alters what we can do, and therefore our obligations, especially as governments.


For most purposes, it is probably most useful to ask not “what are AI’s benefits and risks?”, but rather “how have our obligations and governments changed given the advent of AI? What can and should we be doing for residents of member nations?”


This report should call for a process to identify the things that a) truly require global near consensus (e.g. wealth tax) and b) are facilitated by global cooperation (e.g. technical standards for AI transparency, redistribution to IP holders and other data sources). 


A number of deficits in the report’s coverage make parts of it appear almost a caricature of a power grab. For example, it pretends we don’t already have more than a century of study on the limits and impacts of computation, and decades of research on the impacts of digital systems on society. It fails to acknowledge and exploit the outstanding years-long work producing global consensus in the UNESCO recommendation on AI ethics, or the development of a large suite of digital legislation by the EU. By no means should the EU’s laws be applied everywhere. But we cannot pretend to value or fear intelligence while ignoring the utility of such work.


Specific suggested improvements: in GP1, #46, replace “AI” in the second line with “the full benefits of human technology and wealth.” Delete “through AI” in the second to last sentence. Replace “AI” in the final sentence with “global”. Add the sentence “These problems cannot be solved only for AI without being addressed more generally, but AI may be part of their solution.” 

Rephrase GP3: Data governance and the promotion of data commons should be built. GP4: delete “universal”. In #52 replace “universal” with “the widest possible”. Add a concluding sentence: “However, veto players should not be tolerated. Rather, participation should be positively incentivised, e.g. through access to digital products and utilities, or participation in redistribution programs associated with their productivity.” The first sentence of #53 should be its own item; delete “But this is not enough.” In the new para, re the sentence including “growing awareness in the private sector for a…”, cf. previous comments on jurisdictional diversity. There will not be VERY much splintering of the Internet, because deployers won’t choose to serve markets that are too small, so members will aggregate and harmonise jurisdictions. But we must contest oversized actors’ push to create a global jurisdiction that just happens to look like the one where they achieved unsafe levels of power.

GP5 #56 should be about UNESCO’s AI recommendation.

(2992)


Institutional Functions that an international governance regime for AI should carry out

(Maximum 3,000 characters)

Portions of this report – and of AI policy discourse more broadly – use the term “AI” as if it described some sort of unitary actor – a special new thing like a monster from space that needs to be managed by a world drawn together. This is wrong.


An entirely more useful framing than how to control a fictitious unitary actor would be that regulation of and through AI might well help humanity deal with all these social ills wherever they are present. For most purposes, it is probably most useful to ask not “what are AI’s benefits and risks?”, but rather “how have our obligations and governments changed given the advent of AI? What can and should we be doing for residents of member nations?” (Note: “resident” meaning “any human within a member’s borders” as per the UDHR. Citizenship is largely irrelevant to rights obligations as framed by the UN.)


What we should call for is a process to identify the things that a) truly require global near consensus (e.g. wealth tax) and b) are facilitated by global cooperation (e.g. technical standards for AI transparency, redistribution to IP holders and other data sources).


IF 1 #59: The first sentence isn't surprising, because AI is a set of technologies, not a unitary actor. The second sentence is unlikely; just delete it. #61: “The extent of AI’s negative externalities is not fully clear, and cannot be, since they depend on both application area and chosen system architecture. The role of new digital technologies and applications in disintermediating…” Note: “critical social impacts of AI” will not be unitary! Please don’t call AI governance instruments "observatories", as if technology were a natural kind like the stars, over which we have no capacity for control. Again, this panders to those who do not want regulation, and who want you to think AI is opaque and unknowable.


IF 2: This has already been done by the UNESCO recommendation; we don’t need to do it again. IF 3: again, I would prefer this to be about encouraging harmonisation where tractable, and dissemination of accessible known solutions for those who cannot afford innovation, without too much stressing over "fragmentation". IF 4 is great! IF 5: this is mostly only relevant to generative AI (and surveillance). Be sure to avoid lock-in of established businesses. #67: agree wrt talent, not data & compute.


IF 6: #70: Replace the “rogue system” sentence with: “The possibility of a rogue AI system escaping control temporarily cannot be entirely ruled out, and indeed arguably characterises airline pricing anomalies and ‘flash crashes’, though to date such matters have been quickly resolved.” DELETE #71. #72 is good, as is IF 7.


SF 3: replace “models” with “uses”. SF 7: innovation is great where useful, but GOOD governance is the goal, and many good tricks are known and just not yet applied to AI.

SF 10: policy harmonisation is nice where sensible, not sure norm alignment is the kind of thing the UN should be doing. SF 14 silly but mostly harmless. SF 15 is essential!

(2976)


Other comments on the International Governance of AI section (aside from Principles and Functions, covered in above questions)

(Maximum 3,000 characters)


Please make much more use of the UNESCO recommendation, which is in my opinion the leading extant effort, with the possible exception of the full suite of EU digital legislation, which goes well beyond the AI Act – including the GDPR, DSA, and DMA, but also the (minor!) revisions needed to the Product Liability Directive, the Data Act, and the digital finance strategy. It’s not that this exact legislation makes sense for other jurisdictions; it’s that good work has been done in identifying concerns, which go well beyond those presently in this report.


In Figure 1, do not miss out algorithms! What is recommended and how; who pays for and receives targeted advertising. Remember that we want AI governance to catch – or better, prevent – scandals like Cambridge Analytica, the UK’s Post Office scandal, or the Dutch benefits scandal. LLMs haven't caused that scale of harm, and are unlikely to if they don't prove more useful than costly at some point.


#40 The need for binding rules is MUCH less debated than whether AI is really an existential threat. Please embrace the former, and not the latter. Similarly, I'm pretty sure it's widely accepted in law and political science that government addresses KNOWN harms. Therefore it is normal and acceptable that regulation should lag implementation.


Further lessons from the EU effort – don't let people try to solve universal or essentially non-AI problems by calling them AI problems. See for example the success at limiting how much text was needed about liability in the AI Act through minor modifications of the Product Liability Directive. I do believe we also need some truly novel innovations of governance. Specifically, how do we govern the transnational utilities that many digital services have become? Unfortunately, theory development on utilities seems to have stopped in about 1980, as Chicago School economics cashed in on the plateauing of the Soviet economy, and with it the then-perceived economic failure of communism. China has since shown us that more governance innovation is possible, but China and the West share most of the concerns addressed in this report. All governments benefit from legitimacy and trust, and are challenged and disrupted by excessive inequality. Having said this about innovating transnational governance useful to AI though not limited to it, it is not evident to me that these innovations must be operated from some UN or global body. Rather, the UN might help coordinate the efforts of others.


Please note: this document is in reality coauthored by Joanna Bryson and Barbora Bromova.


(2473)


Any other feedback on the Interim Report

(Maximum 3,000 characters)

First, thank you for doing this work.


An enormous category error corrodes many (but by no means most!) of the report’s suggestions. AI is not some unitary actor. It is not an actor, and it is not unitary. It is a set of loosely related programming techniques, plus a variety of digital services that each deploy some aspects of those programming techniques. Sentences like "The full impacts of AI are not yet known" are nonsensical, because impacts are determined both by application areas and by the specific systems architecture chosen to implement those applications. We can expect ongoing innovation and extension of applications, the impacts of which will have to be individually checked as with all new products. We might hope to make architectural recommendations that will have some resilience and uptake, such as avoiding unnecessary storage of data or excessive use of energy.


This flaw in understanding could be addressed relatively easily and without excessive modification if, instead of making this report about governing AI, we made it about governing in an age of AI. Many of the core motivating issues addressed here go well beyond the application of AI into wider matters of justice, rights, and sustainability. Yet AI could help us more successfully address those issues.


One of my favourite aspects of the document is the outstanding work in Box 1, on all the ways AI could be used to resolve the climate crisis. Yet AI itself is presented as unknowable and almost ungovernable. In reality, AI is a lot less complicated than climate, and no more tangled with our economy.


The other overarching issue is that too much of the text serves the interests of big tech and surveillance at the cost of diversity, resilience, and innovation in governance, and at the cost of national sovereignty. While I agree the UN may be uniquely positioned to help negotiate the really hard and essential matters such as transnational redistribution of value due to IP and data originators, or indeed wealth taxes on billionaires, in general I would prefer emphasis on respect for diversity of sovereign jurisdictions.  


Good governance requires investment. To date, the transnational nature of AI and of billionaires seems to undermine adequate redistribution. This problem of wealth (re)capture may present a real role for the UN: ensuring adequate redistribution such that governments can meet these challenges – for example, introducing a global wealth tax on the most excessive individual fortunes, or negotiating treaties for effective governance, including taxation, of those generally AI-laced digital services that have become global essential infrastructure. The UN could also encourage members to perform appropriate within-country governance, for example by reinforcing those who demonstrate egalitarian application of the UDHR, e.g. those who ensure that the benefits of healthcare and utilities – such as power, water, and information – are universally accessible.


(2977)




