
You can now ignore the below unless you are interested in the writing process: the final version is here and much more lovely.

https://ojs.weizenbaum-institut.de/index.php/wjds/article/view/3_3_8/111
This is the submitted version of an article accepted by, and soon to appear in copy-edited form in, the Weizenbaum Journal of the Digital Society. I’m publishing this scruffy preprint early because of the last-minute debate about the EU’s AI Act.


Human Experience and AI Regulation: What European Union Law Brings to Digital Technology Ethics

Joanna J. Bryson

Hertie School, Centre for Digital Governance

January 25, 2024


Solicited for the Centennial Issue

Abstract

Although nearly all AI regulatory documents now make reference to the importance of human-centring of digital systems, artificial intelligence (AI) ethics itself is frequently reduced only to concerns of bias and perhaps power consumption. While both are of immense importance—altering human lives and our ecosystem—the ethical and regulatory challenges and obligations relating to AI do not stop with these. Joseph Weizenbaum himself describes not only the potential abuse of intelligent systems for making inhuman cruelty and acts of war more emotionally accessible to human operators, but also the necessity of solving the social issues that facilitate violent acts of war, and the potential of computers to be used for this good purpose. In this paper I review how the EU’s digital regulatory legislation may well help to address all these concerns, should it be well enforced. I begin by reviewing why the EU is leading in this area, and the legitimacy of its actions both regionally and globally. I then review the legislation already protecting us: the General Data Protection Regulation, the Digital Services Act, and the Digital Markets Act, and their importance to achieving Weizenbaum’s goals. Finally I discuss the nearly-promulgated AI Act, and conclude with a brief discussion of the potential for future enforcement and more global regulatory cooperation.


  1. Introduction: The Present Technological Context

Almost a quarter of the way into the Twenty-First Century, we suddenly face the world of Joseph Weizenbaum’s nightmares. AI conversational agents are seemingly everywhere, being attributed all sorts of powers—sincerely or insincerely—by people with varying degrees of knowledge concerning AI, let alone (prior) experience of it. Moreover, these attributions come from people with very varied motivations. Some are there to sell, others to buy, some to regulate, others to evade regulation or taxation. Some are hoping for a new form of progeny; some are trying to end the phenomenon of death; some are looking to replace the human species. Most of course are just looking for some combination of efficient productivity and personal entertainment, but this does not obviate the hazards of moral confusion over AI.

These may be the least of our planet’s problems. We are also suffering mass civil displacements and deaths, including but not limited to contexts of both offensive and civil wars; an escalating climate crisis; further ill health and biodiversity collapse deriving from many polluting and otherwise unsustainable industrial and consumption practices; and finally a geopolitical contraction of trust. The expansion of powers our innovation has given our species places us seemingly in an endless cycle of needing to further augment our collective intelligence. Humanity and the rest of our world’s ecosystem need us to produce substantial, radical, sustainable change as quickly as we can without creating further devastation. Worldwide peace and equity were also Weizenbaum’s largest AI-ethics concerns, particularly the abuse of computers to amplify emotionless destruction via intelligent weapons systems, but also their potential utility in solving resource scarcity and redistribution (Weizenbaum, 1986).

While we seek to understand the landscapes of solutions and problems generated by innovation, we must remember that AI is not the only new source of intelligence. Much of the increasing pace of change might be attributed to our ever-wider access to fellow humans. Between 1918 and 2018, we moved half of humanity out of extreme poverty, dropping the proportion from 60% to 10% at the same time that the absolute number of humans was quadrupling. Almost half that percentage drop has happened since 1995 (Roser, 2021). Simultaneously, we are widening access to both education and information and communication technology (ICT)—as of 2023, over 65% of people have some access to the Internet (DataReportal et al., 2023). One powerful form of ICT is social media, some of which provide important new ways to communicate and collaborate between experts and peers who might otherwise never have discovered one another. Further, those of us lucky enough to have sufficient bandwidth or local computation available now have access to startlingly accurate translation. Physical transportation has also become increasingly affordable. All this opens windows to insight in ways never before imaginable.

Yet while education, transport, and access to information are entangling and enhancing minds in ways we can barely conceive, wide swathes of humanity are losing familiar liberties. This is due not only to increasingly pervasive surveillance, but even worse to a hardening of governance styles. The many governments that now seem ready to both acquire and exercise enhanced capacities to surveil become hazardous even if they presently use these skills benignly. Domestic autocratic consolidation of political power can be achieved not only through eliminating or undermining political opposition figures, but also by disrupting political organisation, and by reducing the life chances of “wrong-thinking” individuals, such as academics studying topics presently considered dangerous to a regime. Internationally, long-used strategies of propaganda and other means of interfering in the affairs of competing (and even cooperating) states have seen escalation. Advances in AI also make it easier to identify those in foreign countries susceptible to influence, and to model the outcomes of such interventions, including on elections.

There is, though, also substantial reason for hope. Globally, both greenhouse gas emissions per person and the number of persons seem to be levelling off, and may even soon decline somewhat.1 We have largely healed the hole in the ozone layer, we are increasingly able to treat diseases including cancer, and as mentioned earlier education and equity are both showing positive trajectories overall. The same technologies improving governments’ (and other organisations’) capacities to surveil and repress can also be used for all the other applications of informing and control. Such applications include the increase of justice, representation, and democratic expression. In many jurisdictions (including some we categorise as autocracies), digital technology has been used to simplify access to government services, including the reporting of problems. We see on a global scale increasing innovation of and accessibility to commercial digital services including email, video conferencing, and automated search. There ought then to be means to ensure that communication technology is used for creating transparency—or as some now call it, legibility (Pilling et al., 2023)—for ordinary citizens, allowing us to understand the world around us, or at least the actions of our governments, corporations, and AI systems. We should not only be able to understand the intelligent technology we use, but be able to use it to help ourselves work collectively to regulate our ecosystem, economy, and security more generally.

If we start from a functionalist position that an ethics is the set of behaviour that maintains some society, then we can see the problem of maintaining the ethical use of technology to be one of governance—a means by which a society deliberately regulates itself (in contrast to externalised forces such as starvation) and ensures its own self-preservation. There is at least by treaty global agreement that ethical outcomes require each nation to not only respect but actively defend the fundamental rights of all humans within its borders (including positive rights such as employment and health care; United Nations, 1948). More recently it has also been agreed (also at the UN level) that such universal defence of human rights both mandates and is mandated by the goal of achieving ecological sustainability (United Nations General Assembly, 2015).

1 Note that there is no reason to take such successful regulation as an indication of impending extinction (Roser, 2023).

The largest single—or at least harmonised—jurisdiction presently trying to legislate and enforce a rights-based digital technology ethics is the European Union (EU; Bradford, 2023). In this article, I have already established the basic motivation for why the EU (or indeed, any polity) should want to regulate information technology. I will now discuss why and how the EU has come to be doing this regulation. I will then return to the more Weizenbaum-related question of ensuring that people can understand their AI systems, and be defended against their misuse, including the deployment of anthropomorphism-based deceptive tactics. I will in particular emphasise laws already being enforced: the General Data Protection Regulation, the Digital Services Act, and the Digital Markets Act, though I will also examine briefly what the nascent AI Act contributes. In short, Weizenbaum might be proud: the one requirement for all AI in the EU AI Act is that it should all be clearly identified as such to its users. But the biggest ELIZA question—will the users understand the implications?—is perhaps better solved by the earlier, existing legislation.


  2. Why the EU?

I said just above that the EU is the largest jurisdiction presently trying to legislate and enforce a rights-based digital technology ethics. The reason for the caveats is that there are several other pretenders to this title. The EU is larger by population but not GDP than the US. However, the US is not trying to legislate or enforce technology ethics; rather, it is trying to encourage the digital sector to conform to some standards voluntarily. The EU is (or has recently been) larger by GDP but not population than China, and China is working actively to legislate technology governance. However, China’s focus on rights is limited by its larger focus on stability and security. Fundamental rights and government legitimacy are seen as essential only to the extent that they serve this primary goal. China’s argument is structurally the same as “put your own oxygen mask on first”: without a state, there is no one to defend individual rights.

Europe’s greater focus on individual human rights is at least partly an outcome of many horrific centuries of war. These so far seem to have culminated with the Twentieth Century, during which mass killings were more likely to be effected against you by your own state than by somebody else’s (Rummel, 1995; Valentino, 2004). Here we talk not only about death camps or death marches, but also about policy-driven starvation, often under the guise of collectivised farming. Mao and Stalin both managed to kill more people than Hitler in absolute terms, and sadly many other countries managed to kill a higher proportion of their own residents.2 The European Union, although set up as a trade organisation, not a security one,3 was explicitly designed to bring an end to wars within Europe, particularly between member states. The EU has been viewed as sufficiently successful in this goal that it was awarded the Nobel Peace Prize in 2012.

However, the European focus on human rights may not be a simple consequence of memory of collective trauma—which sadly would be shared more globally—but also a reflection of strategy. The EU has roughly 20% of the world’s GDP,4 yet less than 6% of its population. Investing relatively heavily in each individual may therefore be any combination of a strategic necessity, a winning economic strategy, or the luxury of a wealthy region. Investing in the well-being of even minority populations certainly seems to be an essential attribute of strong democracies, though here too the direction(s) of causality may be complex (Rovny, 2023; Gibler and Owsiak, 2018).

 

2 Following from the Universal Declaration of Human Rights (UDHR), I focus on residents here rather than citizens to avoid questions of which individuals ‘should’ have citizenship. The UDHR creates a world wherein every individual is owed the protection of at least one state: whichever state they are standing in at the moment—assuming, of course, that there are no failed states, and that all territory of the Earth has some responsible government.

3 EU security is broadly though not entirely guaranteed by individual member states’ membership of NATO, a partnership that presently includes the US.

4 Estimates vary; the International Monetary Fund said 22% in 2019.

Most people who question why or indeed whether the EU should be regulating global technology focus not on the EU’s internal motivation, nor indeed on its (globally-shared) mandate under international convention. Rather, the question is why a region that has no leading AI companies (where leading is defined by size) should be the one that regulates AI. If we shift that question to be about whether the EU itself has competence in AI, then in fact it does. The EU not only produces more AI PhDs than any other comparable global region, it also produces numbers of WIPO-defended AI patents comparable to China’s (Bryson and Malikova, 2021; Dorfs and Bryson, 2024), and further, the aggregate market capital of the companies that hold them is comparable to the aggregate market capital of the (more concentrated) Chinese companies. Interestingly, the rest of the world (excluding the US) outweighs the sum of the EU and Chinese capacities on both of these metrics, and the US dominates all other countries combined. Bradford (2023) portrays China, the US, and the EU as three possibly-overlapping empires of AI regulation, which she frames as hardware-driven, market-driven, and rights-driven respectively. Another framing might be surveillance autocracy, surveillance capitalism, and privacy. The problem with any form of surveillance is that the information once stored can be accessed. Governance styles are not necessarily permanent, and indeed large, monopolisable power structures or resources may encourage autocracy.

This brings us back to the question of why the EU does not have very large individual corporations generating its AI. The first reason is that an early democracy, the US, innovated a legal practice called antitrust towards the end of its difficult first century. The US—after much debate—came to the conclusion that too much concentration of power could undermine a democracy’s capacity to govern (Wu, 2018). Antitrust law is intended to ensure that those with dominant positions in a market do not unfairly exploit those advantages to further undermine competition. Badly-behaving (or perhaps just overly large) companies should be disaggregated, or “broken up.” The ideal is that markets should be able to set fair prices and ensure good corporate governance through open competition. Where it better serves the public good to have a single organisation operating at scale, then the market’s capacity to regulate prices and corruption has to be replaced by extra regulatory attention from the government. This is the case for utilities, such as telephones and electricity, and probably also for some categories of digital services.

The second reason the EU has antitrust law is that it was imposed by the US on Germany (and Japan) following the Second World War. The wars were seen as having been caused at least in part by the facilitation of dictators by overly-powerful single companies. Those companies were broken up under the direction of the Allied forces, and the constitutions of the offending countries altered to ensure that antitrust regulation kept the situation from recurring. The EU largely retained, though somewhat adapted, German competition law, and indeed there are occasional conflicts between Germany and the EU on antitrust. But the real question then is why there are such large digital technology companies in the US. Although often attributed to simple network effects in digital ‘platforms,’ there was in fact a deliberate relaxation of the managing of corporations’ scale. The ‘Chicago School’ of antitrust or competition law was first popularised in the late 1970s, just when the Soviet economy peaked (though see Miller, 1962). This school of thought, assuming that only consumer welfare as measured through consumer prices is a suitable concern of government, gradually ascended in power until it was first conspicuously applied at the settlement phase of United States v. Microsoft Corp. in 2001. The decision not to disaggregate Microsoft marked the triumph of the Chicago School in the US. The US has since even sought to block the EU from enforcing the merger laws the US had initially demanded Germany implement.

The result of all this is that the EU not only addresses Weizenbaum’s concerns about peace, it is also presently the best-positioned jurisdiction to address his concerns about the plausibility of making AI ethical, including well understood. Taken collectively, the EU has the scale required to contest the laws of the nations producing the most powerful AI services. It also has the institutions, values, and explicit intentions to focus on the well-being and understanding of ordinary humans, so that we can protect ourselves through our participation in our economies and democracies. Whether these are enough to give the EU competence is an ongoing empirical experiment.


  3. How EU Legislation Works Towards Human-Centred AI

At one time, it was difficult to have a discussion about AI regulation without someone suggesting that it was controversial or even wrong to focus AI ethics on human concerns, with no regard for the AI itself. As the top tiers of international relations, international law, and human rights have become engaged with the problem, it has become more common to emphasise human-centring as being opposed to centring on corporations or perhaps governments, but never really machines. If machines could meaningfully be said to have any interests at all, then because they are artefacts those interests would exist only due to decisions of product design, such as leaving out a backup system for memory. For this reason, Bryson et al. (2017) advocate strongly against constructing law recognising AI interests.

As mentioned in the Introduction, ‘human’-centring in a UN context is now increasingly well understood to also entail sustainability and concern for biodiversity. This makes sense, because human well-being does depend on a healthy environment such as the one our ecosystem tends to stabilise, and on living within our resource constraints. Resource conflict can lead to war, and abhorrent violations of human interests. Our planetary ecology cannot be as readily redesigned as our artefacts can. Similarly, our legal system has evolved from prehistoric times, with deep roots in culture and perhaps biology (de Waal, 1996), so where possible technology should be adjusted to facilitate law, not the other way around. This is why, in the first national-level AI ‘soft law’ (for the UK, in 2011), the second of five principles calls for AI to “. . . be designed and operated as far as is practicable to comply with existing laws and fundamental rights and freedoms, including privacy” (Boden et al., 2011; Bryson, 2017, 2018). This principle was adapted for the second (also of five) principles of the OECD (and G20): “AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards—for example, enabling human intervention where necessary—to ensure a fair and just society” (OECD, 2019).

Both sets of principles also include a principle dedicated to transparency. The British (fourth) principle insists that AI systems “should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent.” The OECD/G20 somewhat softened the language concerning exploitation, but instead require not only transparency, but “responsible disclosure” and that users know they can “challenge outcomes” of a system. Such ‘recommendations’ and soft law have not proved adequate to date; even widespread and horrific automated miscarriages of justice have proven extremely difficult to challenge (Wallis, 2021; Peeters and Widlak, 2023). The EU has two pieces of legislation in place that address this: the General Data Protection Regulation (GDPR), fully in force since 2018, and the Digital Services Act (DSA), which began coming into force in 2023.

The vast majority of commercial AI is best understood as extensions of the corporations that provide it, sometimes even at no explicit financial cost. Our homes, laptops, and pockets contain microphones and cameras: eyes and ears of corporations, and of some governments. Even in countries like the US where direct government gathering of data is prohibited, the government may either purchase (Pasquale, 2015) or steal (Bauman et al., 2014) such data. The EU’s GDPR recognises that personal data is to persons as airspace is to nations—new weapons technology makes its defence essential to security. Privacy is not ‘only’ essential for human well-being, personal growth, and a robust and creative society (Cohen, 2013; Bryson, 2020). Personal data must also be defended because otherwise foreign and commercial agencies have undue access to, and even potentially some control of, what ought to be sovereign—the behaviour of citizens and residents.

The GDPR not only addresses Weizenbaum-like concerns by defending privacy, but also by reifying and requiring explicit consent, and by insisting on transparency about how data is collected and processed (it also provides a right to correction of mistaken data), as well as a right to review if a decision is made about a data subject in an “entirely automated” way. The GDPR also first demonstrated to the world that the EU was in fact able to govern and protect its residents from harm due to foreign commercial entities. While entities like Microsoft and Google attempted to disrupt the GDPR’s completion, threatening to withdraw their services from the Union, they ultimately preferred access to the 450,000,000 relatively wealthy individuals in the EEA, and (at least approximately) complied (Bradford, 2020).

The GDPR, though, has not proven sufficient in itself to ensure citizens are not being manipulated. For this reason, the DSA was designed to give corporations proactive obligations to demonstrate the lack of harms created by their services. The DSA is set up specifically to handle the profiling of users, and how this is reflected in targeted advertising and recommendations—what elements of social media or search results are shown to an individual. It would be impossible for the EU to play ‘cops and robbers’, chasing down and inspecting every part of AI businesses’ processes. But what the Union does do is mandate a set of business practices that can be made subject to occasional inspection, not only after events that lead to calls for investigation, but proactively. The DSA for example encourages corporations to consider and address the risks their services generate, leaving the EU to just ‘check the work.’ It also has a series of other reporting requirements, for example concerning content moderation practices, to ensure that these are compliant with EU law. Governance is a collaborative process, and it is in the interest of everyone that it is successful, and that the host society is secure and profitable. It is also in the interest of all parties that commercially-provided services are resilient, useful, and safe. Hopefully, experience of the benefits of the DSA will lead global organisations to ensure similar laws apply in other jurisdictions, where they might otherwise have to compete against less-ethical local opponents (a critical part of the ‘Brussels Effect’; Bradford, 2020).

The AI Act is almost an afterthought—I sometimes think it was designed as a decoy so that the Digital Services and Markets Acts (more about the DMA below) could be brought into force relatively unencumbered by lobbying. The AI Act in my opinion achieves just three interesting things:

  • The AIA finally clarifies that digital products are products, and within the remit of product law. That is, corporations are required to do due diligence, to avoid established bad practice, and to emulate best practice. Product law is a simple solution to the supposed problem of how to keep law governing complex products up to date: it is the sector that establishes what due diligence and best and worst practice are, though admittedly in cooperation with justice departments. Competitors do not need to worry about a ‘race to the bottom’; they can establish good practice, publish it, and then their sector is obliged to improve with them. Where the AIA considers AI ‘high risk’ (that is, likely to be used to make decisions altering the courses of human lives, e.g. on education, healthcare, or access to financial instruments), it also mandates the sorts of records that need to be stored such that product liability can be more easily defended and enforced.
  • The AIA also determines what AI services are considered incompatible with the EU’s emphasis on human or fundamental rights. Generally these again concern privacy. For example, there is to be no database maintaining records of the location of every human being, or of their ‘social credit score’. Nor should there be a means of localising any arbitrary individual, though there may be surveillance for specific, named individuals such as terrorism suspects or missing children.
  • Finally, the AIA has only one mandate for all AI in the EU—that it should be identified as such. No one in the EU should ever mistakenly believe that they are collaborating with a human when they are really interacting with an artefact.


  4. Peace, Equity, and Enforcement: Conclusions

Human justice only has the capacity to hold adult humans to account—its penalties only persuade living social organisms that can understand its language (Bryson et al., 2017). So having ‘value-aligned’ AI (van den Hoven, 2007; van Wynsberghe, 2013) must mean that the technology expresses not its own values, but the mutable values of those that own and operate it. For ensuring those owners and operators comply with human interests (including keeping up with changing mores), we have the law. But can the law be sufficient given the power of the companies producing some (but nowhere near most; Bryson and Malikova, 2021) AI products? I am persuaded by political philosophy like that of Gowder (2016) and Wu (2018) that justice requires enough equity that obligations can be enforced. How to handle transnational infrastructure and public goods is an enormous legal and diplomatic challenge, one that will need to be surmounted if we are going to solve sustainability and limit warfare while defending freedom of thought. We have been in the situation where we had to defend ourselves against such grotesque levels of inequality before, and eventually (following two world wars and a financial crisis) we did a pretty good job of addressing them. We achieved a long period of relative political-economic stability following the Bretton Woods agreement, due in part to increasing justice through equitable participation (Fraser, 2006; James, 2017), and in part to enforcing antitrust law (Wu, 2018). More recently, not only have we succeeded in widespread vaccination during the COVID pandemic, but in so doing we seem also to have reduced the influence of populism everywhere in the world—except in the US (Foa et al., 2022).

The US is not enforcing one of its own innovations for maintaining democracy: antitrust law. This is part of the reason the EU has had to be bold in rising to the challenge of regulating technology that stems more from the US than from the rest of the world combined (Bryson and Malikova, 2021). It is also the purpose of the final piece of EU legislation I want to mention here: the Digital Markets Act (DMA). I was originally concerned about why the EU was creating an alternative mechanism for enforcing competition law rather than strengthening support for its existing Directorate-General for Competition. But the DMA is actually a very interesting piece of legislation. It allows for more agility of enforcement than US law or even previous EU law. Companies that behave anticompetitively can become subject to stronger sanctioning, eventually leading to their disaggregation, or they can become subject to weakening enforcement as they find ways to transparently demonstrate their trustworthiness and compliance. This is the legislation of a new age, one embracing the potential for ICT to increase justice, agility, and cooperation.

Ensuring the will and capital to enforce the EU’s new digital legislation will be an ongoing challenge, one we should all hope the EU is up to. The current draining of resources by Russia’s wars of aggression, on Ukraine and with it the ecosystem, is obviously an enormous challenge for the EU and many other nations, particularly of course Ukraine. Nevertheless, the world is literally and quite explicitly watching to see what the EU can achieve with its DSA and DMA. In the longer term, if the EU (or some other power or powers) proves successful in regulating AI—including in making its ‘machine nature’ adequately transparent so as not to conflict with human relationships and well-being—we can all be grateful. And we can further hope that all nations will find ways to update their constitutions and governance styles such that they too can treasure and defend the human experience.


  5. Acknowledgements

Thank you to the websites (and the projects behind them) of Our World in Data. Thank you to Martin Krzywdzinski for the honour of this invitation, and his patience and persistence. Thank you to Helena Malikova for teaching me a great deal about antitrust, and power.


References

Bauman, Zygmunt, Didier Bigo, Paulo Esteves, Elspeth Guild, Vivienne Jabri, David Lyon, and R. B. J. Walker (2014). After Snowden: Rethinking the impact of surveillance. International Political Sociology 8(2), 121–144.

Boden, Margaret, Joanna Bryson, Darwin Caldwell, Kerstin Dautenhahn, Lilian Edwards, Sarah Kember, Paul Newman, Vivienne Parry, Geoff Pegman, Tom Rodden, Tom Sorell, Mick Wallis, Blay Whitby, and Alan Winfield (2011). Principles of robotics. The United Kingdom’s Engineering and Physical Sciences Research Council (EPSRC).

Bradford, Anu (2023). Digital Empires: The Global Battle to Regulate Technology. Oxford: Oxford University Press.

Bradford, Anu (2020). The Brussels Effect: How the European Union Rules the World. Oxford: Oxford University Press.

Bryson, Joanna J. (2017). The meaning of the EPSRC Principles of Robotics. Connection Science 29(2), 130–136.

Bryson, Joanna J. (2018). Patiency is not a virtue: the design of intelligent systems and systems of ethics. Ethics and Information Technology 20(1), 15–26.

Bryson, Joanna J. (2020). The artificial intelligence of ethics of AI: An introductory overview. In M. D. Dubber, F. Pasquale, and S. Das (Eds.), The Oxford Handbook of Ethics of AI, Chapter 1, pp. 3–25. Oxford: Oxford University Press.

Bryson, Joanna J., Mihailis E. Diamantis, and Thomas D. Grant (2017). Of, for, and by the people: the legal lacuna of synthetic persons. Artificial Intelligence and Law 25(3), 273–291.

Bryson, Joanna J. and Helena Malikova (2021). Is There an AI Cold War? Global Perspectives 2(1), 24803.

Cohen, Julie E. (2013). What privacy is for. Harvard Law Review 126, 1904–1933.

DataReportal, We Are Social, and Meltwater (2023). Worldwide internet user penetration from 2014 to October 2023. Statista. https://www.statista.com/statistics/325706/global-internet-user-penetration/.

de Waal, Frans B. M. (1996). Good Natured: The origins of right and wrong in humans and other animals. Cambridge, MA: Harvard University Press.

Dorfs, Wiebke and Joanna J. Bryson (2024). Global artificial intelligence competition: Examining current state and drivers. In preparation.

Foa, Roberto S., Xavier Romero-Vidal, Andrew J. Klassen, Joaquin Fuenzalida Concha, Marian Quednau, and Lisa Sophie Fenner (2022). The great reset: Public opinion, populism, and the pandemic. Technical report, Centre for the Future of Democracy, Cambridge University, Cambridge, UK.

Fraser, Nancy (2006). Reframing justice in a globalizing world. In J. Goodman and P. James (Eds.), Nationalism and Global Solidarities, pp. 178–196. Routledge.

Gibler, Douglas M. and Andrew P. Owsiak (2018). Democracy and the settlement of international borders, 1919 to 2001. Journal of Conflict Resolution 62(9), 1847–1875.

Gowder, Paul (2016). The Rule of Law in the Real World. Cambridge University Press.

James, Harold (2017). Bretton Woods to Brexit: The global economic cooperation that has held sway since the end of World War II is challenged by new political forces. Finance & Development 0054(003), A002.

Miller, H. Laurence (1962). On the “Chicago school of economics”. Journal of Political Economy 70(1), 64–69.

OECD (2019). Recommendation of the council on artificial intelligence. Technical Report OECD/LEGAL/0449, Organisation for Economic Cooperation and Development (OECD) Legal Instruments, Paris. Includes the OECD Principles of AI.

Pasquale, Frank (2015). The Black Box Society: The Secret Algorithms that Control Money and Information. Harvard University Press.

Peeters, Rik and Arjan C. Widlak (2023). Administrative exclusion in the infrastructure-level bureaucracy: The case of the Dutch daycare benefit scandal. Public Administration Review 83(4), 863–877.

Pilling, Franziska, Haider Ali Akmal, Joseph Lindley, Adrian Gradinar, and Paul Coulton (2023). Making AI-infused products and services more legible. Leonardo 56(2), 170–176.

Roser, Max (2021). Extreme poverty: How far have we come, and how far do we still have to go? Our World in Data. https://ourworldindata.org/extreme-poverty-in-brief.

Roser, Max (2023). Demographic transition: Why is rapid population growth a temporary phenomenon? Our World in Data. https://ourworldindata.org/demographic-transition.

Rovny, Jan (2023). Antidote to backsliding: Ethnic politics and democratic resilience. American Political Science Review, 1–19.

Rummel, R. J. (1995). Democracy, power, genocide, and mass murder. Journal of Conflict Resolution 39(1), 3–26.

United Nations (1948). Universal declaration of human rights. Technical Report Resolution 217 A (III), UN General Assembly, Paris.

United Nations General Assembly (2015). Transforming our world: The 2030 agenda for sustainable development. Technical Report A/RES/70/1, United Nations. Resolution adopted by the United Nations General Assembly at its sixty-ninth session; describes the 17 Sustainable Development Goals (SDGs).

Valentino, Benjamin A. (2004). Final solutions: Mass killing and genocide in the 20th century. Cornell University Press.

van den Hoven, Jeroen (2007). ICT and value sensitive design. In P. Goujon, S. Lavelle, P. Duquenoy, K. Kimppa, and V. Laurent (Eds.), The Information Society: Innovation, Legitimacy, Ethics and Democracy. In Honor of Professor Jacques Berleur s.j., Boston, MA, pp. 67–72. Springer US.

van Wynsberghe, Aimee (2013). Designing robots for care: Care centered value-sensitive design. Science and Engineering Ethics 19(2), 407–433.

Wallis, Nick (2021). The Great Post Office Scandal: The Fight to Expose A Multimillion Pound Scandal Which Put Innocent People in Jail. Bath Publishing Limited.

Weizenbaum, Joseph (1986). Not without us. ACM SIGCAS Computers and Society 16(2-3), 2–7.

Wu, Tim (2018). The Curse of Bigness. Columbia Global Reports.


This was Wednesday. Tonight I'm speaking in Amsterdam if you want to come argue about this.

