Living with AGI

I was at a Chatham House Rule meeting on AI and Law / Governance last weekend. Everyone was smart, but a minority openly believed in Artificial General Intelligence (AGI). This was mostly magical thinking, assuming that infinite computation could be had for free; one person even said they expected the world to be "post-scarcity" within 10 years. We will never be post-scarcity for time or space, though I can believe that, given the limits the nature of our species sets on those two things, we could be effectively post-scarcity on energy, maybe soon. But even if we achieve that, it will only be the beginning of a solution for sustainability.

Still, it was interesting to see what work these individuals' beliefs about AGI were doing in their theories of governance. Earlier this year, I wrote that there were two definitions of AGI, which I addressed and dismissed in a footnote, but now I see people are using a third definition that we cannot entirely dismiss.

I want to delineate these here, mostly to draw attention to what is happening with AI now. In so doing, I will point out that I predicted 2016 in 2014, in a book chapter published in 2015.

Four Definitions of AGI

  1. A pejorative used against people working on AI, which assumes that these people have "failed" because they haven't been working on the "right problems" or don't care about the "big picture." This usage was most widespread from about 1995 to 2015; you don't hear about AI having failed so much these days. Update August 2019: people are pointing out that the Wikipedia / official-page definition of AGI is basically still this one. But many people who defend this definition still seem to slip into definitions 3 & 4 below fairly quickly in discussion.
  2. The assumption that AI is just math and that we are converging on the one true algorithm that will make us omniscient. This entails that the first country or company to discover this magic power will be omnipotent as well. It is just flat wrong; AI is computation, not math: it takes time, space, and energy. As Chris Bishop says, the job of machine learning is not finding one algorithm, it's learning to optimise which algorithm is best applied to which problems. See further the No Free Lunch theorems, and the illustrative sketch below.
  3. The assumption that as machines become more intelligent they become more humanlike, so that as AI improves, humans and machines will become identical. This is also just dead wrong. Nothing is going to make machines that are exactly like apes, and no machine will share motivations and percepts with humans as much as chimpanzees, dogs, or even fish do. This is not mathematically impossible like definition 2, but it is so spectacularly unlikely that we would build (or cause to be built) a machine exactly like a human that it can equally well be dismissed. Definitions 2 & 3 are the ones I dismissed in my 2018 footnote.
  4. The final definition of AGI, the one I observed this weekend, is rooted in definitions 2 & 3 above, but has merged with superintelligence: the observation that if a system learns how to learn, it will compound information and grow and change exponentially (a minimal sketch of this logic follows the list). Here the mathematical logic is sound. My problem with superintelligence is not that it isn't coherent, but that people over-anthropomorphise it and shunt it into the future. Superintelligence describes human culture perfectly. It's already here, and already having unintended side effects.
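To see why that logic is coherent, here is a minimal sketch (my illustration; the constant growth rate r is an assumed simplification): if a system's capacity to acquire capability is proportional to the capability it already has, growth is exponential.

```latex
% Minimal compounding-growth sketch for definition 4. Assumption: capability K
% grows at a rate proportional to current capability, with constant rate r.
\[
  \frac{dK}{dt} = rK
  \qquad\Longrightarrow\qquad
  K(t) = K(0)\, e^{rt}, \quad r > 0 .
\]
% The equation is silent on what the "system that learns how to learn" is;
% as argued below, human culture already fits the description.
```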
Thus many of my colleagues, as they discussed a near-future AGI that took over various problems or powers from humanity, wound up re-describing already-existing problems of governance and law, with the addition of technology. Some of that work is very useful, but we need to embrace the fact that we are describing ourselves, the superintelligent species, the apes with the AI, because we need to address these problems now, not prepare for them in 20 years. And possible solutions do not include agency or consciousness detectors riding on top of every neural network. Unfortunately, it's more complicated than that.
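Returning to definition 2, here is the illustrative sketch promised above: a minimal, hedged example of the algorithm-selection view of machine learning. The dataset and candidate models are arbitrary choices of mine, and it assumes scikit-learn is installed; the point is only that which algorithm does best is an empirical question about the particular problem, as the No Free Lunch theorems imply.

```python
# A minimal sketch of algorithm selection: machine learning in practice is less
# about one universal algorithm and more about empirically choosing which
# algorithm suits which problem. Dataset and candidates are illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

candidates = {
    "logistic regression": LogisticRegression(max_iter=5000),
    "k-nearest neighbours": KNeighborsClassifier(),
    "decision tree": DecisionTreeClassifier(random_state=0),
}

# Which candidate is "best" is a property of this dataset, not of the
# algorithms in isolation; on another problem the ranking may well differ.
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean cross-validated accuracy {scores.mean():.3f}")
```

On a different dataset a different candidate can come out on top; there is no single algorithm waiting to be "discovered" that dominates everywhere.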

Problems of living with A(G)I

Below is a more detailed description of how society is altering under AI and modern ICT. I wrote this in 2014, it was published in 2015, and it was then made manifest in a couple of elections in 2016. It's 2018 now. We need to fold all this exciting work on AGI into the perhaps seemingly more mundane, but actually fantastically beautiful and complex, problems of governance and education we have in front of us today. The full book chapter is here: https://link.springer.com/chapter/10.1007/978-3-319-15515-9_15 ; an open access version is here: http://opus.bath.ac.uk/41580/

15.5 The Impact of AI on Human Cooperation and Culture
My main objective in this chapter is this: to convince you that AI is already present and constantly, radically improving; and that the threats and promises that AI brings with it are not the threats and promises media and culture have focussed on, of motivated AI or superintelligence that of itself starts competing with humans for resources. Rather, AI is changing what collective agencies like governments, corporations and neighbourhoods can do. Perhaps even more insidiously, new affordances of knowledge and communication also change what even we as individuals are inclined to do, what we largely unconsciously think is worth our while. ‘Insidious’ is not quite the right word here, because some of these effects will be positive, as when communities organise to be safer and more robust. But the fact that our behaviour can radically change without a shift in either explicit or implicit motivations—with no deliberate decision to refocus—seems insidious, and may well be having negative effects already.
As I indicated in Sect. 15.2, we are already in the process of finding out what happens when our ability to read and predict the behaviour of our fellows constantly improves, because this is the new situation in which we find ourselves, thanks to our prosthetic intelligence. Assuming the output of commercial AI remains available and accessible in price, then the models of the previous section tell us we should expect to find ourselves more and more operating at and influenced by the level of the collective. Remember that this is not a simple recipe for world-wide peace. There are many potential collectives, which compete for resources including our time. Also, it is possible to over-invest in many, perhaps most, public goods. The models of Roughgarden and Taylor describe not systems of maximal cooperation, but rather systems of maximising individual benefit from cooperation. There are still physical and temporal limits to the number of people with whom we can best collaborate for many human goals (Dunbar 1992; Dunbar et al. 2009). We might nevertheless expect that our improved capacity to communicate and perceive can help us to achieve levels of cooperation not previously possible for threats and opportunities that truly operate at a species level, for example response to climate change or a new pandemic.
Our hopes should, though, also be balanced and informed by our fears. One concern is that being suddenly offered new capacities may cause us to misappropriate our individual investments of time and attention. This is because our capacity for cooperative behaviour is not entirely based on our deliberating intelligence or our individual capacity for plasticity and change. Learning, reasoning and evolution itself are facilitated by the hard-coding of useful strategies into our genetic repertoire (Depew 2003; Rolian 2014; Kitano 2004). For humans, experience is also compiled into our unconscious skills and expectations. These are mechanisms that evolution has found help us address the problems of combinatorics (see the first paragraph of Sect. 15.4.2). But these same solutions leave us vulnerable to certain pathologies. For example, a supernormal stimulus is a stimulus better able to trigger a behaviour than any that occurred in the contexts in which the paired association between stimulus and response was learned or evolved (Tinbergen and Perdeck 1950; Staddon 1975). Supernormal stimuli can result from the situation where, while the behaviour was being acquired, there was no context in which to evolve or learn a bound for the expression of that behaviour, so no limits were learned. The term was invented by Tinbergen to describe the behaviour of gull chicks, who would ordinarily peck the red dot on their parent’s bill to be fed, but preferred the largest, reddest head they could find over their actual parents’. Natural selection limits the amount of red an adult gull would ever display, but not the types of artefacts an experimental scientist might create. Similarly, if a human drive for social stimulation (for example) is better met by computer games than real people, then humans in a gaming context might become increasingly isolated and have a reduced possibility to meet potential mates. The successful use of search engines—quick access to useful information—apparently causes a reduction in actual personal memory storage (Ward 2013). This effect may be mediated by the successful searcher’s increased estimation of cognitive self-worth. Though Ward describes this new assessment as aberrant, it may in fact be justified if Internet access is a reliable context.
The social consequences of most technology-induced supernormal stimuli will presumably be relatively transient. Humans are learning machines—our conscious attention, one-shot learning, and fantastic communicative abilities are very likely to spread better-adapted behaviour soon after any such benign exploitation is stumbled over. What may be more permanent is any shift between levels of agency in power, action, and even thought as a consequence of the new information landscape. The increased transparency of other people’s lives gives those in control more control, whether those are parents, communities or school-yard bullies. But control in this context is a tricky concept, linked also with responsibility. We may find ourselves losing individual opportunities for decision making, as the agency of our collectives becomes stronger, and their norms therefore more tightly enforced.
The dystopian scenarios this loss of individual-level agency might predict are not limited to ones of governmental excess. Currently in British and American society, children (including teenagers) are under unprecedented levels of chaperoning and ‘protection’. Parents who ‘neglect’ their children by failing to police them for even a few minutes can be and are being arrested (Brooks 2014). Lee et al. (2010, special issue) document and discuss the massive increase over the last two decades in the variety as well as duration of tasks that are currently considered to be parenting. Lee et al. suggest that what has changed is risk sensitivity, with every child rather than only exceptional ones now being perceived as ‘at-risk’, by both parents and authorities. This may not be because of increased behavioural transparency afforded by technology and AI. Another possible explanation is simply the increased value of every human life due to economic growth (Pinker 2012). But what I propose here is that the change is not so much in belief about the possibility of danger, as in the actuality of afforded control. Social policing is becoming easier and easier, so we need only assume a fixed level of motivation for such policing to expect the amount of actual policing to increase.
Another form of AI-mediated social change that we can already observe is the propensity of commercial and government organisations to get their customers or users to replace their own employees. The training, vetting and supervision that previously had to be given to an employee can now be mostly encoded in a machine—and the rest transferred to the users—via the use of automated kiosks. While the machines we use to check out our groceries, retrieve our boarding cards and baggage tags, or purchase post office services may not seem particularly intelligent, their perceptual skills and capacities for interaction are more powerful and flexible than the older systems that needed to be operated by experts. Of course, they are still not trivial to use, but the general population is becoming sufficiently expert in their use to facilitate their replacement of human employees. And in acquiring this expertise, we are again becoming more homogenous in our skill sets, and in the way we spend that part of our time.
With AI public video surveillance, our motions, gestures, and whereabouts can be tracked; with speech recognition, our telephone and video conversations can be transcribed. The fact that some of us but not others spew information on social media will rapidly become largely irrelevant. As better and better models are built relating any form of personal expression (including purchases, travel, and communication partners) to expected behaviour (including purchases, votes, demonstrations, and donations), less and less information about any one person will be needed to predict their likely behaviour (Jacobs et al. 1991; McLachlan and Krishnan 2008; Hinton et al. 2006; Le Roux and Bengio 2008).
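[An illustrative aside, not part of the chapter text: a minimal sketch of this point under assumed conditions. The data below are synthetic and the feature counts are arbitrary choices; the only claim illustrated is that once a population-level model has identified which traits carry the signal, a handful of measurements per person can suffice for useful prediction.]

```python
# Illustrative sketch: population-level models need less and less per-person
# information. Synthetic "population" of 5000 people with 40 observed traits,
# only 5 of which actually drive the behaviour being predicted.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# shuffle=False keeps the informative traits in the first columns, so slicing
# the first k columns mimics knowing only the k most relevant traits per person.
X, y = make_classification(n_samples=5000, n_features=40, n_informative=5,
                           shuffle=False, random_state=0)

for n_traits in (40, 10, 3):
    model = LogisticRegression(max_iter=2000)
    scores = cross_val_score(model, X[:, :n_traits], y, cv=5)
    print(f"{n_traits:2d} traits known per person: "
          f"mean accuracy {scores.mean():.3f}")
```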
Although I’ve been discussing the likely homogenising impact of increased AI and increased collective-level agency, collective agency is not necessarily egalitarian or even democratic. Again we can see this in nature and our models of the behaviours of animals very similar to us. In non-human primates, troops are described as either ‘egalitarian’, where any troop member can protest treatment by any other, and conflict is frequent but not violent; or as ‘despotic’, where interaction is limited by the dominance hierarchy, aggression is unilateral from dominant to subordinate, and fights, while few, are bloody (Thierry 2007). Which structure a species uses is partially determined by historic accident (phylogeny, Shultz et al. 2011), but also significantly by the species’ ecology. If a species’ preferred food source is defensible (e.g. fruit rather than insects) then a species will be more hierarchical, as it will under the pressure for safer spatial positions produced by the presence of predators (Sterck et al. 1997). The choice between social orders is not made by the individual monkeys, but by the dynamics of their ecological context.
Similarly, we cannot say exactly what power dynamics we expect to see as a consequence of increasing agency at collective, social levels. However, a worrying prediction might be drawn from Rawls (1980), whose theory mandates that a ‘veil of ignorance’ is necessary to ensure ethical governance. Those in power should be under the impression that any law they make might apply to any citizen, including themselves. Can such ignorance be maintained in an age of prosthetic intelligence? If not, if those in power can better know the likely social position of themselves and their children or even the likely outcome of elections (Wang et al. 2015), how will this affect our institutions? As uncertainty is reduced, can we ensure that those in power will optimise for the global good, or will they be more motivated—and able—to maintain control?
The answers to these questions are not deterministic. The models presented in Sect. 15.4 make ranges of predictions based on interactions between variables, all of which can change. Our future will be influenced by the institutions and regulations we construct now, because these determine how easy it is to transition from one context into another, just as available variation partially determines evolution by determining what natural selection can select between (see footnote 9). Although many futures may be theoretically achievable, in practice the institutions we put in place now determine which futures are more likely, and how soon these might be attained.
Humans and human society have so far proved exceptionally resilient, presumably because of our individual, collective and prosthetic intelligence. But what we know about social behaviour indicates significant policy priorities. If we want to maintain flexibility, we should maintain variation in our populations. If we want to maintain variation and independence in individual citizens’ behaviour, then we should protect their privacy and even anonymity. Previously, most people were anonymous due to obscurity. In its most basic form as absolute inaccessibility of information, obscurity may never occur again (Hartzog and Stutzman 2013). But previously, people defended their homes with their own swords, walls and dogs. Governments and other organisations and individuals are fully capable of invading our homes and taking our property, but this is a relatively rare occurrence because of the rule of law. Legal mandates of anonymity on stored data won’t make it impossible to build the general models that can be used to predict the behaviour of individuals. But if we make this sort of behaviour illegal with sufficiently strong sanctions, then we can reduce the extent to which organisations violate that law, or at least limit their proclivity for publicly admitting (e.g. by acting on the information) that they have done so. If people have less reason to fear exposure of their actions, this should reduce the inhibitory impact on individuals’ behaviour of our improved intelligence.
Already both American and European courts are showing signs of recognising that current legal norms have been built around assumptions of obscurity, and that these may need to be protected (Selinger and Hartzog 2014). Court decisions may not be a substitute though for both legislation and the technology to make these choices realistically available. Legislating will not be easy. In Europe there has been concern that the de facto mechanism of access to the public record has been removed as search engines have been forced not to associate newspaper articles with individuals’ names when those individuals have asked to be disassociated from incidents which are entirely in the past (Powles 2014). As we do come to rely on our prosthetic intelligence and to consider those of our memories externalised to the Internet to be our own, such cases of who owns access to which information will become increasingly complex (Gürses 2010).
The evolution of language has allowed us all to know the concept of responsibility. Now we are moral agents—not only actors, but authors responsible for our creations. As philosophers and scientists we also have professional obligations with respect to considering and communicating the impacts of technology on our culture (Wittkower et al. 2013). AI can help us understand the rapid changes and ecological dominance our species is experiencing. Yet that same understanding could well mean that the rate of change will continue to accelerate. We need to be able to rapidly create, negotiate and communicate coherent models of our dynamic societies and their priorities, to help these societies establish a sustainable future. I have argued that the nature of our agency may fundamentally change as we gain new insights through our prosthetic intelligence, resulting in new equilibria between collective versus individual agency. I’ve also described scientific models showing how these equilibria are established, and the importance of individual variation to a robust, resilient, mutable society. I therefore recommend that we encourage both legislatures and individual citizens to take steps to maintain privacy and defend both group and individual eccentricity. Further, I recommend we all take both personal and academic interest in our governance, so that we can help ensure the desirability of the collectives we contribute to.

from Artificial Intelligence and Pro-Social Behaviour
Joanna J. Bryson
in Collective Agency and Cooperation in Natural and Artificial Systems, pp. 281-306 (Springer, 2015). Again, the full book chapter is here: https://link.springer.com/chapter/10.1007/978-3-319-15515-9_15 ; an open access version is here: http://opus.bath.ac.uk/41580/


Comments

Bala said…
I am learning a lot from your writings. I wonder why I was not able to see these points before; like many, I would have read flashy papers with a great mathematical outfit but that are conceptually poor.
Anonymous said…
Fantastic article! Your insights are not only well-researched but also presented clearly and engagingly. The future importance of artificial general intelligence is immense, with the potential to revolutionize all aspects of life. Its ability to learn and adapt could solve complex problems, but ethical considerations are crucial for responsible integration.