AI and Future Generations – Statement for the UK Parliament APPG

On Thursday I spoke at the UK Parliament to a meeting of the All Party Parliamentary Group on Future Generations. Ed Felten spoke first, then me, then Nick Bostrom (we weren't warned of the order in advance). Then Shahar Avin spoke about our report on the Malicious Use of AI (he was co-first author of that report, with Miles Brundage). We didn't use slides, so I actually wrote up my talk, in Keynote, which meant I could modify it a bit on the fly during Ed's talk; I then went back and edited it again after I gave it. So this is a fairly close match to what I said, though I skipped a few lines when I was reading it out.

First I want to thank you for this opportunity to speak. It's a great honour to be asked to influence this body at this critical time. I've been an employee of the University of Bath since 2002 and a British citizen since 2007. But since at least the sixteenth century, the United Kingdom has had a disproportionate influence, first regionally and then globally, on the humanities, the sciences, innovations in politics and governance, industry, and of course security. This parliament's actions will absolutely affect future generations not only of the UK but of the world, and I'm very grateful for an opportunity to play however small a part in that.

Like Ed, I also want to talk about AI with a very long-term view. I disagree with a little of what he said: I would say there is no more general intelligence than Bayes' law, and I think AI is already > NI in every regard. But I come to this with two degrees each in Psychology (non-clinical) and AI, so my perspective is informed by that and by my experience developing AI myself and with my group at Bath. I do want to thank Ed Felten for making it clear that AI causing THE singularity is very unlikely. I also want to emphasise that the key role of legislation and other governance is maintaining human accountability, which will motivate transparency and security, helping us NOT lose control. But I won't speak further about that now.

Often in both machine learning and the sciences we need to discriminate between two sorts of processes: a step change versus a gradient change. Obviously gradient changes make local prediction easier, though sometimes the interaction of multiple gradients can lead to something that is effectively a step change, a singularity. But a large number of step changes can also become effectively a gradient, and I think that’s the best way to consider AI.

Because it challenges our identity, AI seems like the sort of thing that must be a step change, but I think this has blinded us to the fact that we've had various forms of AI for some time. Examining that history can help us understand the new problems of governance and security that the advent of digital telecommunication is producing.

First I want to make a few things clear. Intelligence relies not on math, but on computation. Computation is the transformation of information, and intelligence is the transformation of perception to action, the capacity to recognise an opportunity or challenge, and address it.
Computation is a physical process, taking time, energy, & space. Mistaking it for math may be why we naively expect AI to be pure, ethical, omniscient, and eternal.  It isn’t.  The extent to which math is these things is because math isn’t real; math is an abstraction.
Finding the right thing to do at the right time requires search. Cost of search = the number of options raised to the number of acts. Examples: 
  • Any 2 of 100 possible actions = 100^2 = 10,000 possible plans.
  • # of 35-move games of chess > # of atoms in the universe.
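To make that arithmetic concrete, here is a minimal Python sketch of the blow-up; the branching factor of roughly 35 legal moves per chess position and the 10^80 figure for atoms in the observable universe are my assumed rough estimates, not numbers from the talk.

    # Rough illustration of why exhaustive planning blows up: the number of
    # candidate plans grows as options ** acts.
    options = 100            # possible actions at each step
    acts = 2                 # plan length
    print(options ** acts)   # 10,000 possible two-step plans

    # Chess, assuming ~35 legal moves per position and a 35-move game,
    # i.e. 70 half-moves (plies): rough, assumed figures.
    branching, plies = 35, 70
    games = branching ** plies
    atoms = 10 ** 80         # rough estimate for the observable universe
    print(f"about 10^{len(str(games)) - 1} games; more than 10^80 atoms: {games > atoms}")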
Omniscience is not a real threat. No one algorithm can solve all of AI, so AI investment is not a race to discover that algorithm. It’s an arms race. 
Concurrency can save real time, but not energy, and requires more space. Quantum saves on space (sometimes) but may (probably) cost WAY more energy.

Understanding that concurrency helps address the costs of computation is critical to understanding what humanity is, why we are dominating the ecosystem, and most importantly now how AI impacts both past and future generations.

The critical difference between cognitive species like primates and other, less cognitive species like bacteria is that we as individuals do so much computation ourselves, rather than relying on evolution to solve all our computational problems for us. This is a risky strategy: we have to invest a lot in every individual. But it allows us to take advantage of more transient and unpredictable opportunities.

The critical difference between us and not just other primates but our own immediate ancestors is that we are able to share our computation so efficiently. Language was the first AI, and the advent of writing, I would argue, is one of the best examples of a super-intelligence step change.

Even before writing, when Homo sapiens got to a continent, we started transforming its biodiversity, making ourselves and our livestock the primary large mammals. Since the industrial revolution there is substantially more biomass on the planet; we've done that! Transformed minerals into life! But there are very, very few large mammals left on the planet that we don't eat or play with. Biomass of humans alone > biomass of all wild terrestrial mammals.

Which leads me to point out that there are really only two problems humanity (or any other species) has to solve. These are sustainability and inequality, or put another way, security and power. Or put a third way: how big a pie can we make, and how do we slice up that pie? Because it's not a zero-sum game. As I mentioned, culture has allowed us to literally make more biomass. And more generally, many species use the security of sociality to construct public goods. But every individual needs enough pie to thrive.

Which leads me to my final and main points: how is AI transforming society now, and how will it in the near and distant future? The supposed step change that is digital AI is this: not only communication but now also computation is being done for us by our technology. But really, having eight billion healthy, well-educated humans reliably connected already radically alters our access to computational power.

What we are getting with all this computation are radically new ways to act and perceive. This is both empowering individuals (for better and for worse, and I thank Shahar Avin for talking about our malicious-actors report) and making them more exchangeable, which reduces salary differentiation and increases inequality. And let's be clear here: empirically, the best Gini coefficient is around .27 [IMF blog post]. Below that you can't give innovators the resources they need, but above it you undermine security and the economy. And we are doing that again, as we did in the early 20th century.
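For readers who haven't met the measure, a Gini coefficient summarises how unevenly income is distributed, from 0 (everyone earns the same) to 1 (one person earns everything). Here is a minimal Python sketch of the standard computation; the sample incomes are purely illustrative and not drawn from any source cited above.

    def gini(incomes):
        """Gini coefficient of a list of non-negative incomes (0 = perfect equality)."""
        xs = sorted(incomes)
        n = len(xs)
        total = sum(xs)
        # Standard rank-weighted formula over the sorted incomes.
        weighted = sum((i + 1) * x for i, x in enumerate(xs))
        return (2 * weighted) / (n * total) - (n + 1) / n

    # Purely illustrative income distributions (arbitrary units).
    print(round(gini([20, 25, 30, 40, 55, 80, 120]), 2))   # about 0.34
    print(round(gini([10, 12, 15, 20, 40, 90, 300]), 2))   # about 0.62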

As I mentioned, intelligence is the transformation of perception to action, and what AI is allowing us to do now is perceive things we’ve never seen before, particularly at distance. We can now easily see over national boundaries, which is fantastic for allowing us to deal with transnational challenges such as climate change or how to fairly distribute revenue from transnational corporations.

But many problems will always be to do with geographic location, and require the consensus-making power of local governance. What our neighbours spend on protection from fire and crime, on education, on health, on water and agriculture, affects us so deeply that we have to coordinate our mutual decisions on those investments through geographically specific government. One issue with having the world opened up to us is that we neglect these local problems, and allow the onset of corruption or underinvestment.

Another thing we are perceiving, and able to act on, in ways never possible before is how individual humans are likely to behave. There's no opting out of this: the better a model we build, the less data we need to make good predictions. We used to talk about data hygiene, but we can't even get people to wash their hands. And I want here to strenuously disagree with something Kate Crawford said at the Royal Society on Tuesday. Prediction of behaviour from data like photographs and social-media clicks is not pseudo-science. It is a demonstrated and improving capacity. And hounding academics who study this openly and publicly is one way to assure that we won't know the capacities of companies like Cambridge Analytica in good time.

I want to close by focussing on employment. Employment is a form of security; it binds a local community together.
  • If we have money, we want to pay people partly so we have more capability and power, but partly because it makes us feel good, it improves our security because people depend on us.
  • Similarly, if we have a skill, we want to get people to pay us, partly because that gives us more capability and power (money), but partly because it makes us feel good. It again improves our security because someone depends on us.
So I want to go even further out on a limb here and also disagree with something Barack Obama said this week. I think raising the minimum wage and ensuring that various employment conditions are met is better than basic income. These are proven means of getting more money to more people so they can hire each other.

Automation doesn't necessarily reduce employee headcounts. There are more human tellers now than before the introduction of ATMs (Autor 2015), and their jobs are more interesting (harder). Because you need fewer tellers per branch, ATMs bring down the cost of a branch, which has increased the number of branches. I learned recently on a visit to RBS that branches also cost less because you've lost the high-paid branch managers (less wage differentiation, but more inequality).

At RBS, using AI chatbots has actually increased the number of phone contacts from customers; the chatbots seem to make customers feel more friendly towards the institution, so they call more. But the bots handle all the easy questions, so some existing customer-support agents are no longer needed for those; RBS may repurpose them to support customers who can't use the bots.

More generally, this implies there is no digital divide; there's a digital spectrum, and AI will continue to filter people into and out of jobs in new ways. This requires government safety nets such as unemployment insurance and access to education. Not a step change, a gradient. But it's not a baby (as Ed said). AI is already producing existential threats, like the break-up of the EU and NATO.

Thank you.

I mentioned that Nick Bostrom was gifted the slot after mine.  Here are the two things I would have corrected about his talk if I'd come after him.
  1. Nick said that there were clear cases of exponential growth in AI, like DeepMind's AlphaGo, and indeed that this showed how AI could overcome the combinatorics I had described.
    1. Both Ed Felten and Miles Brundage have given talks with graphs showing that improvement on Go was in fact linear, and that it was entirely predictable within a few months when we would beat human-level Go play. That we were surprised by this goes back to my point about identity confusion: we ignored steady linear progress until that threshold was met.
    2. Both Chess and Go are games. It's not at all clear that the techniques of AlphaGo Zero would be able to search such large spaces if those spaces weren't designed to be fun for humans.
    3. I agree, though, that AI is getting better at every individual thing humans do, though often not by that much. This is because we've gotten very good at uploading the computation we've already done into AI via machine learning and various other techniques.
    4. But the most important point is one Ed made in his talk: we aren't going to build apes. We aren't going to collect all those things into one system and give it the same motivations that apes have.
  2. Nick said that AI safety has been taken seriously for about 5 years and that there were a few groups working on it now at, e.g., Oxford and Berkeley. This is false and drives me insane. AI safety is part of software safety and has been worked on for decades. We call this systems engineering, and sometimes safety-critical systems. This is what my PhD, which I worked on in the 1990s and defended in 2001, was about: systems engineering for real-time human-like AI. I literally had a paper in 2002 in an AAAI symposium on Safe Learning Agents, with, of all people, Marc Hauser as a coauthor.


Comments

Michael Lyons said…
Hello Joanna,

I found your remark about Crawford's statement interesting and wonder if you have elaborated on that elsewhere?

I am an academic who has been hounded, though we were not even predicting behaviour, but it may have appeared that way to the (very) uninformed.

There is an introduction in the following short essay at Medium, with links to two articles giving more detailed accounts.

https://medium.com/@michael.lyons_85617/indiscretions-of-a-contemporary-artist-88c9528a3ec1?source=friends_link&sk=71195593eceb813988d5daf1222b4f5b

With best wishes,

Michael Lyons