I've recently accepted the position
of Full Professor of Ethics and Technology at The Hertie School of Governance, in Berlin.
Hertie is a relatively young graduate school with a research
focus on public policy and international relations. I will
be taking up this position full time at the beginning of the
Spring 2020 semester. I expect within the next few weeks to finalise
an ongoing relationship with Bath; I expect to remain affiliated
in some way and involved with the Accountable, Responsible and
Transparent AI Doctoral Training Centre. However, from 1
February 2020, I’m delighted to say that my partner Will Lowe
and I will be working at Hertie.
If you think of me as a computer
scientist (or a biologist) then this new position may sound like
a departure. However, I have always been at least as interested
in natural intelligence as artificial intelligence. This new
position realises two very long-term goals I've had:
- to be a
natural and social scientist who uses AI as a tool,
- to work in the same place with my partner, in a country that is foreign to us both (we’re neophiles).
Having said this, my new position in
some ways violates a more recent goal I've acquired:
- to serve my country in its time of crisis.
In fact, I have two countries, and they are both
in crisis right now. Given the present situation in the world,
I had sort of expected that my next job might be a position of
leadership or even in government, since many people whose
careers I admire have moved back and forth in and out of
government. Indeed, in the last year or so I did apply
(in response to invitation) to three positions of scientific
leadership. One was in London (as the chief scientist for the
Government’s Department for Digital, Culture, Media and Sport),
one was in Edinburgh (the Baillie Gifford Chair in the Ethics of
Data and Artificial Intelligence in the University’s new
Futures Institute), and one was in San Francisco (the Scientific
Director for the Partnership on AI). So it’s not so much that I
wanted to run away from Brexit – I would absolutely have stayed
in the face of crisis if I thought I was in a position where I
might have had a fighting chance to address that crisis.
However, ever since I was a very small
child, my first love has been science. And over the last
four years, as I have been increasingly spending my time trying
to help governments with policy questions, a lot of my
scientific curiosity has been piqued by questions of
politics, economics, and governance. The position I have landed
is not one the institution invited me to apply for; rather,
I was sent the notice of the job by not just one friend but
three different friends, who emailed that they’d spotted the
“perfect” position for me in Berlin – and one even had an idea
for a colocated job for my partner. Our applications have been
successful, and everyone is thrilled. I’ve already had new
colleagues writing to me with ideas for research and teaching
collaborations.
Changing disciplines
Many people who read this blog know
me as a researcher in AI ethics. My focus clearly will not be
purely AI, but then it never has been. Two of my four degrees
are in social sciences. I have always viewed AI primarily as a
means by which I can understand natural intelligence.
Intelligence is the capacity to do the right thing in response
to a context; now I will be studying that capacity primarily in
the context of governance. Ethics is the means by which a
society maintains itself; government is one of those means,
though obviously not the only one. Technology is a means by
which we extend ourselves.
Government is a key component of
maintaining cooperation at the scale we've seen since before the
advent of history. Government is the (re)distribution of
resources to give a society sufficient stability and security
that it persists. Since the twentieth century, the peoples of
the planet through the United Nations have formally acknowledged
that one key to this is recognising the rights of individual
humans, including crucially the right to freedom of opinion and
thought – a right that surveillance challenges, and intelligent
technology facilitates surveillance. More generally,
digital technology, and perhaps any distance-reducing
technology, seems to challenge the capacities of governments to
govern, not least by requiring redistribution that goes beyond
the geographic borders defining the regions over which
governments are able to govern.
Politics and institutions haven’t
suddenly become my sole interest – in fact, this weekend I’ve
been working on a journal article about the nature of biological
evolution as explored through simulated gene regulatory
networks. But they are an incredibly important application area
of the sorts of scientific interests I do have, and it’s
fantastically exciting to be going somewhere that has been
making tremendous hires of people with expertise in these areas.
Also, given that at this moment I’m someone whom governments,
corporations, NGOs, and reporters do ask about policy, it’s good
that I know as much as I can about these areas, so I can be as
helpful as possible. So in a way, I expect I will still be
serving both my countries in the best way I can, or at least in
the best way I’ve been afforded.
Keeping my second citizenship
I have lived 18 of the last 28 years
in the UK. I chose to apply for a passport there (rather than
maintain my previous, wonderfully-British immigration status
"indefinite leave to remain") in 2007 because I wanted to be a
citizen – of the EU. I had been consulting for the European
Commission since soon after my arrival at Bath, and I became
impressed by the EU as a means of coordinating national
governments. I was fascinated by its alien organisation – was it
even a democracy? (Yes, it turns out, but the UK should have
been making that way more clear. Sadly, I wasn’t the only one
there confused.)
I sincerely admire the UK, not only
for its culture, but for the leadership it's shown in AI. In
fact, I didn’t have much interest in or understanding of
leadership, governance, and policy until sometime after
colleagues involved me in my first policy meeting, the
EPSRC/AHRC Robot Ethics retreat in 2010. That meeting (and
my presence at it) resulted in the Principles of Robotics, the first
national-level AI ethics soft law, and one that clearly underlies
the 2019 OECD (and now G20) Principles on AI, which dozens of
nations signed this year – including the USA and China. AI
and its governance are two of the few positive assets that the
May government dedicated any resources to, and even Johnson has
just dedicated his UN speech to this topic. The UK is, as far as
I can tell, leading the world in staffing up its regulatory teams.
I sincerely admire and am in awe of the teams the British are
putting together. I wish I could have been more directly
involved. I hope the UK is able to continue deploying its genius
for leadership in AI governance, and I hope I will still be able
to help. Of course, I very much still hope they will do
this from within the European Union.
Working in policy
Some of the greatest problems of
governance at the moment are transnational. We have not
adequately dealt with the problems of natural transnational
monopolies that our technology has been affording not only since
the digital revolution, but since the advent of oil magnates,
analogue telecommunications, aerospace, multinational
pharmaceuticals, "high" finance, and so forth. These are entities
where the cost of transport is so low and the benefits of scale
or expertise are so high that it's difficult for local versions
to compete without substantial subsidy.
Geography will always matter. Your
quality of life is highly determined by the wellbeing of your
neighbours, including your mutual access to healthcare, water,
clean air, and education. Security concerns and many economic
opportunities depend on resources, terrain, climate, and
neighbours, so the tradeoffs in substantial questions of
government will also vary with location.
I am presently convinced that it
makes sense for power to be focussed foremost at national and
secondarily at regional levels, both because of the problems of
coordination (including managing corruption) at scale, and also
because diversity is essential to robustness, evolution, and
other types of change. We need multiple, localised, and
specialist governments to explore diverse paths forward. Yet
governance works best at regulating entities within a set of
borders, and some powerful and important entities transcend
national borders.
I believe that the EU, while not
perfect, is the best template we have so far for how to
coordinate action between governments to address problems like
managing transnational human-made forces, and transnational
resources like biodiversity and the climate. In the EU, member
countries have the laws, the courts, and the military. The
transnational EU parliament, council, etc. coordinate policy and
write treaties concerning the nature of some of the laws member
countries will write, where it makes sense for the bloc to be
"harmonised," that is, to act as a unit.
I hope that there will be other
global-regional hubs for coordinating policy. Europe may be
leading by demonstration now, but ultimately the world is a big
place, with diverse problems and opportunities, and huge numbers
of increasingly empowered people. Perhaps America and China
already sort of are hubs like the EU, though with less devolved
regional power and therefore less diversity of thought at the
executive level. But ideally everyone would live in a country
that works with others in such a way that they can wield the
kind of power it took to enact the GDPR, and to address the
democratic and ecological challenges we are all facing.
Three years ago, towards the end of my
2015-2016 sabbatical, I reported the projects I was pursuing to
my then head of department, and he said, "only that thing about bias is actually
computer science.” He was worried that I needed to focus on the
metrics the UK government uses to determine who to give research
funding to.
While I'm very glad the semantics
paper came out, in my own assessment, the work I was doing on
political polarisation was far more important,
and had just as much business being done in a computer science
department. Departments (like countries) are important
infrastructure, but they should enable academics and the pursuit
of academic research, not hamper it.
Hertie is by design highly
interdisciplinary. I’m not really even in a department, though
I will be a founding member of their Centre for Digital
Governance. I anticipate collaborating with researchers from all
of Hertie’s Centres of Competence.
Pursuing the questions I now have will be much easier with the
sort of expertise Hertie cultivates close at hand.
Systems Engineering of AI
Having said that, I’m not ready to
leave the discipline of AI itself entirely behind either. I have
heard leading experts in AI saying that the next big frontier
for AI is Systems AI, the systems engineering of intelligent
artefacts. This is exactly the area I identified to specialise
in when I was picking a thesis topic after coming to the MIT AI
Lab back in the 1990s. The anthropomorphism that comes along
with “intelligence” seems to keep encouraging people to look for
magic single algorithms (the spark of life) that will somehow
“teach themselves”, as if anything in Nature does that. Well, a
lot of complex behaviour has been programmed into life through
billions of years of evolution, but individual organisms learn
very little on their own. In general, the more cognitive a
species is, the more social it is.
Anyway, for my PhD, rather than
looking for incremental improvement in some algorithm for
learning or planning, I wanted to look at how to make great AI
by bringing together the innovations that were already out there
into systems that worked. That is, I wanted to help people
program AI, particularly real-time, human-like AI. I was trying
to make it easier to program robots, but I mostly used it myself
for doing science, and in the end the majority of people who
took up my PhD output actually put it into creating computer
game characters (it took me years to notice, but Alex
Champandard finally drew the uncited connection between my work
and behaviour trees on his old AI-GameDev website).
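For anyone who knows behaviour trees only by name, the sketch below shows the general shape of the technique: prioritised, reactive action selection for a game character, re-evaluated on every tick. It is only an illustration of that general idea, not the architecture from my thesis, and every name in it (Selector, Sequence, flee, attack, and so on) is hypothetical.

```python
# Illustrative only: a minimal behaviour tree in Python, with hypothetical names.
from enum import Enum


class Status(Enum):
    SUCCESS = 1
    FAILURE = 2
    RUNNING = 3


class Selector:
    """Try children in priority order; return the first non-failing result."""

    def __init__(self, *children):
        self.children = children

    def tick(self, blackboard):
        for child in self.children:
            status = child.tick(blackboard)
            if status != Status.FAILURE:
                return status
        return Status.FAILURE


class Sequence:
    """Run children in order; stop and report as soon as one does not succeed."""

    def __init__(self, *children):
        self.children = children

    def tick(self, blackboard):
        for child in self.children:
            status = child.tick(blackboard)
            if status != Status.SUCCESS:
                return status
        return Status.SUCCESS


class Condition:
    """Leaf node that checks a predicate against the shared blackboard."""

    def __init__(self, predicate):
        self.predicate = predicate

    def tick(self, blackboard):
        return Status.SUCCESS if self.predicate(blackboard) else Status.FAILURE


class Action:
    """Leaf node that performs an effect and reports its status."""

    def __init__(self, effect):
        self.effect = effect

    def tick(self, blackboard):
        return self.effect(blackboard)


# A toy game character: flee when health is low, otherwise attack any visible
# enemy, otherwise wander. The whole tree is re-evaluated every tick, which is
# what keeps the behaviour reactive.
def flee(bb):
    bb["action"] = "flee"
    return Status.SUCCESS


def attack(bb):
    bb["action"] = "attack"
    return Status.SUCCESS


def wander(bb):
    bb["action"] = "wander"
    return Status.SUCCESS


character = Selector(
    Sequence(Condition(lambda bb: bb["health"] < 30), Action(flee)),
    Sequence(Condition(lambda bb: bb["enemy_visible"]), Action(attack)),
    Action(wander),
)

if __name__ == "__main__":
    blackboard = {"health": 80, "enemy_visible": True}
    character.tick(blackboard)
    print(blackboard["action"])  # prints "attack"
```

The appeal of this shape for real-time, human-like AI is that the priorities stay explicit and are cheap to re-check, so a character can interrupt a low-priority activity the moment something more important becomes relevant.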
Fortunately, three of my recent PhD
students worked on the Systems Engineering of AI, and two of
them (Rob Wortham & Andreas Theodorou) also worked on AI
ethics and AI transparency with me. They and the third,
Swen Gaudl, all have academic positions now, and I anticipate I
will continue being able to work with them not only on making it
easier to build AI, but on making it more de rigueur to build AI
that clarifies accountability.
Bath and ART-AI
I am leaving behind in Bath two
institutions I had a major role in bringing into being: the AI
group in the Department of Computer Science which I founded and
for a period represented / “led” (like anyone can lead
academics), and the ART-AI doctoral training centre, a strongly
interdisciplinary programme for creating and maintaining
transparency in and accountability for AI. Every PhD student at
ART-AI is required to have supervisors from at least two
different faculties, not merely different departments, and to take
graduate-level courses in a faculty not represented by their
previous degrees. Eamonn O’Neill, until recently my head of
department, now heads this effort. I wish him well, and
hope to continue to collaborate with the centre and that Hertie
will become one of the many partners of ART-AI.
After four years in New Jersey, we're heading back to the land of transport options.