Prof Margaret Boden; Credit: Jay Williams (click through for source)
It's normal though tragic for the elderly to die. What's more tragic is the number of people posting that they wish they'd met Margaret Boden. What it took to meet her – frequently! – was participating in UK AI meetings. She was deeply committed to, for example,
AISB, and frequently honoured by it. Sadly – despite the UK long leading globally in AI – British research councils stopped supporting such meetings because the meetings were merely "national." We all in every locale need to support our local networks, universities, and students as well as putting some time into keeping up with more global efforts. Maggie was fantastic at this – I'm pretty sure both of the most recent two times I talked to her were at the Houses of Parliament, where Steve Torrance often supported her participation, given her mobility issues. She came not only when she spoke, but also to listen in the audience. Keeping engaged with those trying to improve the world through the British government and media was Boden's project through her long retirement.
More tragic even than these missed opportunities is the number of people being killed, maimed, and starved in
Gaza, Ukraine, Sudan, the Sahel, Ethiopia, and Myanmar, or even just losing their health and the days of their lives in illegal detention centres in countries like the United States. As I struggle with the tragedies of losing
people born in 1936, I wonder how to effect a future where we can be more efficacious in defending the lives of people who should be this planet's future.
Thanks to Margaret Boden for all the work she did over the years, and to everyone who loved and promoted and supported her. May we all do the same for each other.
The above is what I wrote on facebook about the death of Margaret Boden. I also posted it here under the title
Margaret Boden and Gaza
Was that overdramatic? I honestly do feel bad worrying about two elderly people (
the other was my father), both born in 1936, who died quite naturally in July, while hundreds of thousands of others are being killed deliberately and horribly at all stages of life.
A few days after I posted the above here, Nature asked me to write an obituary for Margaret Boden. I probably should have turned them down, and I did initially, suggesting Ron Chrisley instead. I couldn't really see why they had suggested me other than sexism, but no doubt some AI system had noticed my blogposts and coauthorship(s) with her. Really the only thing we ever wrote together was the UK's 2011 Principles of Robotics policy "soft law", in a large group of people, but we are attributed as coauthors also of the 2017 archival journal version of the Principles, which is what I linked to there.
I thought that was it. Last I knew, Ron lived in California, so I thought they would contact him and I'd wake up to the news that he was writing it. As it happened, I instead woke up at 3am with an idea, and wound up writing the 900 words they'd asked me for in about 3 hours, including time spent going through some of her publications and interviews, and the very useful online CV Boden has left us, no doubt not least for this moment.
I had several themes I tried to squeeze into those 900 words. I get to them more fully below, but briefly: first, of course, her work and life. Second, her broader understanding of AI and Cognitive Science, and how generative AI was just a fraction of that. That second theme is still visible in Nature. The third – which was largely expunged – was something about the difference between the more British/European polymath, scientific understanding of our discipline, and the present, more US engineering-led take that's led to the context presently dominating global headlines.
The first paragraph of my original draft was:
The world lost one of the few remaining members of the founding generation of Artificial Intelligence (AI) research last week. Margaret Ann Boden (26 November 1936 – 18 July 2025) was the sort of polymath that Europe and particularly Britain has long believed the AI discipline mandates. She was particularly famous for her ground-breaking work on computational models of creativity. The abstract of her seminal 1998 paper “Creativity and artificial intelligence” in the field’s then-leading journal, Artificial Intelligence, is so short and clear it can be quoted here in its entirety:
Creativity is a fundamental feature of human intelligence, and a challenge for AI. AI techniques can be used to create new ideas in three ways: by producing novel combinations of familiar ideas; by exploring the potential of conceptual spaces; and by making transformations that enable the generation of previously impossible ideas. AI will have less difficulty in modelling the generation of new ideas than in automating their evaluation.
That final sentence – in classic British understatement – is now terrifyingly manifest in the swirling AI-generated “misinformation” plaguing everything from students’ essays to legal opinions to the latest, DOGE-built US government software. We should be very afraid of anyone who, unlike Boden, doesn’t understand how creativity fits into an overall psyche.
Somehow Nature didn't let that fly. The editor changed "British" to "scientific" in the context of understatement, which is interesting.
Of course, I didn't expect that DOGE sentence would get through exactly as it was. I was however used to great but somewhat activist editors at Wired and Science, and expected to get advice back, maybe a new, heavily edited version, and to put some more hours into rewriting it again within the next 24-48 hours (I'd assumed we were writing for their Thursday edition that week.) Instead I heard nothing for ages, got some questions, and eventually received back an uneditable PDF with a lot of passive voice in it about a week later. Maybe I should have spent more time figuring out tools and then done a full in-place revision in the PDF like I would have for Wired. As it was, I painstakingly scraped and corrected a bunch of it by email. The final product actually isn't as bad as the intermediate draft made me fear.
But I don't really want to write about Nature's editing strategy; I'd rather write about Maggie herself. I wish I'd written something as beautiful as Liad Mudrik's Nature obituary for Dan Dennett. I knew Dennett, both as a person and through his work, better than I knew Boden, but I could never have written something like what Mudrik wrote about either of them.** I felt an obligation to the people who loved and admired Boden, and to her work – and I wanted to get more people to read her work. But reflecting on what I knew about her, I thought she would want my number-one concern to be doing something with this opportunity: to make people think twice about what they thought they knew about AI. Her job had been to produce her work, and she'd done a great enough job at this to get an obituary in Nature. My job was to address things like the other nonsense Nature wound up publishing at about the same time, such as a call for a new ethics because AI was "becoming agentic".
As I said on
LinkedIn:
I absolutely abhor ethics-sink language like "As AI agents become more autonomous." Who is reducing human agency and oversight? It isn't "just happening" nor is it attributable to AI itself. There are here and there really good points in this piece, which at least tries to make "developers" responsible agents at times. I'd like though to see a lot more mention of the companies employing the developers, and of whoever is profiting from the digital services they produce. There are also a lot of assumptions in the section on social outcomes about the "inevitable" paths of present human addictive behaviour, as if we have never regulated the way out of such problems before, at least for most members of our society.
These programmers seem to lack a model of how our societies create the structures that defend us. It isn't by giving everyone the first thing they want. No one said "hey, let's pay a lot of tax and work a lot of hours, people seem to find those things fun." We figured out what it took to defend ourselves, to give ourselves stable, rich, sustainable lives in the long enough term to do important human projects like build homes, businesses, and families.
We do NOT need a new ethics, at least, not to just "recognise" that some people have hacked some addictive technical systems together and are creating a host of new problems. We need to continue maintaining human centring, including human responsibility for harms human corporations produce.
Here's my thinking about what Margaret Boden represented that people needed to hear:
- An accessible, engaged, public intellectual who kept journeying to the Houses of Parliament whenever she might help set policy well into old age, despite challenged mobility.
- Someone who understood that creativity is just a small part of the complex that produces intelligence, and intelligence is just a part of the complex that produces a human, and humans owe each other something special.
- Recently-living proof that people have been working on AI since the 1950s, and that not all of the field's founders were American male engineers. And indeed, that AI, being a branch of (only partially-natural) philosophy, has had different US and European traditions, each with significant value. Ancillary to this, that the UK government is presently undervaluing and undermining what made its own AI great.
- A sharp, clear, polite yet forceful, great, female intellect.
Let's go through those in reverse order.
A sharp, clear, polite yet forceful, great, female intellect.
I really, really did not know what to do about the "female" part.
I never know what to do about being a "feminine" academic. I did not want anyone to think Boden was famous for being a woman. Women have
more trouble becoming famous. I didn't want people to think I liked her work or her because she was a woman. I'm as unlikely to cite women as the next person in our culture. I didn't want to mention gender, but at the same time I couldn't think about her without being aware of her gender and her female presence. And I've become very aware that the failure to mention minority or gender status actually disempowers those who are supposedly empowered by this levelling and fairness. Without challenging standard assumptions, they continue to apply.
So I focussed heavily on Boden's rocketing academic trajectory and achievements. That she had degrees and even a PhD in biology, acquired after she was a full professor. And I didn't mention her gender once (other than in pronouns) until the second from last sentence of my first draft:
The mention of her children and grandchildren in the short announcement of her death by The Argus was the first time I ever knew she must also have found time to be a mother. For the rest of us, she was a powerful, creative, active academic, with clear vision, passion, and a willingness for intellectual risk – indeed, a passion for the most difficult and individually consequential problems.
Neither of those sentences made it past Nature's first edit either. But in the final version, they did let me retain my own academic title, which is unusual for them.
Boden and I role-playing in 2010
Better shot of me...
Pictures from David Martin's "slideshow" of the UK's 2010 Robotics Retreat, where we innovated the first national-level AI soft law, the Principles of Robotics. I love that Boden was the CEO of the EPSRC, and I was the Defence Minister – I didn't remember this particular exercise. It was a three-day event, not counting travel, held in the New Forest for some reason. An earlier version of this post said "(note) only gender diversity", but really there were disciplinary, sectoral, even class differences. But not race/ethnicity.
Demographics and branches of 20C AI
A lot of people assume the US has always led at AI. By most semantic definitions of the term, Boden had been in the field, in fact
joined the field the same year (1956) as
the Dartmouth Conference that established the term 'AI'. How have the voices of Boden and her mentor Margaret Masterman been neglected in present debates, despite those debates hinging so heavily on creativity and semantics – the main components of LLMs?
Shortly after I finished my PhD in building real-time, human-like AI, Marvin, Push, and I were all at a meeting Aaron Sloman held in Birmingham (at AISB) called "
How to Design a Functioning Mind" – or originally, just Design A Mind, and everyone called the meeting DAM. It looks like Ben Goertzel and Geoff Hinton may have been there too – I certainly remember David Lodge and Carl Frankin. I'd be surprised if Boden never turned up; it was kind of a big deal for Minsky to hang out in the UK, and AISB encourages cross-fertilisation between symposia (the US AAAI's Spring & Fall symposia series do the opposite.)
Years later, I read the note on that paper listing participants and noticed that no women were invited except two businesswomen. Years after that, I learnt about
the connection between the island, St. Thomas, and Jeffrey Epstein. I just now confirmed – well, here is the last line of the acknowledgements: "This meeting was made possible by the generous support of Jeffrey Epstein." (Yes, I still need to fully read
Gebru & Torres TESCREAL paper. I plan to before semester starts.)
Richard Stallman got in a lot of trouble for saying that Marvin Minsky probably didn't know the age of statutory rape (which indeed varies by US state) when he committed it,
a situation Stallman then worsened by trying to explain his reasoning by email. Patrick Winston, a much earlier Minsky student who was head of the MIT AI Lab when I was there in the 1990s (and once told his research group while I was visiting that he hated Texas and never wanted to visit it again), died in his bed at age 70 during Epstein's final period in jail.
I learnt a lot from Patrick Winston, and I liked him. I learnt from and admired (despite having issues with) Stallman and Minsky too.
After
Rod Brooks had dumped me as his PhD student, I went and talked with a bunch of the greatest AI and ML minds at MIT in the process of looking for a new supervisor, which was super interesting. I eventually wound up getting supervised by Gill Pratt next (I finished under Lynn Andrea Stein, since she had funding.) But the two hours I spent talking to Marvin Minsky in his office one evening were amazing. I wish we had smartphones and I'd recorded them. At the end of the conversation, he said "I think you're an excellent PhD student and we should do something for you. But I'm a terrible supervisor, and I have no money." So he sent me to talk to a bunch of his associate professors, and I nearly wound up working with Pattie Maes.
But getting back to Britain, one of the things that struck me about
Margaret Boden's discussion with Jim Al-Khalili for the BBC was all the reading she had done as an under-challenged schoolgirl before going to Cambridge for her undergraduate degree in medicine. It was something about the way she said "and Russell, of course." My partner, also a philosopher, was reading Russell's biography when we were first dating and told me about it, but I hadn't realised what a complete whacko, or genius, Russell was until Yannis Theocharis gave my partner
Logicomix: An Epic Search for Truth, which I strongly recommend to everyone who cares about AI and doesn't mind the graphic novel format (I love it.) Like
Oppenheimer, it gives you a look into the lives of male academics that even women academics don't see very often. For me, those and
Chip War really spelt out a lot of narratives everyone around me seemed to know more about than I did.
To again quote myself:
I asked a question about what had changed about being an intellectual in the last 100 years: why were there no longer these magazines; how could there be logic now beyond a super genius like Ramsey.
Blackburn answered that the change was the outcome of the Cambridge school's obsession with logic. While one part of philosophy had gone down the fork into what mattered to people – Sartre and Foucault – the other in its obsession over the foundations of knowledge had generated what is creating the era and challenges we live in now – Computer Science.
This points out that this isn't just a US vs Europe characterisation of how AI should work. Cambridge was at odds with Edinburgh, Sussex, and the French. Well, Margaret Boden was a big part of what made Sussex Sussex, and honestly, Aaron Sloman helped make Birmingham a kind of hybrid between Cambridge and Sussex when he migrated from Sussex to Birmingham. But what the world needs now is more empowerment of people who think like Margaret Boden, and her early mentor
Margaret Masterman (whose Wikipedia page, come to think of it, sounds a
lot like Margaret Boden's writing.) Which indicates even Cambridge was at odds with itself.
My
Nature obituary of Boden – and before it, Boden's BBC interview – make the point that Masterman's and Boden's vision of semantics was entirely against the MIT / Chomsky crowd who dominated academic discourse from the 1950s to the 1980s. Now those two women's vision has been largely validated by large-corpus linguistics, which, as part of the generative machine-learning revolution, came to dominate first Web search and now chatbots (yes, this means LLMs, this means ChatGPT.) No amount of academic influence beats AI that actually works. Masterman's semantics derived in part from work by and with Cambridge botanist
R. H. Richens, who before entering computational linguistics had revolutionised botanical taxonomies, including by collapsing what were previously considered to be separate species. Many of his taxonomic advances were reversed immediately after his death when he was no longer able to defend them, but ultimately proved to have been correct once DNA evidence could be used.
Edinburgh's Department of AI was founded by Cambridge neuropsychologist
Richard Gregory, Oxford biologist (and, long-secretly, war-time cryptographer)
Donald Michie, and Cambridge theoretical chemist
Hugh Christopher Longuet-Higgins. I met all three, but only really spoke at length (and more than once) to Gregory. But the point is, really transdisciplinary
science; very far away from the engineers and logicians of MIT and Cambridge mathematics.
One of the things I tried to squeeze into the 900 words (and perhaps shouldn't have) was that the British government had been for years undermining its global leadership in the European, polymath science-based style of AI that we really need more input from now. Boden's COGS (praise of which did make the final
Nature draft) was shut down years ago, though it looks like now
it's been rebooted. The longest-running AI conference globally, that of
the Society for the Study of Artificial Intelligence and Simulation of Behaviour (AISB, per my Facebook obituary; it did not make the final
Nature cut, and never really fit in the obituary), has been disadvantaged for decades by the UK's own research councils' refusal to recognise that a domestic conference can be world-leading, and the consequent deprecation of travel funding to it. I personally attended AISB every year for years, even when I was attending MIT and living in Massachusetts.
I ran their annual meeting at Bath in 2017. Our theme that year, incidentally, was 'Society with AI.' A quarter of
their present committee are my former PhD students.
Creativity is just a small part of the complex that produces intelligence; intelligence is just a part of the complex that produces a human; humans owe each other something special
I didn't say this as eloquently as I'd like, but this did make it into the
Nature obituary a bit, and also, we did mention and now I believe have linked to Boden's excellent
Aeon article ‘
Robot says: Whatever’.
An accessible, engaged, public intellectual who kept journeying to the Houses of Parliament whenever she might help set policy well into old age, despite mobility challenges
There were so many posts on bluesky and facebook by people sad never to have met her. I met her so many times! Starting from when I was a student. This is why it's a crime that the British don't support their own excellence; more students and other academics should have been able to travel to the meetings she keynoted. But I hope the Nature obituary did communicate effectively her ongoing engagement, not only in academia but also in public life (I still can't get over her taking a biology PhD at Cambridge after having become full professor and head of department. Maybe I'll do another PhD in retirement...)
When I first started working on this third obituary, I said I would expand it 'to include at a minimum a recounting of the time she, Sherry Turkle, and I were the "conservatives" claiming that humans were more than just our ideas, so AI could not be us.' That event was in 2007, in the context of the meeting at Oxford convened by Yorick Wilks that turned into
the book that includes the chapter
'Robots Should Be Slaves', which was my first AI Ethics paper that anyone bothered to read (I wrote
Just an Artifact in 1996, and
A Proposal for the Humanoid Agent-Builders League (HAL) in 2000.)
There isn't much to say. We were in a posh, wood-panelled room (I think, or maybe that was dinner) somewhere in Oxford, there were about 30 of us reading the papers that came to be our book chapters, and at some point Maggie, Sherry, and I were standing up at one end of a table shouting at (and being shouted at by) everyone else (including Luciano Floridi – the author list of the book is a fairly but not completely accurate record of who was there.) I was astonished by the scene, and at myself for siding with these two older women – and for being the
only one siding with them. Yet every day I am more sure we were right. And, as the
Nature obituary said, this concept of human centring of AI is now in the British Principles of Robotics (2011), the OECD Principles of AI (2019),
the UNESCO recommendation on the ethics of artificial intelligence (2021), and, where I live in the EU, is actually law, thanks to our AI Act (2024).
Yorick Wilks, by the way, also came out of Masterman's laboratory, and he wrote her biography more than once. He died in 2023.
Leftovers, and a Postscript
I want to thank Ezequiel Di Paolo for surfacing the excellent picture of her I use here (which he did on his own facebook post.)
I found this quote last week: "One can be, indeed one must strive to become, tough and philosophical concerning destruction and death…But it is not permissible that the authors of devastation should also be innocent. It is the innocence which constitutes the crime." – James Baldwin, "The Dungeon Shook," The Fire Next Time. Among other things, it reminds me of people trying to tell us that AI is and must be responsible for itself. Whether they believe that or not, they are part-authors of any devastation anyone wreaks with AI.
Today [12 or 13 August] I am working through a paper I'm writing on moral agency in governments [finished 15 August] and I suddenly reflected on how much it mattered that the US narrative is dominated not even by particularly good engineers, but by (admittedly decent) programmers who were also entrepreneurs. I came to AI because it was interesting, but more because intelligence was interesting and I just happened to be exceptionally good at programming. So I leveraged my strengths in programming to get into the best university programs I could, despite having mediocre undergraduate grades (though from a great school.)
I often get asked how I knew to go into AI so early. Honestly, on some "imposter syndrome" level I think it is luck, that there are scientists distributed over all sorts of interests. But on another level, what else could possibly be as interesting or important as intelligence – as understanding ourselves?
Boden and I both thought that. Why then did we wind up working so much with governments and governance? Why do China and the EU have AI regulation and the US not? How is it not evident that passing agency into mechanism is of enormous legal and moral concern? I'm not sure, and I'm up against a deadline, but I wanted to remember I have these questions, so here they are. But always in the back of my mind is the fact that the US was founded by fundamentalist whackos, who have always seen the world as black and white (sadly in more ways than one.) Maybe it just seemed self evident to enough people with enough power that only (white male) humans were really conscious, intelligent, or other codewords for "interesting," and thus AI was only ever really the domain for engineers. Now even from strictly a domestic US perspective, disproportionate power has shifted to those engineers because of inadequate attention to wealth.
Having now finished the third obituary (17 August), on reflection I think it is more about money and power than fundamentalism or even sexism.
- A fourth: a sweet short one, by one of her undergraduate students, Richard Dallaway.
- Curious, I looked into the history of Dennett and Boden writing about each other. They seemed to admire each other. The first piece I found of Dennett citing Boden was this review of her book "Artificial Intelligence and Natural Man" from 1977. Boden first mentioned Dennett about a decade later, but did so with approbation.
- ** There's also a New York Times obituary by Michael Rosenwald that does mention details of her life that I didn't know, for example that she wasn't competent to have played with the chatbots that have emerged since 2022. I haven't worked in the UK since 2020 so hadn't noticed she was no longer attending meetings.