Prof Margaret Boden; Credit: Jay Williams
Margaret Boden and Gaza
A few days after I posted the above here, Nature asked me to write an obituary for Margaret Boden. I probably should have turned them down, and I did initially, suggesting Ron Chrisley instead. I couldn't really see why they had suggested me other than sexism, but no doubt some AI system had noticed my blogposts and coauthorship(s) with her. Really the only thing we ever wrote together was the UK's 2011 Principles of Robotics policy "soft law", in a large group of people, but we are also attributed as coauthors of the 2017 archival journal version of the Principles, which is what I linked to there.
I thought that was it. Last I knew, Ron lived in California, so I thought they would contact him and I'd wake up to the news that he was writing it. As it happened, I instead woke up at 3am with an idea, and wound up writing the 900 words they'd asked me for in about three hours, including time spent going through some of her publications and interviews, and the very useful online CV Boden has left us, no doubt not least for this moment.
I had several themes I tried to squeeze into those 900 words; I get to them more fully below. The first, of course, was her work and life. The second was her broader understanding of AI and Cognitive Science, and how generative AI is just a fraction of that. That second theme is still visible in Nature. The third – which was largely expunged – was something about the difference between the more British/European polymath, scientific understanding of our discipline and the present, more US engineering-led take that has produced the context presently dominating global headlines.
The first paragraph of my original draft was:
The world lost one of the few-remaining members of the founding generation of Artificial Intelligence (AI) research last week. Margaret Ann Boden (26 November 1936 – 18 July 2025) was the sort of polymath that Europe and particularly Britain has long believed the AI discipline mandates. She was particularly famous for her ground-breaking work on computational models of creativity. The abstract of her seminal 1998 paper “Creativity and artificial intelligence” in the field’s then-leading journal, Artificial Intelligence, is so short and clear it can be quoted here in its entirety:
Creativity is a fundamental feature of human intelligence, and a challenge for AI. AI techniques can be used to create new ideas in three ways: by producing novel combinations of familiar ideas; by exploring the potential of conceptual spaces; and by making transformations that enable the generation of previously impossible ideas. AI will have less difficulty in modelling the generation of new ideas than in automating their evaluation.
That final sentence – in classic British understatement – is now terrifyingly manifest in the swirling AI-generated “misinformation” plaguing everything from students’ essays to legal opinions to the latest, DOGE-built US government software. We should be very afraid of anyone who, unlike Boden, doesn’t understand how creativity fits into an overall psyche.
Somehow Nature didn't let that fly. The editor changed "British" to "scientific" in the context of understatement, which is interesting.
Of course, I didn't expect that DOGE sentence would get through exactly as it was. I was, however, used to great but somewhat activist editors at Wired and Science, and expected to get advice back, maybe a new, heavily edited version, and to put some more hours into rewriting it within the next 24-48 hours (I'd assumed we were writing for their Thursday edition that week). Instead I heard nothing for ages, got some questions, and eventually received back an uneditable PDF with a lot of passive voice in it about a week later. Maybe I should have spent more time figuring out tools and then done a full in-place revision in the PDF, as I would have for Wired. As it was, I painstakingly scraped and corrected a bunch of it by email. The final product actually isn't as bad as the intermediate draft made me fear.
But I don't really want to write about Nature's editing strategy; I'd rather write about Maggie herself. I wish I'd written something as beautiful as Liad Mudrik's Nature obituary for Dan Dennett. I knew Dennett, both as a person and as a body of work, better than I knew Boden, but I could never have written something like what Mudrik wrote about either of them.** I felt an obligation to the people who loved and admired Boden, and to her work – and I wanted to get more people to read that work. But reflecting on what I knew about her, I thought she would want my number-one concern to be doing something with this opportunity: to make people think twice about what they thought they knew about AI. Her job had been to produce her work, and she'd done it well enough to earn an obituary in Nature. My job was to address things like the other nonsense Nature wound up publishing at about the same time, such as a call for a new ethics because AI was "becoming agentic".
As I said on LinkedIn:

I absolutely abhor ethics-sink language like "As AI agents become more autonomous." Who is reducing human agency and oversight? It isn't "just happening", nor is it attributable to AI itself. There are here and there really good points in this piece, which at least tries to make "developers" responsible agents at times. I'd like, though, to see a lot more mention of the companies employing the developers, and of whoever is profiting from the digital services they produce. There are also a lot of assumptions in the section on social outcomes about the "inevitable" paths of present human addictive behaviour, as if we have never regulated our way out of such problems before, at least for most members of our society.

These programmers seem to lack a model of how our societies create the structures that defend us. It isn't by giving everyone the first thing they want. No one said "hey, let's pay a lot of tax and work a lot of hours, people seem to find those things fun." We figured out what it took to defend ourselves, to give ourselves stable, rich, sustainable lives in the long enough term to do important human projects like building homes, businesses, and families.
We do NOT need a new ethics, at least, not to just "recognise" that some people have hacked some addictive technical systems together and are creating a host of new problems. We need to continue maintaining human centring, including human responsibility for harms human corporations produce.
- An accessible, engaged, public intellectual who kept journeying to the Houses of Parliament whenever she might help set policy well into old age, despite mobility challenges.
- Someone who understood that creativity is just a small part of the complex that produces intelligence, and intelligence is just a part of the complex that produces a human, and humans owe each other something special.
- Recently-living proof that people have been working on AI since the 1950s, and that not all of the field's founders were American, male engineers. And indeed that AI, being a branch of (only partially-natural) philosophy, has had different US and European traditions, each with significant value. Ancillary to this, that the UK government is presently undervaluing and undermining what made its own AI great.
- A sharp, clear, polite yet forceful, great, female intellect.
The mention of her children and grandchildren in the short announcement of her death by The Argus was the first time I ever knew she must also have found time to be a mother. For the rest of us, she was a powerful, creative, active academic, with clear vision, passion, and a willingness for intellectual risk – indeed, a passion for the most difficult and individually consequential problems.
Neither of those sentences made it past Nature's first edit either. But in the final version, they did let me retain my own academic title, which is unusual for them.
Boden and I role-playing in 2010
Better shot of me...
Demographics and branches of 20C AI
To again quote myself:
I asked a question about what had changed about being an intellectual in the last 100 years: why were there no longer these magazines, and how could there be logic now beyond a super genius like Ramsey?

Blackburn answered that the change was the outcome of the Cambridge school's obsession with logic. While one part of philosophy had gone down the fork into what mattered to people – Sartre and Foucault – the other, in its obsession over the foundations of knowledge, had generated what is creating the era and challenges we live in now – Computer Science.
Leftovers, and a Postscript
I want to thank Ezequiel Di Paolo for surfacing the excellent picture of her I use here (which he posted on his own Facebook page).
I found this quote last week: "One can be, indeed one must strive to become, tough and philosophical concerning destruction and death…But it is not permissible that the authors of devastation should also be innocent. It is the innocence which constitutes the crime." – James Baldwin, "My Dungeon Shook," The Fire Next Time. Among other things, it reminds me of people trying to tell us that AI is and must be responsible for itself. Whether they believe that or not, they are part-authors of any devastation anyone wreaks with AI.
Today [12 or 13 August] I am working through a paper I'm writing on moral agency in governments [finished 15 August] and I suddenly reflected on how much it mattered that the US narrative is dominated not even by particularly good engineers, but by (admittedly decent) programmers who were also entrepreneurs. I came to AI because it was interesting, but more because intelligence was interesting and I just happened to be exceptionally good at programming. So I leveraged my strengths in programming to get into the best university programs I could, despite having mediocre undergraduate grades (though from a great school.)
I often get asked how I knew to go into AI so early. Honestly, on some "imposter syndrome" level I think it is luck, that there are scientists distributed over all sorts of interests. But on another level, what else could possibly be as interesting or important as intelligence – as understanding ourselves?
Boden and I both thought that. Why then did we wind up working so much with governments and governance? Why do China and the EU have AI regulation and the US not? How is it not evident that passing agency into mechanism is of enormous legal and moral concern? I'm not sure, and I'm up against a deadline, but I wanted to remember I have these questions, so here they are. But always in the back of my mind is the fact that the US was founded by fundamentalist whackos, who have always seen the world as black and white (sadly, in more ways than one). Maybe it just seemed self-evident to enough people with enough power that only (white male) humans were really conscious, intelligent, or other codewords for "interesting," and thus AI was only ever really the domain for engineers. Now, even from strictly a domestic US perspective, disproportionate power has shifted to those engineers because of inadequate attention to wealth.
Having now finished the third obituary (17 August), on reflection I think it is more about money and power than fundamentalism or even sexism.
- A fourth: a sweet short one, by one of her undergraduate students, Richard Dallaway.
- Curious, I looked into the history of Dennett and Boden writing about each other. They seemed to admire each other. The first piece I found of Dennett citing Boden was this review of her 1977 book "Artificial Intelligence and Natural Man". Boden first mentioned Dennett about a decade later, but did so with approbation.
- ** There's also a New York Times obituary by Michael Rosenwald that does mention details of her life that I didn't know, for example that she wasn't competent to have played with the chatbots that have emerged since 2022. I haven't worked in the UK since 2020 so hadn't noticed she was no longer attending meetings.
- Another obituary to maybe aspire to, though agree about privilege.