When I (was?) build(ing) AGI

Unusually, there's been a little discussion of the fact that I actually program AI, that my PhD is in systems engineering of human-like AI, and that I actually signed up for what would now be termed an AGI project back in 1993, when I joined Rod Brooks's Cog project. In fact, it's come up three times: once in a magazine article and twice in Twitter threads.

Here's the magazine article, Artificial general intelligence: Are we close, and does it even make sense to try? It's by Will Douglas Heaven, who phoned up and asked the interesting question of why some of the highest-profile tech geniuses actually believe in something as incoherent as AGI. (Cf. on AGI's incoherence my 2018 blogpost Living with AGI, though Will came to me because of my older post on an older usage of the term AGI.) I hope it's OK if I pull a couple of the parts with my quotes for those without access (though you should all subscribe, Tech Review is a great journal! I'm leaving some great lines unrepeated here...)
  • Talking about AGI was often meant to imply that AI had failed, says Joanna Bryson, an AI researcher at the Hertie School in Berlin: “It was the idea that there were people just doing this boring stuff, like machine vision, but we over here—and I was one of them at the time—are still trying to understand human intelligence,” she says. “Strong AI, cognitive science, AGI—these were our different ways of saying, ‘You guys have screwed up; we’re moving forward.’”
  • The hype also gets investors excited. Musk’s money has helped fund real innovation, but when he says that he wants to fund work on existential risk, it makes all researchers talk up their work in terms of far-future threats. “Some of them really believe it; some of them are just after the money and the attention and whatever else,” says Bryson. “And I don’t know if all of them are entirely honest with themselves about which one they are.”
  • Many people who are now critical of AGI flirted with it in their earlier careers. Like Goertzel, Bryson spent several years trying to make an artificial toddler. 
Here's a transcribed Twitter thread by me from 11 October:
It's a little annoying to get name-checked but not cited in a paper, but this is really a great review by Beth Singler, The AI Creation Meme: A Case Study of the New Visibility of Religion in Artificial Intelligence Discourse. For what it's worth, I do have two papers engaging directly with AI & religion. (I've also written one higher-profile article on its scientific study, The role for simulations in theory construction for the social sciences: case studies concerning Divergent Modes of Religiosity, open access for both article and discussion.) There's Building Persons is a Choice, a commentary on a religion & AI target article by Anne Foerst that it's too bad Beth missed (but, unlike my higher-profile article, it's hard to dig out!). The other one (also obscure) is a credit to @UniofBath's excellent Centre for Death & Society. It's called Internet memory and life after death, on AI & (im)mortality (open access PDF).

Contra what some say about me (Singler cites one such claim, uncritically), I don't think humanity is the apex of anything EXCEPT (sort of) itself – of humanity. I'm fully behind broadening our moral concern to the sustainability of the ecosystem, which is a nice but I think ad hoc thread of posthumanism. But my claim wrt AI is that it's incoherent to think we can address any widely-held values by replacing ourselves with artefacts. I came to AI as one of those who wanted to build a better friend/collaborator/offspring. I still feel that emotional draw, but my research found it to be incoherent. Worse, I see a correlation between such desires & the dismissal of the needs & rights of the full range of humanity. For a long time I thought the various views on the project of human-like AI could and should coexist, but then I saw anthropomorphism was being used to disrupt regulation not only of AI, but of governance, democracy, finance, etc. So I'm being militant for now. I entirely intend to retire my militant stance when the war against regulation & democracy is over. I hope I end my research career back in biology / behavioural ecology. I trust philosophy will go on discussing what it means to be human for thousands of years to come.

And here's a transcribed Twitter thread by me from 12 October:

[10 am] It occurred to me this morning that today is the day Rod Brooks promised me would come in 1994. Thanks Jen Haensel! (Hadn't expected it to come while I was working in a governance school though.)

[7:30pm] Well, I’m late to dinner, but there’s a humanoid robot running some version of my AI software — Behavior Oriented Design — in my office!

[Picture: a Pepper robot on the left & a postdoc on a desktop computer on the right]

On my desktop is Jen Haensel helping out on her last day working on the @AXAResearchFund #anthropomorphism project before she goes to her new postdoc at Stanford. Thanks also @stormUK & the three @UniofBath dissertation students who helped write the code (if you guys have web pages, please comment on this and I'll link them :-). 

I wrote Behavior Oriented Design (BOD) to address the problem of programming Cog (an MIT #agi project run by @rodneyabrooks & @las21, briefly mentioned recently by @strwbilly in TechReview [see above]). I thought Cog's biggest problem would be attributing bugs between multiple PhD student authors, given we'd be using Rod's subsumption architecture (SA). I had previous experience debugging my own SA code at @edinburghuni, and had industry experience with multi-programmer projects.

BOD (or really its action selection / prioritising representation, POSH plans) has had the most impact in the games industry, where it was seminal to Behavior Trees. But BOD is mostly about agile allocation of programmer time between writing conventional code (usually quicker for simple problems) & enabling the AI system to solve its own problems in real time. In games, a lot of the character AI development is done by different teams: priorities that personify a character are scripted by writers, while detailed visual behaviours are crafted by graphics teams and visual artists. For robots, at least the software aspects of AI are ordinarily done by one team of developers, often using ML for part of the work. The physical interface to the environment is handled by mechanical engineers, which is more like the decomposition of problems we animals experience. Hopefully the BOD Pepper will help people understand AI better, and build better (and more transparent!) AI.
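To give a flavour of the prioritised action selection that POSH plans provide, here's a minimal sketch in Python. Everything in it (Drive, DriveCollection, the toy senses and actions) is made up for illustration; it isn't the actual POSH/BOD code or anything from the stormuk/pepper repository, just the core idea of checking prioritised drives each tick and letting the highest-priority triggered one act.

```python
# Minimal, illustrative sketch of POSH-style prioritised action selection.
# All names here are hypothetical -- this is not the real POSH/BOD code.

from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class Drive:
    """One prioritised element of a drive collection."""
    name: str
    trigger: Callable[[], bool]   # sense: should this drive fire right now?
    action: Callable[[], None]    # act: what to do if it does


class DriveCollection:
    """Each tick, check drives in priority order; the first triggered one acts."""
    def __init__(self, drives: List[Drive]):
        self.drives = drives      # ordered: index 0 is the highest priority

    def tick(self) -> Optional[str]:
        for drive in self.drives:
            if drive.trigger():
                drive.action()
                return drive.name
        return None               # nothing triggered on this tick


# Toy example: a robot that prefers recharging over obstacle avoidance,
# and obstacle avoidance over greeting people.
battery_low = lambda: False
obstacle_near = lambda: True
person_visible = lambda: True

robot = DriveCollection([
    Drive("recharge", battery_low,    lambda: print("docking to recharge")),
    Drive("avoid",    obstacle_near,  lambda: print("steering around the obstacle")),
    Drive("greet",    person_visible, lambda: print("saying hello")),
])

print(robot.tick())  # -> runs the obstacle action and returns "avoid"
```

In a full POSH plan the drives decompose further into competences and action patterns, and the scheduling is rather more sophisticated, but the priority-ordered sense/act loop above is the basic idea.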

Back to Rod's promise: he said we'd spend 6 months prototyping Cog's hardware, & then have a company fabricate one robot for each grad student, so we'd each have our own on our desk in 1994! It was a great idea that again got bogged down in preferring anthropomorphism over simplicity. If Cog ever got to the point where PhD students were finger-pointing over whose code had bugs, I was no longer there to see it. But there are a LOT of Peppers out there, so... https://github.com/stormuk/pepper

As always with intelligence, there's still a lot to do, and I have no idea when I will do it. The anthropomorphism project is on hold until we can interact with human subjects – which we could have done in Berlin these past few months, but the robots were quarantined in Bath. Wednesday, the day of that second Twitter thread, was when we finally got the robots to Berlin. Now, from Monday, Berlin universities are at a higher level of pandemic alert, so we again can't have guests at the Hertie School, probably until spring. But maybe I will do a little programming over Christmas :-)
