This is about my papers concerning the propensity of people using AI to actively avoid responsibility. There's a lot of discussion of "responsibility gaps" as if AI were a newly discovered disease or species with fixed outcomes we have to uncover, rather than a technology that can be designed and used in right and wrong ways. This could have been tweets, toots, or skypes, but I hate long threads.
- Although my most-cited paper allegedly in AI ethics is about AI bias, that's not actually what I work on. In fact, that paper came out of my research programme on the semantics and origins of language. I for one entirely agreed with the Science editor characterising that paper as Cognitive Science, though Arvind and Aylin were surprised, and I think Arvind is still a little confused about that.
- My original concerns were vague – it seemed evidently bad that people were mistaking artefacts for people, and I was a keen PhD student, so I just wrote about it. I wrote a series of three papers, the first two of which almost no one read; then I thought of a title, and people at least read the title, though often not the article. You can find those early conference papers and book chapters here, particularly in sections 1 & 2.
- My AI Ethics research programme has basically taken this shape:
- Just asserting that AI isn't people or humanlike: 1996–2010 (still some).
- Demystifying consciousness and emotions and intelligence: 2000–2012 (still some).
- Helping regulators understand devops and transparency: 2010–present.
- Helping demonstrate harms from corporate power / market concentration: 2020–present.
- The best article I wrote about this from a legal perspective (because coauthors) is: Bryson, Joanna J., Mihailis E. Diamantis, and Thomas D. Grant. "Of, for, and by the people: the legal lacuna of synthetic persons." Artificial Intelligence and Law 25, no. 3 (2017): 273–291. This is the one that says "AI legal personality would be the ultimate shell company." I often say it's the most important paper I wrote in 2017, even though it gets a tenth of the citations of the semantics one.
- The best article I've written about this from a philosophical perspective (because coauthors) is: Evans, Katie D., Scott A. Robbins, and Joanna J. Bryson. "Do we collaborate with what we design?" Topics in Cognitive Science 17, no. 2 (2025): 392–411. This one is a bit more on the individual / corporate, labour / capital end of deception.
- The most impactful thing I directly wrote about this was probably the UK EPSRC/AHRC Principles of Robotics, which shifted Asimov's 3 laws (which make robots responsible) into 5: three about developer responsibility and two about owner/operator responsibility. I say this is impactful because the OECD Principles of AI are almost identical, and they in turn heavily influenced the UNESCO Recommendation on AI Ethics and the EU AI Act. The UK published the Principles online in 2011 but took them down a couple of years ago; fortunately, a fifth-anniversary special issue printed the full document: Boden, M., Bryson, J., Caldwell, D., Dautenhahn, K., Edwards, L., Kember, S., … Winfield, A. (2017). Principles of robotics: regulating robots in the real world. Connection Science, 29(2), 124–129. https://doi.org/10.1080/09540091.2016.1271400
- My most-read blogpost (by an order of magnitude) also points out that I'm worried all the fuss about our discovery that bias gets systematically uploaded ignores the fact that some people actually, culpably choose to write biased AI: Three very different sources of bias in AI, and how to fix them
- Less read but still cool (more sciency, about what simply changes rather than about malign intent): Bryson, Joanna J. "Artificial intelligence and pro-social behaviour." In Collective agency and cooperation in natural and artificial systems: Explanation, implementation and simulation, pp. 281–306. Cham: Springer International Publishing, 2015.