For some months now (starting on 13 April 2021) I've been giving a talk on the limits of transparency, inspired both by my experiences with and observations of the Global Partnership on AI (I'm one of Germany's original nine nominees) and Google (both the recent Stochastic Parrots / Timnit Gebru / M Mitchell problem, and of course ATEAC).
What I've said is that there are at least three limits on transparency:
- Combinatorics. For some reason, this week Rod Brooks decided to call this "information physics", but computer science is also a legitimate science, and in computer science we normally call it combinatorics. Over the last decade I've opened a lot of AI policy talks by saying that computation is a physical process taking time, space, and energy, and therefore we can never know everything, with or without AI (see for example minute 17 of my 13 April talk – I've been presenting that particular slide for years). There's no question that this is a limit on transparency. It's a scientific and mathematical fact: we can't know everything, so what we do understand is always an approximation of reality.
- Political polarisation. Since last December I've had a published scientific paper on why and when political polarisation covaries with inequality, and again I've been working through these ideas here for years, e.g. in my 2016 post on truth in the information age. Basically, when we are more polarised we are more concerned with signalling our identity, probably because we feel threatened and know we can't make it on our own. This may make it psychologically harder for us to actually think and see the truth. To be honest, this reason is fairly speculative.
- Mutually exclusive goals. This comes back to why leading AI and communication companies that employ leading minds still can't seem to reach agreements or communicate decisions effectively. Given that all of us are dealing with abstracted versions of reality, what is the basis of that abstraction? Basically, we compress information around the goals we hold. If two people are hired (or otherwise deployed) with opposing goals, they may find each other incomprehensible, however smart they are. How best to resolve the problem of multiple, conflicting goals in AI was actually one of the core deliverables of my PhD.
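The combinatorics point above can be made concrete with a few lines of arithmetic. This is my own minimal sketch, not taken from the talk: even a tiny system of a few hundred binary components has more joint states than a common order-of-magnitude estimate of atoms in the observable universe, so exhaustively "knowing" it is physically off the table.

```python
# Minimal sketch: state spaces outgrow anything physically enumerable.

def possible_states(n_components: int, states_each: int = 2) -> int:
    """Number of joint states of n independent components."""
    return states_each ** n_components

# Common order-of-magnitude estimate, ~10^80 atoms.
ATOMS_IN_OBSERVABLE_UNIVERSE = 10 ** 80

# Around 266 binary components is already enough to exceed that count.
assert possible_states(265) < ATOMS_IN_OBSERVABLE_UNIVERSE
assert possible_states(266) > ATOMS_IN_OBSERVABLE_UNIVERSE
```

The exact crossover number doesn't matter; the point is that exhaustive knowledge of even modest systems is a physical impossibility, which is why all understanding is approximation.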
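The goal-driven compression in the last bullet can also be sketched. This is a hypothetical illustration of mine (the field names and goals are invented, not from the post): two observers summarise the same record, each keeping only the fields relevant to their own goal, and end up with abstractions that share nothing at all.

```python
# Hypothetical sketch: compression relative to a goal.
record = {
    "cost": 120_000,    # relevant to a budget-focused goal
    "headcount": 4,     # relevant to a budget-focused goal
    "latency_ms": 35,   # relevant to a performance-focused goal
    "error_rate": 0.02, # relevant to a performance-focused goal
}

# Which fields each goal treats as worth keeping (an invented mapping).
GOAL_FIELDS = {
    "minimise_cost": {"cost", "headcount"},
    "maximise_performance": {"latency_ms", "error_rate"},
}

def compress(data: dict, goal: str) -> dict:
    """Keep only the fields relevant to the given goal."""
    return {k: v for k, v in data.items() if k in GOAL_FIELDS[goal]}

view_a = compress(record, "minimise_cost")
view_b = compress(record, "maximise_performance")

# The two abstractions of the same reality share no fields at all:
# each observer's summary is unintelligible in the other's terms.
assert set(view_a).isdisjoint(view_b)
```

With opposing goals the two "compressed" worldviews are literally disjoint, which is one way two smart, well-informed people can find each other incomprehensible.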
- Veridical information presented comprehensibly.
- A population able to recognise and at least partially comprehend such presentation.
- whether the damage was intentional and if so on whose part (the funder, the developing organisation, one individual developer, a hacker, an operator, the operating organisation?), or
- whether it was negligent and if so at what stage (development or operation?)