[Image: Florentina Pakosta exhibit at the Sprengel Museum in Hanover]
In December two smart bureaucrats asked me to come up with a "one pager" on AI regulation. It's not really one page, and for some reason it's structured like a novel. I'm like that sometimes.
This will probably come out again in a more academic form once or twice in 2019. Happy New Year, may we get our capacity to govern back together sooner rather than later.
- The most important thing to realise about AI policy is that AI changes everything and nothing.
- Everything, because all of human conscious behaviour (and much of what is implicit) is based on our computational advantages. These advantages are:
- being highly cognitive (good at computing new intelligence individually), and
- being very good at communication (being able to share and consolidate best practice).
- Nothing, because the most fundamental problems are still with us: sustainability (how big a pie can we grow?) and inequality (how do we share the pie?). Do we invest in the present or the future? In the individual, the family, the region, the country, the world? Good answers aren’t always obvious. For example, empirically the ideal Gini coefficient is about .27; intuitively, this means you need to reward good efforts and fund good ideas, but also to enable society more broadly and avoid creating undue separations and resentment.
- An example of how AI does not change fundamental problems: do we make teachers twice as effective or half as expensive? Technology may enable either, but the policy problem is fundamental.
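To make the Gini figure above concrete, here is a minimal sketch (my illustration, not from the original text) of how the coefficient is computed: the mean absolute difference between all pairs of incomes, normalised by twice the mean, so 0 means perfect equality and values approaching 1 mean maximal inequality.

```python
def gini(incomes):
    """Gini coefficient of a list of (positive) incomes.

    Computed as the mean absolute difference over all ordered pairs,
    divided by twice the mean income. 0 = perfect equality;
    values near 1 = one person holds nearly everything.
    """
    n = len(incomes)
    mean = sum(incomes) / n
    diff_sum = sum(abs(x - y) for x in incomes for y in incomes)
    return diff_sum / (2 * n * n * mean)
```

For instance, `gini([1, 1, 1, 1])` is 0, while a society where one of four people holds all income, `gini([0, 0, 0, 1])`, scores 0.75 — well above the .27 the text cites as ideal.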
- The most essential thing to understand about AI is that intelligence is just the capacity to do the right thing at the right time. It is a form of computation. Computation is a systematic transformation of information: a physical process requiring time, space (memory), and energy. Intelligence is that computation which transforms information about the state of the world (perception) into action.
- The second most essential thing to understand about AI is that it is intelligence that was built deliberately; it is an artefact. Nothing about intelligence changes the fact that constructing and operating an artefact are both deliberate actions for which humans and human agencies must maintain responsibility. Otherwise, some humans or human agencies will behave badly.
- Part of the reason we must hold humans accountable is that only humans can be dissuaded by human justice. We often mistakenly think that justice is largely about compensation, but in fact not every wrong can be righted, so wrongs must also be prevented: by attaching disproportionate penalties to those wrongs that are successfully prosecuted.
- Note in particular that the benefits of legal personhood even for some human organisations are outweighed by the costs of the corruption it facilitates. We often call corporations in this situation “shell companies.” A strictly AI legal person with no humans to be dissuaded by its malfeasance would be the ultimate shell company. [citation]
- Fortunately, it is perfectly possible, in fact fairly easy, to maintain accountability for AI systems.
- This is not to say that it is easy to maintain accountability for all AI systems. Rather, it is to say that there are procedures we should mandate for legal commercial products and AI system components of legal actors.
- Good practice is already demonstrated in the automotive arena. That’s why, within a week of every fatal accident involving an autonomous car, the entire planet has known exactly what the car saw, how it processed what it saw, why it was set up to process it that way, and so on. This is because the automotive industry is already well regulated.
- Good practice in AI comes down to logging -- that is, maintaining records about procedures followed in the development and operation of the AI system.
- We have to be able to trust these logs. This means the records and the systems they reflect must be cyber secure.
- Cybersecurity and AI are inseparable. You need AI for cybersecurity, and cybersecurity for trustworthy, accountable AI. You cannot treat one of these as only a security problem or weapons system, and the other as only an economic opportunity. They are inseparable.
- Logs should be kept of:
- Software development. Who changed what line of code, when, and why. This is standard in software engineering; version-control tools exist for exactly this.
- Software libraries and other such resources used, and their provenance.
- Data libraries used for machine learning, and their provenance.
- Machine learning training procedures: who did what, when, and why.
- Machine learning training parameters: who changed what, when, and why.
- Testing procedures: how they were determined, how they were/are conducted, what the outcomes were/are.
- Some testing should be done prior to release, some should be done continuously during operation. That’s why there are two tenses on the previous point.
- What the acting system perceives, what actions it takes, and why.
- Logs of live systems will include a great deal of personal information concerning not only intended users but also others in the environment of the system, so such logs should be deleted routinely, on as short a time frame as is safely possible. Think again of the example of the news stories that follow autonomous car fatalities.
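The last logging requirement — recording what the system perceives, what it does, and why, in a form we can trust — can be sketched in a few lines. This is my illustration, not anything from the original; the `DecisionLog` class and its field names are hypothetical. It chains each entry to its predecessor with a SHA-256 hash, so tampering with any record invalidates every hash after it, which is one simple way logs can be made trustworthy rather than merely kept.

```python
import hashlib
import json
import time

def _entry_hash(prev_hash, record):
    """Hash the previous link together with the record's canonical JSON."""
    payload = prev_hash + json.dumps(record, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

class DecisionLog:
    """Append-only log of (perception, action, reason) records.

    Each entry carries a hash chained to its predecessor, so altering
    any past entry breaks verification of the whole chain after it.
    """

    def __init__(self):
        self.entries = []  # list of (record, hash) pairs

    def append(self, perception, action, reason):
        record = {
            "time": time.time(),
            "perception": perception,
            "action": action,
            "reason": reason,
        }
        prev = self.entries[-1][1] if self.entries else "genesis"
        self.entries.append((record, _entry_hash(prev, record)))

    def verify(self):
        """Recompute the chain; returns False if any entry was altered."""
        prev = "genesis"
        for record, stored in self.entries:
            if _entry_hash(prev, record) != stored:
                return False
            prev = stored
        return True
```

A real deployment would also need the routine deletion the text calls for, and protection of the chain head itself — this sketch only shows why "logging" and "cybersecurity" are the same conversation.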
- The point of accountability is that the producers of AI systems should be able to demonstrate due diligence and conformity to the state of the art for ethics-related procedures.
- Note that logging provides the capacity for explanation at the level of abstraction of how the machine was built and how it operates. This is considered a sufficient level of explanation when we audit, for example, banks. We don’t ask about the synaptic connections inside bankers’ brains; we ask to see the accounts.
- One fundamental difference of the present information age, not mentioned enough above, is privacy.
- We no longer have anonymity through obscurity. We have to defend personal privacy like we defend private property; not because the state cannot get into a house, but because it would be wrong for a state to be there, and would lead to a rejection of that state.
- Without personal privacy, individuals will be forced to conform. Conformity leads to fragility, a collapse of innovation. We need variation in society to have a robust society. This is Fisher’s fundamental theorem of natural selection (Fisher 1930).
- Of course, too much variation leads to entropy and a loss of the “good tricks” already discovered. This is the other extreme outcome of Fisher’s Fundamental Theorem.
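For readers who want the theorem the two bullets above lean on stated compactly, here it is in standard notation (mine, not the author's): the rate of increase in mean fitness due to selection equals the additive genetic variance in fitness, so zero variation means zero capacity to adapt.

```latex
% Fisher's fundamental theorem of natural selection (Fisher 1930):
% \bar{w} is mean fitness, \sigma^2_A(w) the additive genetic
% variance in fitness. No variance, no improvement.
\[
  \frac{d\bar{w}}{dt} \;=\; \frac{\sigma^2_{A}(w)}{\bar{w}}
\]
```

The two extremes in the text map directly onto the variance term: conformity drives it toward zero (no adaptation), while unbounded variation discards the fitness already accumulated.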
[Sorry, there was a typo in #5 for six months that said "energy" redundantly where it should have said "information", now highlighted in red. Since March 2019 there is also a publicly available, longer, fully cited version of most of the assertions here.]