The use of automated decision-making tools and distributed ledger technologies to improve quality and quantity of information to consumers
A statement by Joanna J. Bryson to the European Parliament's Committee on the Internal Market and Consumer Protection (IMCO) for its meeting of 27 September 2021; you can watch it all here.
Bold face indicates late additions.
I am going to take this question in three parts. First I will address the question of trust and transparency generally, with emphasis on the digital era. Then I will briefly address what each of automated decision making (ADM, or more precisely, artificial intelligence, AI) and distributed ledger technology (DLT) adds to this question.
We now live in a digital era, and to be honest I don't think this fact receives enough genuine celebration. More people have access to more information and other forms of empowerment than ever before. This situation is sometimes challenging, but I am happy on this panel to be focussing on one of the bright sides: the capacity to serve consumers veridical information such that they can make more informed choices.
When we think about transparency, we need to realise that no one, and no thing, can have the time and computational capacity to know everything. So access to information must be at least initially abstracted. Perhaps the most transparent form of abstraction is hierarchy: if there is something you wish to know more about, you can "drill down", on the internet with a simple click. So, for example, if you want to understand why a robot is doing what it's doing, or what the ingredients in a food product are, you may first find a record for the robot or food item, then find a list of its components, then examine more information about one of those components, e.g. its supply chain.
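As an illustration of the drill-down pattern described above, here is a minimal sketch in Python. The record format, product names, and the `drill_down` helper are all invented for the example; real transparency registries would of course use their own schemas.

```python
# A hypothetical nested record: each level can be inspected on demand,
# so the consumer only sees the detail they ask for.
food_item = {
    "name": "chocolate bar",
    "ingredients": [
        {"name": "cocoa", "supply_chain": ["farm co-op", "importer", "roaster"]},
        {"name": "sugar", "supply_chain": ["beet farm", "refinery"]},
    ],
}

def drill_down(record, *path):
    """Follow a path of keys/indices into a nested record,
    mimicking successive clicks that reveal more detail."""
    for step in path:
        record = record[step]
    return record

# Each "click" abstracts away everything not asked about:
print(drill_down(food_item, "name"))                    # chocolate bar
print(drill_down(food_item, "ingredients", 0, "name"))  # cocoa
print(drill_down(food_item, "ingredients", 0, "supply_chain"))
```

The point of the sketch is only that hierarchy makes abstraction transparent: the summary is always a starting point from which finer detail remains reachable.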
Whilst no one individual has time to read or understand all of human knowledge, the more of the eight billion of us who have access to the Internet (and to good translation technology), the more likely it is that the light of transparency can uncover problems. Having said that, it is also both more efficient and better, from the perspectives of trust, accountability, and fairness, to have paid employees of an executive force who audit this material proactively, not only in response to consumer complaints. Lately I have seen too much use of "crowd sourcing" to avoid hard questions of government accountability and revenue. Transparency is next to useless without accountability, and accountability must be enforced. That is a large part of what governments are for; it is essential to national security.
AI (as I use the term here) refers to any software system that produces an action in response to context. And may I say, with respect to Mr Gozi's earlier question about transparency, I urge the EP to clarify in both the AIA and the DSA that software is a manufactured product for which both manufacturers and also owners/operators should be held to account in standard ways. We frequently hear that the AIA only alters regulation for hazardous AI; please make explicit under which rules the rest of AI should be governed. In my opinion, this would go a long way towards motivating transparency, in order to demonstrate due diligence.

Returning to AI: with such a broad and effective definition as the one I offered, we can avoid questions of whether a decision is made, or whether machine learning is used to produce the association between context and action. Neither of these is particularly relevant to the governance matter at hand. The point is that someone has produced a software system that replaces or enhances simple mechanisms such as hierarchy, so that a consumer can hopefully find the information they need more quickly. If the system works well, the benefit is that more people can find what they need to know effectively and efficiently. Whether it works well or badly, AI systems are another category of things that need to be documented and audited, so we can verify that they are working accurately and fairly, and have been produced with the best of intentions and effects, following due diligence and best practice. Fortunately, maintaining digital records for the production and operation of digital artefacts is actually fairly inexpensive, and is indeed part of good practice, which hopefully will become an enforced obligation under EU law with acts such as the DSA and AIA.
In my opinion there should also be some act that clarifies, formally and legally, that all software is a manufactured product (even where it is used to offer a service) and as such subject to ordinary product and liability law.
Of course, any digital artefact is only as reliable as its cybersecurity. This is where distributed ledger technology comes in. I want to be very clear: records preserved with such technology are no more true or useful than if they were not so preserved. There has to be an auditable process to ensure their validity. But such technology can serve a role in ensuring the permanence and cybersecurity of the records essential to transforming transparency into accountability and, ultimately, deserved societal trust. However, in contrast to Mr. Scharmann, I am very skeptical that the libertarian vision of a completely unsurveilled, and therefore opaque and ungovernable, dataspace should be our goal. I think we've seen enough of that with cryptocurrencies. We should use DLT only in the service of transparency, including government audits. We are not necessarily safer when we are "social" only amongst an odd elite.
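The distinction drawn above, that a ledger can secure the integrity of records without guaranteeing their truth, can be illustrated with a toy example. The sketch below is not any particular DLT; it is a minimal hash-chained log (all names hypothetical) showing that tampering after the fact is detectable, while nothing stops a false statement from being recorded in the first place.

```python
import hashlib
import json

def append(chain, record):
    """Append a record, linking it to the previous entry's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    digest = hashlib.sha256(
        json.dumps({"record": record, "prev": prev}, sort_keys=True).encode()
    ).hexdigest()
    chain.append({"record": record, "prev": prev, "hash": digest})

def verify(chain):
    """Recompute every link; any retroactive edit breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        expected = hashlib.sha256(
            json.dumps({"record": entry["record"], "prev": prev},
                       sort_keys=True).encode()
        ).hexdigest()
        if entry["hash"] != expected or entry["prev"] != prev:
            return False
        prev = entry["hash"]
    return True

log = []
append(log, "supplier X audited, passed")  # may or may not be true!
append(log, "batch 42 shipped")
assert verify(log)                         # integrity holds
log[0]["record"] = "supplier X audited, failed"
assert not verify(log)                     # tampering is detected
```

Note that the first `append` is accepted regardless of whether the audit really happened: integrity checking begins only after recording, which is why an auditable process upstream of the ledger remains essential.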
Responses to Q&A
- The problem of governing, particularly of defending at least the rights of EU citizens and residents, in a context of powerful transnational corporate entities is a serious one, but it is not limited to the digital sphere. We have this problem also in finance, pharma, petrochemicals, consultancies, and so forth. I am actually skeptical of the DMA (which no one was mentioning), and of why an exception is being made for the digital sphere rather than addressing the harder questions of transnational regulation, including (but not limited to) redistribution and infrastructure investment.
- The digital divide is an issue, but actually for many elderly people AI makes the Internet more accessible, not less. As typing and seeing become more difficult, speech technology opens a world of opportunities. However, this is only true for those able to speak, and those who speak languages that are not only economically viable to support, but for which enough data exists to produce such technology. If, for example, there are languages the EP doesn't bother to translate its statements into, where will AI firms source their translation materials? Digital literacy is not that different from ordinary literacy in this respect. Here again the problems of AI are broadly the same as historic issues for minority rights. AI changes the landscape somewhat, but not entirely and not fundamentally. We cannot demand omniscience from AI; omniscience is not computationally tractable, and would anyway violate other fundamental rights and freedoms if it were.
- How do we ensure there are no unfair asymmetries in data and AI? See the previous question, but here are three suggestions for achieving continuous improvement:
- Capture and disseminate best practice, just as with any other manufacturing technique.
- Allow citizens to report, and where possible even address, asymmetries.
- Create (and fund) agencies to proactively "sniff out" such problems.
- To reiterate, in response to a question: wherever possible, use standard product law and product liability. This includes acknowledging the responsibilities of the owner/operator, and again the standard tensions about how much product misuse can realistically be defended against. AI does not suddenly produce a system that can magically defend itself. But digital technology does change the associated costs, so it may shift some of the traditional tradeoffs between manufacturer and owner/operator responsibility. These costs will, however, be determined per product.
- Similarly, DLT will not magically improve auditability. It is just one mechanism to make it more likely that records, once recorded, are secure and stable; it does nothing to ensure that what those records record is veridical. It's just a better kind of paper to write on.
- I'm afraid I haven't much to add on the defense of culture. Digital technology makes replication and transmission unbelievably cheap, and I don't really believe that DLT alters this. My only recommendation is that we emphasise live experiences such as performances and collective activities, which are anyway more useful for building a localised sense of community, which is important to security and well-being.
Hybrid testimony FTW! Sorry that I couldn't capture myself, as I was busy while I was speaking... the committee's video doesn't show how the room had us on a screen while we spoke.