Still under construction, but getting there.
Basically it's a good document heading in the right direction. I only clarified a few things.
A brief summary of highlights with links to blogposts where I've explained them in more detail.
- Explanation is actually easy / AI is never necessarily opaque
- Humans are always responsible, AI can only be transparent.
- AI isn't really some weird byproduct of data (the Rumpelstilzchen fallacy). Computation and cybersecurity are more essential than data storage for any length of time.
- AI is produced by programmers, so the EU needs to add program code, architecture documents and specifications to the list of documents they require companies to be able to produce for inspection.
And if you like the below post, you might want to see what I previously told the EU's HLEG about their draft document, my formal Testimony for the House of Lords Select Committee on Artificial Intelligence, or my testimony for the UK's All Party Parliamentary Group on Future Generations on AI and Future Generations.
Actual Submissions
- Here's the draft document we were all commenting on (with my annotations).
- Here's my answer to their questionnaire. The below was an attachment.
Contribution ID: bd94e0aa-1e99-4a9b-bf12-3cf3a78a4470
EU AI Whitepaper Consultation, June 2020
Joanna J. Bryson
Professor of Technology and Ethics, Hertie School, Berlin
This is an excellent document, and you provided an excellent questionnaire. Thank you for bringing your competence to this important task.
Overarching Concern
The world has changed since this document was commissioned. AI is a pervasive technology, and it is not always clear which artefacts contain it and which do not. We should not waste time arguing about this. What is clear is that the digital revolution / ICT have changed the nature of governance, of property, and the role of nations, particularly the extent of their interdependence. My belief is that the EU is presently leading the world in terms of coordinated effort across hundreds of millions of people. However, I very much hope this will change and newer, better models will be discovered by us and by the other peoples of the world. We need to be thinking always of the future.
There’s no question that the new age we are entering is typified by AI, but there is considerable question about exactly what is changing and how. A lot of what we characterise as problems of AI are really problems of the digital domain, underregulation of transnational commercial entities, and finance. We cannot solve these problems independently.
The massive investment here is welcome, but the best software is always written by those who are actually using that software. We should be very open to directing the largesse governments are willing to spend on something already identified as critical to the economy and to security towards the immediate problems at hand -- recovering the economy from the shock of lockdown, and repurposing it towards a sustainable future in light of the climate crisis and related crises such as migration and health.
The EU is often a beacon leading the world, and should be looking to cooperate with everyone while keeping our own house strongly functional. But I think we should be particularly focussed on cooperation with Africa. Africa is in our time zones, has enormous talent and resources, and is particularly well poised for sustainable technology such as the low-energy computing algorithms mentioned a few times in your document. More generally, we need to facilitate agile coalitioning to allow us to consolidate the support of the global majorities benefited by the rule of law, and work around obstructions like temporary or localised corruption. AI and the digital should facilitate all this.
As I mentioned several times in the questionnaire, the biggest problem you have not properly addressed (besides transnational regulation and revenue dispersal) is liability for the manufacture of products that provide a service. This is a new category. More generally, EU law and processes do not deal well with recognising contemporary market domination. Market domination does not only create inelastic demand relative to changes in price. It can also create inelastic supply, by vendors unable to seek better prices elsewhere, or indeed by EU citizens unable to find similar services elsewhere and so coerced to give their time and data to a limited range of companies.
The document as it stands unfortunately still propagates a few myths, notably that AI is something other than software (why no mention of code in the list of documentation required?), and that AI is necessarily opaque. Systems containing AI are no more complex or incomprehensible than hospitals or indeed governments, but we still manage to regulate and inspect these institutions. In contrast to traditional human institutions, every aspect of the development of a digital artefact is particularly easily recorded digitally, and therefore can be designed for transparency with accountability in mind. This needs to be understood and enforced.
Detailed corrections to the document:
P1
"AI entails risks" -- no, it affords them. The exception is cybersecurity risk, which is entailed by any digital artefact.
“This is a chance for Europe…” I agree with this paragraph, but would go further. This is an excellent opportunity to coordinate strategy facilitating economic success and the security of member nations.
P2
Data are not as important to AI as this says. Computing power is, though.
AI is pervasively and widely used. “Big tech” has consolidated massive computational resources which need replication / “airbussing”, and which are as much about cybersecurity as AI. The amount of AI is NOT directly proportional to the amount of data; much data is redundant except for surveillance purposes (or even for those).
"Given the major impact…" This and the paragraph following are superb.
P3
“European data pools enabling trustworthy AI”... AI does not necessarily “inherit” values directly from data. If we want moral application of AI, we must ensure that it transparently does the will of a moral human or moral human organisation.
Trust is a human relationship; only humans can be held to account and therefore afforded liberty. To trust the people behind institutions requires transparency for accountability, and social equity and mobility, so that any individual can know what they would need to do to learn enough to take further advantage of that transparency.
P4
I’m skeptical about some of the claims about neuromorphic and quantum computing. Quantum computing looks to be exceedingly energy-consuming, to make up for the amount of time and space it saves. Computation is a physical process; it cannot be cheated. Unless we really do achieve energy independence, quantum is likely to be wielded only by very large, rich organisations like states.
P5
€20B requires that we not only facilitate AI but solve major social problems on the way, specifically reinvention of the economy around agile and sustainable industries.
“The coordinated plan could also address…” -- no, it MUST also address…
P6
“Europe cannot afford to maintain…” Diversity is power! Diversity is strength and innovation! It is costly in terms of coordination, but AI helps with coordination. Do not trust pressures to consolidate. Rather, we need to continue leading and innovating on heterogeneous cooperation.
“Where Europe has the potential to become a global champion…” Due to ICT and improved communication, education, and nutrition, the world is increasingly agile and dynamic and fair (global inequality has been declining for two decades up until the pandemic). Do not assume sector advantages or disadvantages are permanent.
“Initiative could also include the support…” -- again, should, not could.
P7
It’s true that programming, like sport, has superstars. But also like sport, the combined impact of what happens in local communities and schools probably matters far more to overall wellbeing than the few superstars with familiar names.
“It will be important to ensure that SMEs can access and use EU AI and computational resources.” They can and already do use AI from other countries.
Besides my earlier recommendation about airbussing local tech sector equivalents, another possibility would be encouraging the existing transnational corporations operating in Europe to disaggregate and open local corporations based on their expertise in establishing full AI infrastructure. This requires reducing the market dominance / power advantages presently enjoyed by a few elite institutions, which lead them to refuse disaggregation that would substantially benefit them financially, e.g. YouTube outside Google.
One of the biggest problems is transnational finance, which dwarfs the tech sector. We need to put something together that allows us to bypass NYC and London for finance. I strongly recommend Katharina Pistor’s “The Code of Capital”, which is largely about law.
P8
“Without data, the development of AI is limited in some ways.” Certainly not impossible!!
“The enormous volume of new data generated daily constitutes an opportunity for Europe(...)” -- and it guarantees that our policies restricting data use cannot cause lasting damage.
Again, I very much like section H, though in the first paragraph you might say a bit more about the role of the EU / nations in enforcement.
P9
I also love the HLEG list :-)
P10
I don’t like the proposed categorisation of risk. This is a smooth continuum that can change quickly. I propose that corporations be held strictly liable for the damage they cause, and that the EU provide instruments by which a corporation can demonstrate due diligence etc. and limit its liability. Then corporations can decide for themselves the level of investment they need to make, and monitor this for changes. Of course the EU regulators should also look proactively for misjudgements, but the onus should be on the corporations.
P11
“which may sometimes be difficult to understand and to effectively challenge…” -- no, AI properly applied makes us explicitly specify the processes of government, making them more clear, not less. Again, we must insist on transparency and accountability.
P12
“Black box effect” -- again, all this is fixable with adequate documentation of the procedures by which AI was developed, and adequate regulatory motivation for producing clear, transparent systems. This also makes the systems easier to maintain and extend, so would be a boon for any industry doing business in the EU.
“Risks for safety”... nothing about AI makes these paragraphs different than if there were a problem with brakes. No sympathy here. There are plenty of similar risks and they can be priced.
P13 box at the top: see just above.
P14
First paragraph is great.
First bullet is terrible. A lack of transparency is negligent. AI is not necessarily opaque; not all AI is opaque. This is laziness asking for government handouts.
“Limitations” -- I am not an expert on legal liability, but not having liability for services, particularly when the services are produced by a manufactured artefact, seems like a real problem to me -- a major loophole.
“Uncertainty”... “including all components e.g. AI systems” You should be talking about software in general here, not just AI.
“Changes”... again, this is true of software in general. Now is the time to fix this; AI is just drawing your attention to an underregulated sector.
P15
“Collaboration with humanoid robots” ??? Why would we present tools as collaborators? But this problem applies not just to robots. Some people are suggesting that all natural language systems are changing the families that use them and the way they communicate.
P16
C “Scope of the Future EU…”
This is all terrible. Do not allow these limits on AI regulation. Just regulate software in general!
Google has been trying to claim all their products improved a couple of years ago “when they started using AI”. They were founded by PhD students in an AI lab; they are using a core AI technology (search). Just walk away from these arguments. Regulate software and transnational business. (By the way, web search is based not only on our personal interactions with the search engine, but also on our web pages, both of which are our data.)
“AI is composed of data and algorithms” -- wrong!!! As you said on page 2, it is (also) composed of computation. In fact, some AI doesn’t use data, but no AI runs without computation. The tech giants are computation and cybersecurity giants even more so than AI giants.
“Humans determine and program [no extra e for this kind of programming; programme is a noun] the goals which an AI system may optimise for, but humans also determine what means the system has to achieve these goals, what information it has access to, and any other resources it can control.”
Footnote 47 is way too much detail, see above.
P17
I love the second paragraph “As a matter of principle…”
P19
Delete the first sentence.
We also need regulatory bodies to detect and prosecute the misuse of personal data.
Other documents you need: software program code; architectural diagrams for the system; specifications for the system.
P20
"Separately, citizens should be clearly informed…" Do they need to be informed when they are talking to a person but being processed by AI? But by all means, it should be readily apparent if they are interacting with AI; there is no reason or justification to anthropomorphise it.
“Requirements to ensure that outcomes are reproducible” -- this is a within-AI culture war you should probably steer clear of. Just make sure individual outcomes are easily contestable, and well documented, so that varied outcomes can be retroactively debugged.
“AI systems can adequately deal” -- not the systems themselves, those using them. Don’t make AI out to be the actor.
P21
More generally, all new rules/laws should be written in the context of existing laws concerning manufactured products, bringing software in general (with or without AI) into that scope.
The first paragraph of e “human oversight” is problematic. Be careful of “moral crumple zones” (Elish, 2019).
“Human review must be possible afterwards” and available!
“Monitoring of the AI system…” -- and external agencies detecting if things are going wrong, e.g. misuse of personal data, perversions of the market or of citizens’ personal time.
P22
"It is the Commission’s view" -- this is excellent and very important! Products that obscure this should be the liability of the developers, as they are the ones who could have ensured transparency.
P23
“Verify and ensure” -- this may be too strong. Don’t allow oversights to become the regulator’s fault. Developers must assure transparency.
P24
The governance section is excellent, especially the first paragraph. But regulators should also actively regulate -- monitor for indications that things are going wrong, investigate reports of wrongdoing.
P25
Engagement is needed, but leadership and enforcement should be by the state(s).