More Skeptical About Standards (and value-aligned design and AI non-exceptionalism) – AI Regulation Notes 2

Terrific Hertie School and Sciences Po postdoc Rachel Griffin just wrote me about this terrific Lawfare article by Hadrien Pouget, The EU's AI Act Is Barreling Toward AI Standards That Do Not Exist.

I have a 2017 article about AI regulation and standards with the also terrific Alan Winfield, Standardizing Ethical Design for Artificial Intelligence and Autonomous Systems. That article has come to be heavily cited. Perhaps that's why I was asked to attend two different meetings about AI & Standards last year (2022), and to write up my opinions about the problem for each of them.

The second (more recent) version of the piece I wrote for each of them is presented here as another guide to AI regulation. Actually, my concern that standards are too often used for regulatory capture and interference takes up just 1.5 pages of that 4-page document (minus header & citations). The other main parts are:

  • .5 page of definitions (as usual); 
  • 1 page on how what a lot of people mean by "value-aligned design" is neither human-centred nor plausible (though VAD could be if you changed it to mean "aligned with the values of your human owner/operator" – that would be properly human-centred); and then 
  • a further 1-page précis of my arguments that justice relies on sufficient equity for enforcement. (It sounds banally obvious when I phrase it like that, but when you realise that means artefacts are not peers and that allowing too much inequality destabilises the planet, it gets more interesting.)

So anyway, click this link to the AI regulation recommendations if you do have enough time or motivation to read past this tl;dr. But I should also say that the guy in charge of making the EU standards work, Sebastian Hallensleben, is also terrific, so probably something good will come of it.

Don't Panic about that asteroid! Nice one, brandeins.de
(and speaking of things barrelling towards you)


