ML people: Product law works like a random forest, with the smartest possible trees – you!

Product law works like a random forest (wikipedia). All the smartest people in a sector figure out how to run their companies well and build safe, reliable products. The sector's trade magazines (we in AI can use conferences if we like) document the best and standard practice (experts can also be called into court simply to report sectoral consensus when a case involves a dispute).
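If it helps to make the analogy concrete, here's a minimal sketch in scikit-learn (the toy data, labels, and parameters below are mine, invented purely for illustration, not anything from a real sector): each tree reaches its own judgment from its own sample of the evidence, and the ensemble reports the majority view, much as a court can query sectoral consensus rather than relying on any one expert.

```python
# A minimal sketch of the analogy, using scikit-learn.
# The toy dataset and all parameter choices are invented for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy "sector": 200 products described by 8 features, labelled safe/unsafe.
X, y = make_classification(n_samples=200, n_features=8, random_state=0)

forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X, y)

# Poll the individual trees (the experts) on one product, then the ensemble.
# Each tree judges from its own bootstrap sample of the evidence; the forest
# reports the consensus, so the court only needs to ask the forest.
votes = [int(tree.predict(X[:1])[0]) for tree in forest.estimators_]
print("trees voting 'unsafe':", sum(votes), "of", len(votes))
print("sectoral consensus:", forest.predict(X[:1])[0])
```

The point of the ensemble is that no single tree has to be right; the aggregation of many independently-trained judgments is what's robust, and that's the property regulation borrows when it anchors on standard practice.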

The government doesn't have to employ AI geniuses, or keep up with the rapid pace of change, or whatever. It just has to hire sufficiently on-the-ball people who can tell whether a company's documentation of its systems proves that the company has complied with standard practice, or, if it's a leading company, best practice. Note that it's fine for such documentation to be "self-documenting" systems, though I really recommend keeping around updated documentation of your architecture, as well as your revision control logs, cybersecurity logs, etc.

If you have failed to do the right thing, you are liable for any damage you caused, or you may be proactively fined. Government is used – by us, the people who constitute our states – as an external enforcer, simplifying the problem of policing a sector against bad actors. Of course, government is also used for redistribution of wealth to important projects, be that funding for developing sectors or critical infrastructure, including security and an educated, healthy population of employees and customers. This is why we pay tax.

I'm not sure who needs to hear this, but it's inspired by Yoshua Bengio's Facebook post the other day, about how he'd love for us to prove he's wrong to be worrying about AI, so he could go back to just researching it, which is what he loves.

No. He's not wrong to worry: we do need to care about AI policy. I criticised everyone who threw their weight behind the Future of Life open letter (linkedin) because it was more disruptive of AI regulation than helpful support of our now very-advanced efforts in this area. (Much like OpenAI's recent statements (this blog), despite the fact that Altman said he was also critical of aspects of the FoL letter.) Update: [I am also critical of anyone who supported the new "sentence". You ought to know better. (linkedin from 31 May)]

Those of us who are professionals in fields that can alter, and even harm, society must, well, be professional. This is why people are talking about licensing developers just as they do lawyers, architects, and pharmacists. Part of holding such a licence is knowing how you, as a cog in a machine, fit into the larger machine that helps regulate (that is, nourish, this blog) your sector and society more broadly. We have to allocate time to making sure things don't go wrong. How much time? That depends on how likely it is for things to go wrong, and, for each of us individually, on where our power and therefore our responsibility lies.

I know, like, and immensely respect Yoshua because he has put in a huge effort on the GPAI Responsible Development, Use and Governance of AI Working Group (pdf), including co-leading it in the first year, when the name of the "Responsible AI" working group was changed to that (a change that now seems reversed). I also participated in a great workshop he organised, where he seemed quite surprised that there were entire fields of research, many going back decades or even centuries, studying the problems he'd recently been noticing through his ML lens. Please, people, respect the processes that have got us this far. Take some time and learn about governance, regulation, and the social sciences, or hire people who have.

the forest around Felsentor 'Kuhstall' ("the cow shed") in Saxon Switzerland National Park
some trees already start from a pretty high point
(my picture, 2021 I think. This was our first trip out of Berlin after Brexit, during COVID)
