I actually also work on biological evolution and study gene regulatory networks, so maybe that's why "regulation" is not a bad word for me. In talks, I define regulation as the means by which a complex entity causes a recognisable version of itself to persist into the future. In biology, we happily talk about up-regulation and down-regulation. In tech, I map the "up-regulation" onto the fact that AI nearly always enjoys enormous government investment (every government wants a healthy digital economy); this should be balanced by constraints, the "down-regulation", to ensure that the nations and societies in which the corporations operate are also able to flourish.
About six months after the GDPR went into effect, the same GAFAM types who had been warning me they were all about to pull out of the EU (yeah, right) were saying "Wow! It's now so much easier to do business in the EU! It's like a single API to 28 countries!" As if this were some weird accidental side effect of the GDPR. Strengthening the EU digital economies was the point of the GDPR! But before the EU did that, its members also made sure to protect their citizens' and residents' human rights, as is every nation's obligation as a signatory to the UN's Universal Declaration of Human Rights (yes, we all signed), incidentally.
So of course absolutely every complex entity – cells, people, corporations, governments, religions, whatever – self-regulates, or it wouldn't persist! But the point is that good governments and good governance make that self-regulation easier. You don't have to police your own sector to avoid races to the bottom and the like; you can just help your government do that. Of course, corporations and their sectors must in this sense also help regulate governments – as is becoming conspicuous in this period of democratic backsliding (think of Twitter and Facebook suspending extremist accounts, even when one belonged to the head of a G7 country).
So that's my model of regulation. I don't think we ever want to avoid it. I do think we perpetually need to rebalance how much is done by which entities, particularly in response to innovations. We should of course optimise it to consume as few resources as possible whilst still being effective. And note that the metric for regulation being effective – long-term persistence – absolutely incorporates the UN's Sustainable Development Goals, which of course must and do include human security and healthy economies.
Someone's got to plant a lot of trees if we're going to hit the UN SDGs / save the planet.
This is from a conversation with other members of the GPAI governance committee, of which I'm currently one of two co-chairs. Germany nominated me to GPAI just a few months after I moved here.
Comments
GDPR notices are a huge eyesore and have made the world significantly worse by causing billions of minor irritations every day across the globe.
The law requires full explainability for AI systems in many contexts, which greatly limits AI development. We don't require full explainability for human decision-making (i.e. at the neuronal level), so why should it be required for AI?