Notes for the UN Global Dialogue on AI Governance

I was asked to participate in a consultation on 17 April 2026 in Paris; they wound up having half of us participate online. It's not listed here, but that does link to where you can submit such advice in writing.

Transparency must flow up, into power. Surveillance down into private lives eliminates diversity and facilitates repression. Right now we have this roughly backwards. Twitter used to do a good job of supporting political communication; unfortunately, X has dismantled that and suppressed the visibility of large categories of actors.

We need diversity of providers, for both innovation and resilience. Only a very few digital / AI services naturally scale; how we deal with the power, and the equitable redistribution of the benefits, resulting from their dominance is a separate (though urgently important) matter. Examples: search, adtech, operating systems, logistics, political communication (social media). Note that historically, monopoly has been recognised and addressed even when one company held only 10% of a market. The other priority is to ensure that dominance in one sector isn't unduly and unjustly exploited to dominate other sectors.

Concentrating power simplifies regulatory capture. (Regulatory divergence can be useful.) Sovereign nations need to remain sovereign, though they may cooperate and coordinate on enforcement. The EU offers an interesting set of cases of such approaches going both well and badly. Other regions may also want to experiment with what is effectively collective bargaining. The UN can aggregate and facilitate, but it should never attempt to become a weaponisable centralised enforcer; rather, it should facilitate communication and the spread of good practice.

Human centring is not just about human oversight. AI must express the intentions of those operating it, so they can be held to account for what they do with it. This is the only real way to establish ethical "alignment," because ethics develops with societies as we grow in understanding and capability.

The EU AI Act and the Digital Services Act include good practice on human centring.

If I have time, I will mention my own research, since Vanessa Nurock brought up polarisation: it comes back to the redistributional issues I alluded to above. Polarisation is highly correlated with wealth inequality, as are many other forms of volatility.

I also strongly recommend the Finnish AI-education effort, which over a million people have used: https://www.helsinki.fi/en/news/artificial-intelligence/elements-ai-has-introduced-one-million-people-basics-artificial-intelligence cf. https://www.mooc-list.com/instructor/teemu-roos

See also my previous writings:
