Full response to the April 2026 UN Consultation

In your opinion, what outcomes would make the first Global Dialogue on AI Governance a success? (Max. 300 words)

I am already impressed by the commonality and level of understanding shown in the UNESCO Recommendation on the Ethics of Artificial Intelligence. Anything that clearly continues in that vein would be very welcome. Significant markers of success: first, prioritising the pragmatic necessities of human accountability for and control of AI. Second, clearly stating that AI is an artefact, subject to design, and in most cases a product, for which due diligence and legal liability must be upheld. Third, no nonsense about AI itself being an existential threat, but rather a level-headed acknowledgement of the necessity of sufficient investment in regulatory capacity. This must include (somewhat tangentially) adequately balanced power and wealth, such that the union of nations can maintain checks and balances and ensure that extreme abuses such as wars of aggression are not feasible, or at least not sustainable.

Please briefly explain your selection. (Max. 300 words)

(My selections were:

  • Social, economic, ethical, cultural, linguistic and technical implications of AI
  • Interoperability of governance approaches
  • Protection and promotion of human rights
  • Transparency, accountability, and human oversight)

Cultural and linguistic implications of AI are presently fairly well understood and defended, though this work is critical and should not be undermined. But I selected this option because the social, economic, and technical implications need constant vigilance. Because we continuously innovate new ways to deploy our new powers, maintaining defences for the wellbeing of all peoples requires serious examination of these topics. Breaking this bullet into those two areas – the arts and the industry – I believe the ethics will necessarily follow.

Governance approaches should not be identical or overly centralised – this would facilitate regulatory capture. Rather, we should think about interoperability: allowing jurisdictions to specialise and innovate in which harms they police and enforce against, and which benefits they pursue and amplify, while also ensuring that regions do not become so isolated that international industry is overly, disproportionately, or unnecessarily hampered.

My greatest concern about AI is the false narrative that it can replace humans, and the evident devaluation of human lives we have seen in some regimes, which have chosen to reduce investment in the health, education, and wellbeing of some of those within their borders, or worse, even to exclude or annihilate entire populations.

Accountability must be maintained for us to continue improving the human condition. Transparency (of technology, and up into power, not down into the surveillance of ordinary individual lives) is an important mechanism when used in moderation, with the specific goals of, first, accountability and, second, safe deployment. Please note: human centering, not just human oversight, should be our goal. AI must express the intentions of its operators, so that the operators can be held to account for what they do with it. This is the only real way to establish ethical "alignment": ethics develops with societies as we grow in understanding and capabilities.

In your opinion, are there any cross-cutting or emerging issues not captured by the listed themes above? If so, please explain. (Max. 300 words)

As I mentioned, concentrating power simplifies regulatory capture. Regulatory divergence can be useful. Sovereign nations need to remain sovereign, though they may cooperate and coordinate on enforcement. Similarly, we need diversity of providers, for both innovation and resilience. Only a very few digital / AI services actually 'naturally' scale. We should be vigilant that monopolies are not reducing this diversity. A separate but related and urgently important matter is how we deal with the power, and the equitable redistribution of wealth, resulting from those transnational digital services that do so scale. Examples: search, adtech, computer operating systems, logistics, political communication (social media). Note that historically, monopoly has been recognised and addressed even when one company had only 10% of a market; right now one company has about 90% of search. In my opinion, one of the great contributions the UN could make is facilitating the negotiation of treaties to address decentralised, transnational regulation of these entities, including equitable redistribution of the revenue deriving from being a transnational utility.

How are the governance gaps and related developments/advances in the thematic areas you selected above affecting your country, region, or sector? 

Please highlight the most significant challenges and opportunities. (Max. 300 words)

In my opinion, the EU is one of the best-regulated areas for AI and digital services more broadly, with the possible exception of China. Note, incidentally, that China quite successfully protects the data of its citizens and controls the power and capacities of its AI firms, contrary to some narratives we hear in transnational digital governance. But returning to Germany and the EU: our primary problem has been lack of enforcement, along with all sorts of threats and campaigns against our sovereign right and duty to protect our citizens and their democratic processes. Despite providing a stable, reliable market and at least 20% of the revenue for US tech giants, we are constantly viewed as some form of enemy. Even the EU is not powerful or rich enough to stand up to two super-powered economies on its own, which is why we need to work with other middle powers to create an economic and regulatory context that abides within the rule of law.

With respect to academia, the problems are very much along the lines of disruption of our historic models of teaching, and, separately, assault by those who do not appreciate the dissemination and archiving of scientific and historic information. In this, our sector is very much like journalism. We need help, or at least peace, while we reorganise our businesses and learn. Instead, opportunists opposed to the dissemination of narratives that cannot be controlled through political processes – narratives grounded instead in expert processes of vetting – are seeking to dismantle or control our sector.

What role can the AI Dialogue play in advancing international cooperation on AI governance? (Max. 300 words)

Please see my previous answers which addressed this topic.

What are some of the existing initiatives, partnerships, or mechanisms that the AI Dialogue should build upon or connect with, and what added value could the AI Dialogue bring? (Max. 300 words)

The UNESCO Recommendation on the Ethics of Artificial Intelligence is excellent and highly legitimate; it should be more widely disseminated, and national law should be encouraged to comply with it.

The European Union (EU) offers an interesting set of cases of multilateral approaches to market harmonisation and digital governance and regulation going both well and badly. Other regions are already experimenting with similar and different approaches: I have heard that the African Union finds regulatory harmonisation difficult, but that the threat of collective boycotting, should any one member's laws be disrespected, still has some utility in dealing with foreign technology superpowers.

As I mentioned earlier, we (especially, but not only, mid-sized and small states) need to be able to address matters of power and equitable redistribution stemming from transnational digital services that scale so strongly that, without deliberate effort, adequate diversity of providers is unlikely to be maintained. Present candidates include: web search, adtech, computer operating systems, shipping logistics, cloud services, political communication (social media). Note that historically, monopoly has been recognised and addressed even when one company had only 10% of a market; right now one company has about 90% of search. In my opinion, one of the great contributions the UN could make is facilitating the negotiation of treaties to address decentralised, transnational regulation of these entities, including equitable redistribution of the revenue deriving from being a transnational utility.

How can different stakeholders contribute to the AI Dialogue? Please share recommendations for the format and structure of the AI Dialogue. (Max. 300 words)

Facilitating travel is important – if there is any one thing we should be doing with the finite resources of our planet, diplomacy is that thing. But facilitating hybrid participation is also important, because the time of key experts is also limited.

Which voices, communities, or perspectives are currently underrepresented in global discussions on AI governance? How could they be included? (Max. 300 words)

I am sure others will answer this question better than I can. Perhaps we need inclusion of at least some representative individual states of hyper-states like India, China, and the US.

What innovative engagement formats could most effectively foster meaningful and dynamic engagement during the AI Dialogue? (Max. 300 words)

Again, this is not really my area of expertise; I just try to help whenever called on, as with this survey (thank you).

Please share examples of policies, practices, platforms, or approaches that promote effective AI governance or offer concrete solutions to addressing its challenges. (Max. 300 words)

I provided a substantial reply to the UN's previous solicitation on this topic. I have an updated list of such writings here https://joanna-bryson.blogspot.com/2026/04/notes-for-un-global-dialogue-on-ai.html
