The Last Push: A Few Quick Bullets for Sleep-Deprived AI Act Negotiators

  • Get it done now. Don't give the corporations that are likely to benefit from improved, clarified markets (just as they did with the GDPR) more time to lobby, or more motivation to spend money on the EU elections. If necessary, do the DMA trick and split part of it off for later, though I don't see why that should be needed. Updating and improving laws is a far more routine part of the EU legislative process than of the US one.
  • Keep the AI Act a low-cost, very general regulation. The recently shoehorned-in demands for environmental and social impact assessments have nothing to do with AI and don't belong here. These are extremely important horizontal concerns that, like liability, belong in their own regulations; they are in no way special to AI. Don't give developers or deployers a reason to avoid the relatively simple, inexpensive compliance needed to keep our societies secure while utilising automated decision-making.
  • Biometrics: Only allow surveillance of specified individuals of interest, not of everyone at specified times. In public spaces, we should only have passport-like checks for specific abducted children and suspected terrorists, not Chinese-style tracking of the whole of society.
  • Generative AI: Only require high-risk clearance for models used in high-risk systems. It's fine to let smaller organisations make proprietary models. The amount of money required to create the really powerful, large models is prohibitive, so that will only be done for sales to customers who will in any case need to source high-risk-compliant components, or by governments and militaries for their own purposes.
  • Don't trust the NATO|OECD|G7=US processes; they are only building a moat around recent, expensive models. Remember the leaked Google memo: "We have no moat, and neither does OpenAI." US companies sinking huge sums into models are willing to sacrifice some autonomy to slow their competition, and of course to disrupt regulation of the actually profitable parts of their own or their funders' businesses. We have known we needed the AI Act for years, maybe decades, yet all the supposed charters and self-regulation being proposed by the US, and by bodies of which it is a member, apply only to these new models of unproven value (yes, you can use them for some things, but do those benefits outweigh their costs? They certainly aren't paying for themselves yet).
  • Don't worry about GPAI (General Purpose AI). The only real alignment problem is the one governments are solving all the time – keeping all of society's interests aligned enough that we can all flourish securely together.
See further my open-access article from a few months ago, "The European Parliament’s AI Regulation: Should We Call It Progress?"

Time's running out! (Or we'll have to wait until October 2024 to restart the process, with a new parliament and president.)

