OpenAI reveals the anti-regulatory intent of its regulatory disinformation

Sam Altman is speaking in Munich today. Yesterday, in the process of switching his misdirection about AI from impossible scenarios (AGI) to one we're already solving (superintelligence), he fully revealed his anti-regulatory, anti-government agenda.

While asking governments to help him and his backers maintain their cornering of the NLP generative AI market [1], he (and two coauthors) also said:
"We think it’s important to allow companies and open-source projects to develop models below a significant capability threshold, without the kind of regulation we describe here (including burdensome mechanisms like licenses or audits)."[2]
In other words, the authors are saying:
  • AI and software are not products (follows from 2: no need for audits, no proof of due diligence, no professional standards for employees),
  • do not inspect whether our companies follow basic business standards of following and documenting good practice (2: no audits),
  • do not ensure we are hiring qualified staff, educated in how to do the best work and in what can go wrong at a company like ours (2: no licenses),
  • do give anyone like me a large burden, which I will also accept; given I'm already in the lead, this should help me more than hurt me (1).

I really hope colleagues at Technical University of Munich are smart enough to throw the (several!) books at him.

I've been asked to attend, but I'm already committed to a keynote for the European Association of Work and Organizational Psychology Congress about the impacts of AI on the Future of Work. But I still want to help out. So please do consider reading:
[Image: a statue of a lion lying in wait in front of a table for a talk]
The lion in front of my most recent talk: Authorship, Agency, and Moral Obligation (at CADA)

Addenda: 
  1. Please note that as of October 2023 this particular regulatory-interference strategy is ongoing. The much-vaunted G7 Hiroshima Declaration on AI Governance only tasks its signatories with handling generative AI and keeping the data flowing (articles 39 & 39). It also champions the OECD and GPAI while failing to mention the extremely successful efforts of the UN's Internet Governance Forum (IGF). See my more recent blogpost On Global AI Governance, which also links to my recent academic article on the topic.
  2. Joshua Wöhle, in comments on LinkedIn, spent some time challenging me over whether it is really anti-government to ask for government regulation. Wanting the government to help you corner a market (1) but not to offer product safety (2) is actually only contrary to liberal democracy, not to government in general. Re 2: OpenAI's post claims to benefit SMEs (and therefore innovation) by freeing them from ALL product regulation, but that isn't really beneficial: much regulation is helpful to a sector, and governments regulate to strengthen their economies. Re 1: OpenAI telling governments to police hard those getting too close to its capabilities is an attempt to dig the "moat" around very big tech that Google (in a leak) said they lack. Favouring very large companies as 'national champions' tends to undermine both democracy and innovation. At the turn of the twentieth century, the US invented the theory of these harms and how to address them (antitrust), but it stopped applying that theory to digital tech at the turn of the twenty-first.
  3. Also on LinkedIn, I commented on Gary Marcus' suggestion that generative AI may not be as transformative as everyone appears to be expecting (except maybe of copyright law enforcement). I think generative AI is both being widely used and being massively overvalued. Respected colleagues are saying it speeds up what they do, and there's the study by MIT's Noy & Zhang published in Science showing it helps weaker writers produce copy more like stronger writers'. But the estimates of economic impact seem way out of line, and I agree with Gary Marcus' AGI assessment (though the hype could also partly be the outcome of the full PR assault on regulation, especially the EU's). Every time since at least 2007 that AI has improved productivity, the markets seem to just somehow normalise it, and then talking heads all sit around asking where the GDP trail of AI is. I'm interested in why that happens, but I don't see this time being different. (I do see myself writing a lot more about this phenomenon one day soon, though.)
