Feedback on version 3, GPAI Code of Practice for the EU's AI Act

For those who don't know (background)

The EU's AI Act includes provisions for getting expert help in describing how to comply with it. These include the publication of a Code of Practice (CoP), which is to pass through four versions, the fourth being "final", though EU legal documents such as these are regularly revisited. 

There has been some concern that this process was being used as a "Trojan Horse" – that after years of taking expert advice, the legitimate governance procedures of the EU were being bypassed by a bunch of regulatory amateurs, many presently or previously on the payroll of companies with known policies to block the efficacy of EU digital regulation. There was a rumour that showing compliance with the CoP was a "get out of jail free" card for the AI Act, so the actual Act could be disregarded if the fantastic requirements of the AGI / Safety narrative believers were somehow met.

In fact, evidence of effort to comply with a CoP is just one piece of evidence for due diligence that a court will look at when deciding liabilities. 

The current version isn't that bad (mostly)

The CoP has been split into four parts, and the first part specifically spells this out: damaging violations of the spirit of the AIA are still culpable even if the letter of the CoP has been met.

The greatest (and, as far as I understand, originally intended) value of the CoP is drawing attention to the parts of the AIA that various kinds of companies really need to read, and giving illustrative examples of how those requirements might be complied with.

Suggestions for Part I / Opening Statement

p.5 ¶4 "Therefore the Code shall strive to facilitate rapid updating…" I would prefer to see in this section reference to the need for stability of regulatory frameworks. This is, after all, a big part of why we invented governments in the first place – businesses, families, and individuals need to be able to make plans on future investments. Presently the text only trades flexibility off against commitment.

The list of resources that may change regularly should include trade journals and even respected, recognised news websites or blogs. Courts use these for establishing both standard and best practices.

¶5 "should take into account the size of the GPAI model provider."  Disagree. What matters is the scale of the harms, not the provider. No one should take on a project they are not adequately resourced to handle the liabilities from.

¶6 "AI models are global issues" True but largely irrelevant in this context. EU legislation is to protect EU residents, citizens, and markets. No one should be "protecting" other nations without their consent. 

Bottom of the page (no number): "[current draft assumes] only a small number of both GPAI models with systemic risks and providers." This is a bad assumption. I believed, long before DeepSeek came on the scene, that the future of AI will be low energy and decentralised. The pressure for ever more data for more intelligence of the general kind makes no sense in terms of basic statistics. Even intelligence of the surveillance / security type has some limit beyond which it just becomes redundant and unnecessarily consumes resources. The distorted scale of a small number of companies may well be based on a bubble that will soon collapse. As stated above, what matters are the risks, which all must protect against. As it stands, this assumption again sounds like someone trying to build a moat (or corner a market).

p. 7 I love the preamble!

It makes my own preamble above evident. 
¶c: "This" starting the last sentence is ambiguous.

Unfortunately, the next page ignores some of the statements in the preamble.

p.8 ¶I. "The Code should also enable" no! assist! "...rely on the Code to demonstrate" no! indicate!

Commitments (still Part I)

p. 9 ¶II  I believe the entire sections on GPAI with systemic risk (GPAISR) are either useless (because they apply to no one) or are likely to apply to a lot more providers than the authors state. Given these two options and the amount of work here, I strongly advise the authors to choose the latter. Focus on proportionality of risk, not scale of company. Everyone has a slight chance of creating systemic risk (look what Facebook did in 2014!) Therefore every organisation producing AI products should monitor and be monitored for such risks, and for whether those risks are scaling.

In general these sections spend too much time telling companies to navel gaze, and not enough saying they should read the newspapers, and hire people who really understand systems engineering, safety critical systems, governance, and so forth. They shouldn't put their best ML talent on an island to reinvent wheels together.

p. 10 ¶ II.2  Nice to mention pre-release, but if we're going to worry about anything I'd worry about obsolescence. What if the company has long been bankrupt when people figure out how to misuse its code, or its physical products still in use and on the Internet? Consider also purchase by hostile actors, cf. Twitter.

¶II.4 is very good, esp. the "varying degrees of depth..." part.

¶ II.7 isn't this too much detail for the abstract?

p. 11 ¶ II.7  I strongly object to ever referring to AI systems themselves as responsible actors or bodies. Indeed, any such reference is opposed to the UNESCO Recommendation on the Ethics of AI, to which the entire EU is signatory.

See ¶35 & 36 of the UNESCO document, "Human oversight and determination", also ¶68. The AIA fully complies with this; so should the CoP.

So delete "or AI systems" after "humans". Also, why only "non-state"? Why not "state and non-state"?

Transparency Section (Part II)

p. 1 Again, you are not in compliance with the Preamble in Part I. In the very first paragraph, change "to comply with" to "as evidence of compliance".

p. 2 "website, or another means" – insert "readily, instantly, and freely available" between "another" and "means"?

Perhaps the AI Office should host an archive for all transparency documentation, particularly with an eye to companies without websites, including those that have gone bankrupt or otherwise disappeared?

Copyright Section (Part IV)

This is excellent practice. It just highlights what the practitioners need to know. This is how part III should be.

I'm not an expert in copyright, but I didn't see anything to worry about given it was all just clear reference to and explanation of the AIA. Thanks!

[update: I've been told on LinkedIn that this copyright part is a disaster. I don't know why though. But I can readily believe that person knows more than me on this topic.]

Safety and Security Section (Part III)

p. 1 Again, you are not in compliance with the Preamble in Part I. In the second paragraph, change "can comply with" to "demonstrate effort in compliance". Also, delete "leading." Even if it is only a "small number" of companies, if they are problems, they are not necessarily leaders.

Why is this 61 pages? It does not have 61 pages' worth of content. The 4 pages of the copyright section have more content.

Delete "a blueprint of one way to comply": (1) this is nothing like a blueprint; (2) replace it with "an indication of appropriate effort and means of compliance".

p. 2 ¶b "not AI systems" should be "not all AI systems". You said in the preamble that GPAISR is a strict subset of GPAI, which is presumably a strict subset of AI!

¶c nice discussion of "proportionate to the systemic risk" citing AIA. But I strongly disagree that "the lack of corresponding expertise of the provider" should be listed with the other concerns, since that one is fully addressable. 

I very much like ¶d & e. But ¶f should replace "Objectives of the Code" with the AIA. And further, it is not acceptable to change definitions here from what they are in the AIA. Do not confuse people. Make up new terms and/or use subscripts if you really want to deploy different definitions.

What is going on in ¶h?? The Precautionary Principle is NOT "laid down" in Article 191 TFEU. It's just mentioned in passing in an article about environmental standards.

Artefacts are not like the ecosystem. They are entirely subject to design, and entirely the responsibility of those who build, purchase, and deploy them. That's what the AIA is about, and that's what this section must aid in maintaining.

p.5 ¶1 "the greatest level of detail possible given the current state of the science" – what happened to proportionality? This strikes me as a Herculean – or Sisyphean – task.

p.6 ¶ex 1 "A GPAISR can be used to completely automate..."
ex 2 " A GPAISR is used to sabotage..."

"Mitigation to stay below the risk tier..." Folks, this is exactly what the AIA is for. You do this by being regulated. You do this by ensuring that you don't build or release products that cannot be responsibly used. You do that by responsible systems engineering, devops, and engagement with third-party auditors.

¶II.1.3(1) Delete "if such capabilities are not yet possessed by" and replace with "or state if" – why would you exclude preexisting capabilities???

p. 7 "Fulfilling Measure II.1.4 does not require input (or authorisation) from external actors" – false. Look at Boeing. Everyone needs external audits and external regulation.

p. 9 ¶II.2.2.(3) Again, this is very unlikely to be detected by the agencies that are building the system. You need external audits.

p. 12 ¶II.4.1  The focus should be on transparency: to the organisation itself (including but not limited to its C-suite), to downstream purchasers and deployers of that system, and to upstream regulators including external commercial auditors. External inspection is critical to eliminate corruption, to defend against corrosion of values, and to maintain contact with the state of the art.

p. 13 ¶ II.4.2 (1)(d) I'm not sure the open source part is necessary. I am sure that this should be extended to all other EU regulated software. 

This is not just elite companies playing games by themselves; that attitude is what brought you what's happening in the US right now. We are creating a system in which you will be able to trust anyone else able to operate in the EU.

p. 14 ¶ II.4.3 I can't believe signatories need to be told this. This is all way too long. You can assume basic competence by referring people to the state of the art. 

p. 16 the first "Potential material for future Recital" is very much like what I've been calling for with all this discussion of external agencies in the last few paragraphs! Why not incorporate this?

p. 28 ¶II.7.7 Disclosures to trusted auditors are necessary, and are not the same as publishing.
