The EU’s Artificial Intelligence Act (AI Act) was hailed as a historic milestone: the world’s first comprehensive framework for regulating artificial intelligence. It was meant to balance innovation with fundamental rights and to establish Europe as a global standard-setter. But a critical part of that framework is now being quietly redefined.
The Code of Practice: Clarification or Rewriting?
The controversy centres on the forthcoming Code of Practice for General-Purpose AI (GPAI) models like GPT, Gemini, and Midjourney. Introduced as a soft-law instrument to guide compliance, the Code is evolving into something far more powerful. Its latest draft proposes rules that go beyond the AI Act itself — creating new obligations for model providers without a legal mandate.
This includes requirements such as external risk assessments before deployment, expanded copyright duties, and liability for downstream use. None of these obligations was debated during the legislative process, and none appears in the final text of the AI Act. Yet the Code, if adopted by the Commission via an implementing act, will grant providers that follow it a presumption of conformity. In practice, it becomes binding by default.
Implementing Acts Cannot Make Law
This is more than a technicality. Under Article 291 TFEU, implementing acts cannot amend or supplement legislation, not even its non-essential parts. Supplementing non-essential elements would require a delegated act under Article 290 TFEU, and the essential choices rest with the legislators themselves. Yet the Commission’s approach risks turning a non-binding tool into de facto legislation, sidestepping democratic scrutiny. The Court of Justice of the European Union has made clear that implementing acts may only execute existing rules, not create new ones.
A Flawed Process, A Risky Precedent
The way this Code is being developed compounds the problem. The drafting involves more than 1,000 stakeholders yet follows none of the formal procedures of Regulation 1025/2012 on European standardisation, leaving it without transparency, structure, or legal safeguards. It bypasses Europe’s official standardisation bodies even as ISO and CEN/CENELEC efforts on GPAI remain at an early stage.
In the absence of parliamentary oversight or member state involvement, the AI Office — newly created and largely untested — is steering a regulatory process with systemic implications for Europe’s AI future. This is not what co-regulation under the New Legislative Framework was meant to look like.
A Trojan Horse for Soft-Law Rulemaking
What began as guidance is morphing into governance. The Code of Practice risks becoming a Trojan horse — quietly reshaping the AI Act without reopening political debate. The irony is profound: a regulation praised for its transparency and human-centric approach is now being repurposed via an opaque, rushed, and overly precautionary process.
This kind of policymaking, driven by fear of technological disruption, moral panic, and worst-case scenarios, ultimately hurts innovation. It rewards lobbying by the most risk-averse voices and penalises the very startups and researchers Europe claims it wants to empower.
The Commission must not rubber-stamp the current draft of the Code of Practice. It must ensure that any guidance tool remains just that: a way to interpret the law, not rewrite it. Otherwise, we risk undermining not just the AI Act, but the legitimacy of the EU’s legislative process.
If the EU wants to lead on AI governance, it must do so with democratic integrity, legal clarity, and respect for its own rules. Otherwise, the AI Act will not be remembered as a regulatory triumph — but as the moment when process gave way to panic.