Europuls – Centre for European Expertise

A Rushed Code of Practice Risks Undermining Europe’s AI Ambitions

Over the past couple of months, the European Commission’s AI Office has accelerated efforts to develop a Code of Practice (CoP) for providers of general-purpose artificial intelligence (GPAI). This initiative stems from a wider regulatory context shaped by the EU’s landmark AI Act (AIA), designed to ensure transparency, safety, and accountability in AI systems. However, the second draft of the CoP has been met with widespread concern across industry and policy circles. Observers warn that the process has been rushed, that certain provisions revive previously rejected legislative ideas, and that several measures stray far beyond the AIA’s scope. These issues raise deeper questions about Europe’s capacity to remain competitive in the global AI race, particularly when other regions, most notably the United States and China, are advancing rapidly in AI innovation and investment.

The importance of a measured process

A central critique of the ongoing CoP process relates to its pace. Industry stakeholders were given just ten days to comment on the first draft, and only two weeks later, the AI Office unveiled a second draft. Given the complexity of the subject matter and the volume of input from diverse stakeholders—ranging from large technology firms to small and medium enterprises (SMEs), civil society organizations, and independent experts—such a compressed timetable has raised legitimate doubts about whether policymakers can fully integrate this feedback into a coherent final text.

In recent weeks, several members of the European Parliament and industry associations have echoed these concerns. They emphasize that a credible CoP must be built on a foundation of trust and thorough consultation. Adequate time for review and evidence-based deliberation is therefore not just a procedural nicety, but a prerequisite for crafting a set of rules that developers and businesses can realistically adhere to for years to come. Rushing this process threatens to undermine the Code’s credibility and disrupt the careful balance achieved in the AIA’s legislative negotiations.

Revisiting rejected measures and overreaching provisions

The CoP significantly expands on the initial high-level measures by introducing detailed KPIs and further prescriptive requirements. While some clarifications and incremental improvements have emerged, these new obligations are widely viewed as excessively prescriptive and insufficiently justified. Compounding the problem, the draft remains vague on how these measures will actually facilitate compliance with the AIA, rendering the document both overly restrictive and functionally ambiguous. This duality undermines its practical utility for model providers and other actors in the AI value chain.

Several issues stand out in this second draft. First, the rushed process has left work incomplete and casts doubt on whether legal clarity will hold up as more feedback comes in. Second, scope creep sees the Code delving into copyright law in ways that go beyond the AIA’s original mandate, creating confusion and overlaps. Third, the resulting complexity risks an innovation chill, deterring smaller developers from investing in new AI models. Fourth, there is a “Goldilocks” problem: making the Code too strict renders it unworkable, while making it too lenient, especially for SMEs, conflicts with the AIA’s uniform safety standards. Finally, the Code depends heavily on moving targets, such as future, undefined standards from the AI Office, leaving developers uncertain about how to meet shifting compliance requirements.

Furthermore, the draft revives obligations previously discarded during the AIA negotiations, such as mandatory third-party assessments of GPAI models, a requirement that poses feasibility and cost challenges for smaller developers. It also introduces differentiated obligations based on provider size, risking uneven legal standards so long as “small” and “large” remain ambiguously defined, and it prescribes broad new copyright mandates of a kind typically reserved for specialized IP legislation, compelling developers to expose confidential internal practices and face secondary liability. Perhaps the most contentious provision is the notion that “overfitting” could itself amount to copyright infringement, conflating a routine technical issue with legal exposure and potentially erecting an unprecedented liability framework for AI providers.

Taken together, these provisions reflect an alarming potential for the CoP to become an undemocratic instrument, one that unilaterally expands AIA’s scope and disregards the delicate compromises reached in legislative deliberations. By layering on extensive new obligations without adequately demonstrating their necessity or clarifying how they align with the AIA, the Code risks creating both a bureaucratic quagmire and a disincentive to AI innovation within the EU.

Competitiveness concerns and the global AI race

Beyond the specific controversies over the CoP’s text lies a broader question: Can Europe afford to take regulatory missteps at a time when other global powers have moved decisively to bolster their AI industries? According to the Stanford University AI Index 2024, both the United States and China outpace the EU in terms of research outputs, patents, and investment in AI research and development. If the EU’s regulatory approach imposes disproportionate burdens or introduces legal uncertainty, European innovators may relocate or scale their operations elsewhere—a scenario that would erode Europe’s aspirations to become an AI powerhouse.

A report from Digital Europe further highlights the risks, noting that venture capital funding is more likely to flow to jurisdictions offering clarity and predictability. While the EU’s emphasis on ethical AI is laudable, the bloc could undercut its own competitiveness if regulatory frameworks become overly complex or intrusive. This potential “regulatory chill” effect extends well beyond the tech sector, as businesses across manufacturing, finance, healthcare, and transportation increasingly rely on AI-driven solutions to remain competitive. Independent think tanks have also cautioned that over-regulation could shift cutting-edge research and investment outside of Europe.

The need for a thoughtful, inclusive approach

As industry leaders and policymakers weigh the benefits and drawbacks of the CoP, a consensus has emerged around a few key principles.
First, it is essential to respect the original scope of the AIA: expanding or rewriting it through a secondary code risks confusion and dilutes the formal legislative process.
Second, meaningful stakeholder engagement is critical. The CoP must be informed by real-world feedback from developers of all sizes, as well as from civil society and technical experts who understand AI’s evolving capabilities.
Third, policymakers should assess how proposed measures will affect trade secrets, intellectual property, and model security. While transparency is a valuable goal, overzealous demands for disclosure could inadvertently reveal sensitive details that undermine competition or even compromise the safety of AI models. 
Finally, the CoP must recognize that investment and innovation in AI do not happen in a geopolitical vacuum. Europe’s ability to become a global AI leader hinges on creating an environment in which companies can build and scale their technologies without being deterred by excessive legal risks and administrative burdens.

Our stance at Europuls Centre for European Expertise

At Europuls, we strive to bridge policy, industry, and civil society to promote effective EU governance. We believe that a well-structured AI framework can indeed protect fundamental rights, ensure responsible development, and encourage the fair competition needed for Europe to become a true AI powerhouse. However, the current trajectory of the CoP falls short on multiple fronts: it fails to address stakeholder feedback, conflates AI and copyright law without a solid basis, and imposes prescriptive disclosure obligations that could compromise trade secrets and safety.

Rather than rushing an incomplete Code that overreaches in certain areas and remains vague in others, the EU should press pause and re-engage stakeholders in a thorough consultation process. Doing so will help produce a refined, balanced Code—one aligned with AIA’s legislative intent and robust enough to stand the test of time. If Europe truly wants to nurture innovation and develop world-leading AI, it must avoid burdening legitimate AI developers with poorly defined obligations that could hamper growth.