Info@MetaverseLaw.com

Overview: The EU General-Purpose AI Code of Practice

Why Do We Need a Code of Practice?

On August 2, 2025, the general-purpose AI (GPAI) provisions of the EU AI Act went into effect. GPAI models (including the models that power most generative AI, like ChatGPT) now face certain obligations in the EU, including requirements around transparency, copyright, and systemic risk.

However, the EU AI Act is a framework: it defines obligations but leaves technical details to harmonized standards and codes of practice. While this approach sets expectations and allows the EU AI Act to remain technology-neutral, it also leaves open questions about how businesses can comply in practice. To bridge this gap, a multi-stakeholder group drafted the General-Purpose AI Code of Practice (GPAI Code).

On August 1, 2025, the European Commission issued a formal opinion confirming the GPAI Code is an “adequate tool” to help demonstrate compliance with the EU AI Act.

Why is the Code significant?

This opinion signals that organizations that adopt the GPAI Code may be able to demonstrate good-faith efforts to comply with the relevant provisions of the EU AI Act. According to the Commission’s website: “The Code of Practice helps industry comply with the AI Act legal obligations…of general-purpose AI models.”

In its opinion, the Commission notes that the Code provides actionable commitments and reporting mechanisms, especially for high-risk models. Additionally, the Commission emphasized that the Code provides a practical framework to demonstrate regulatory compliance.

Following this endorsement, providers of GPAI models can voluntarily sign the Code, which “will reduce their administrative burden and give them more legal certainty than if they proved compliance through other methods.” Still, signatories should be aware that the Code explicitly states that adherence does not necessarily constitute evidence of compliance with the EU AI Act.

What is a General-Purpose AI Model?

A GPAI model is a component of an AI system with a wide range of possible uses, whether those uses are intended or not. It is important to note that these models are not systems in themselves but are part of AI systems. Additional elements, like user interfaces, are necessary to make these models fully operational systems.

Under Article 3(63) of the EU AI Act, a GPAI model is one trained on a “large amount of data using self-supervision at scale.” Such models can be applied across sectors or tasks, usually without substantial modification, meaning GPAI models “can be integrated into a variety of downstream systems or applications.” Recital 98 of the EU AI Act states that the generality of the model can also be determined by the number of parameters, and “models with at least a billion parameters…should be considered to display significant generality and to competently perform a wide range of distinctive tasks.”

GPAI models are sometimes called “foundation” or “frontier” models, and while they may include large language models (LLMs), they can also process audio, physical, textual or visual data, powering systems like DALL-E, GPT-4, Gemini, LaMDA, SEER, ALIGN, and more.

How are general-purpose AI models regulated?

Under the EU AI Act, the chapter on GPAI both addresses generative AI and outlines some of the most stringent requirements under the Act. Notably, all GPAI requirements under the EU AI Act are directed at providers rather than deployers.

Providers of GPAI models have a range of obligations under the EU AI Act, both directly to supervising authorities and onward to AI providers who integrate the GPAI models into their systems.

Obligations of Providers of GPAI Models

If a provider places a GPAI model on the EU market, or integrates such a model into its own AI system on the EU market, it must:

  • Prepare and maintain technical documentation for regulators. This should include at least a general description of the GPAI model, including the tasks it is designed to perform and the types of systems into which it can be integrated; acceptable use policies; and information on the training process.
  • Prepare and maintain documentation for downstream providers. This should include information that allows the downstream AI system providers to comply with their own obligations under Article 53(1)(b). Similar to the technical documentation, this includes but is not limited to a general description of the model, and a description of its elements and development process.
  • Prepare an EU copyright policy. This policy should establish a means to comply with EU regulations on copyright and related rights.
  • Prepare and publish a summary of training content. Using the template provided by the AI Office, providers of GPAI must share a comprehensive summary of AI training information. This should allow stakeholders to exercise their rights by disclosing what information was used to train the GPAI model.
  • Cooperate with relevant authorities and appoint an authorized representative. Providers must cooperate with relevant authorities and, if they are established outside the EU, appoint an authorized representative located in the EU.

It is notable that under Recital 85, the EU AI Act states that GPAI systems “may be used as high-risk systems by themselves or be components of other high-risk systems.” Therefore, the providers of GPAI systems must work closely with providers of high-risk AI systems to ensure compliance with any requirements of high-risk systems under the Act.

Obligations of Providers of GPAI Models with Systemic Risk

What does “systemic risk” mean?

GPAI models with systemic risk include models that pose reasonably foreseeable negative effects relating to major accidents, disruption of critical sectors, serious consequences to public health and safety, public and economic security, democratic processes, the dissemination of false or discriminatory content, or other similar effects.

Under Article 51(1) of the EU AI Act, a GPAI model will be classified as having systemic risk if:

  • It has high impact capabilities, or
  • It is designated by the Commission to have high impact capabilities based on the criteria in Annex XIII (i.e., the number of parameters in the model, the size of the data set, the amount of computation used to train the model, etc.).

What are the additional obligations for these models?

In addition to the requirements for all GPAI models, those with systemic risk have additional obligations related to:

  • Model evaluation, assessment, and mitigation of systemic risks;
  • Incident management and reporting; and
  • Cybersecurity protections and technical documentation.

Because obligations differ between GPAI models generally and GPAI models with systemic risk, providers should pay close attention to this classification procedure: it is essential to understand where each GPAI model falls and what requirements apply to it under the EU AI Act.

According to Article 52(6), a list of GPAI models with systemic risk will be published and updated by the European Commission, but no such list had been published at the time of writing.

What is the General-Purpose AI Code of Practice?

While the Code is not legally binding, providers of GPAI models can use it to demonstrate compliance with their obligations under the EU AI Act.

The Code consists of three chapters on 1) transparency, 2) copyright, and 3) safety and security. The first two chapters apply to all providers of general-purpose AI models, providing a way to demonstrate compliance with obligations under Article 53 of the AI Act. The final chapter applies only to general-purpose AI models with systemic risk under Article 55 of the AI Act.

Chapter 1: Transparency

Among other things, this chapter requires signatories to create documentation for all GPAI models distributed within the EU and to maintain it for up to ten years. There are exceptions for models that are free, open-source, and do not pose systemic risk.

When completing this documentation, signatories must use a standard Model Documentation Form, which includes information on licensing, technical specifications, training data, and other parameters of the GPAI model. The Code encourages publication of this information to promote transparency.

Chapter 2: Copyright

This chapter requires signatories to create and maintain a copyright policy that complies with the EU’s legal standards. This includes, but is not limited to, ensuring that data collected by web crawling is lawfully accessible and that certain websites flagged for copyright infringement are avoided.

Importantly, signatories must designate a contact for copyright holders to submit complaints, along with a process for handling those complaints.

Chapter 3: Safety & Security (GPAI with systemic risk only)

One of the main elements of this chapter is the requirement for signatories to develop a state-of-the-art Safety and Security Framework before releasing any GPAI model categorized as posing a systemic risk. Additionally, systemic risks should be identified and inventoried, and before progressing with development or deployment, signatories should weigh the relative risks and determine whether they are acceptable, among other requirements.

What’s next?

The Code will be monitored and reviewed at regular intervals by the AI Office, and may be updated in response to emerging risks, technological developments, or incidents involving general-purpose AI models.