
What the EU’s Artificial Intelligence Act will mean for the global AI industry

[Image: the word "AI" surrounded by stars. Source: https://futurium.ec.europa.eu/en/european-ai-alliance]

On April 21, 2021, the European Commission proposed the Artificial Intelligence Act (AIA), a regulatory and legal framework for artificial intelligence systems.[1] On December 6, 2022, the Council of the European Union adopted its general approach to the AIA, which incorporated changes to the regulation.[2][3] Germany announced its support for the AIA, but “sees some need for improvements.”[4]

In a similar vein, the U.S. Federal Trade Commission (FTC) published guidance on April 19, 2021, calling for the integration of truth, fairness, and equity into the use of AI.[5] Later that year, the FTC announced that it was considering initiating rulemaking to, in part, ensure that algorithmic decision-making does not result in unlawful discrimination.[6]

Given this growing international interest in regulating AI systems, it is important to note that the AIA was drafted to have extraterritorial effect, much like the EU’s General Data Protection Regulation (GDPR). The GDPR became a global model for data protection laws, including the California Consumer Privacy Act (CCPA), and the AIA could similarly establish a worldwide standard for AI regulation – especially if the FTC moves forward with rulemaking for algorithmic systems.

So, while the draft AIA will likely see more changes, the proposed regulation appears sufficiently settled for analysis of its requirements and potential global effects. This article provides an overview of the major legal takeaways from the AIA, including which AI systems are outright prohibited and which are less regulated.

Background

As early as 2017, the European Council called for a “sense of urgency to address emerging trends,” including “issues such as artificial intelligence . . ., while at the same time ensuring a high level of data protection, digital rights and ethical standards.”[7]

On July 16, 2019, Ursula von der Leyen, then-candidate for President of the European Commission, announced her political guidelines for the 2019-2024 Commission, in which she called for legislation establishing a coordinated EU approach to the human and ethical implications of AI.[8] Following this announcement, the Commission published a white paper on AI, “A European approach to excellence and trust.”[9] The paper set out policy options for promoting AI adoption while also addressing the risks associated with certain uses of AI.

The AIA draft proposed on April 21, 2021, delivers on now-President von der Leyen’s political commitments announced in 2019 and the white paper’s stated objectives. The result is a legal framework presenting a balanced, proportionate regulatory approach that seeks to address the risks and problems with AI, without unduly constraining or hindering AI development on the market.

To whom does the AIA apply?

Like the GDPR’s territorial scope in Article 3, the AIA’s scope in Article 2 covers providers that place AI systems on the EU market, or put them into service in the EU, irrespective of whether those providers are established within the EU.

A “provider” is any natural or legal person, public authority, agency, or other body that develops an AI system or has an AI system developed. An AI system is “put into service” when the provider supplies it for first use directly to a user, or for the provider’s own use, on the EU market. An AI system is placed “on the market” when it is distributed or used on the EU market in the course of a commercial activity, whether in return for payment or free of charge.

Taken together, this means that the AIA does not apply to private, non-professional use, but anyone supplying, using, or distributing AI systems on the EU market to users or for their own purposes may fall within the regulation’s scope.

Can the EU’s proposed AI regulation apply to AI creators and companies outside of the EU?

Yes.

The AIA applies to AI systems used by natural or legal persons, including public authorities, agencies, and other bodies, who are physically present or established within the Union. However, the regulation’s reach extends beyond the EU’s borders: it also covers such persons and bodies located “in a third country, where the output produced by the system is used in the Union,” as well as any natural or legal person that makes an AI system available on the EU market.

This extraterritorial scope, like the GDPR’s, means companies outside of the EU should take care in considering how and where their AI systems are used.

Does the EU’s proposed AI regulation provide data subjects with additional rights?

No.

The AIA does not provide additional rights to data subjects. Instead, as a piece of product regulation, the AIA takes aim at the AI systems themselves, either prohibiting a particular AI system or requiring it to conform to a list of obligations.

That said, the AIA recognizes the need for new AI technologies to be “developed and functioning according to Union values, fundamental rights, and principles.” This includes rights provided under the GDPR, such as an individual’s right to restrict processing (Article 18) and the right to erasure (Article 17). Furthermore, a controller using an AIA-covered AI system must still satisfy its GDPR notice obligations to data subjects (Articles 12–14).

Does the EU’s proposed AI regulation cover all algorithm-based systems?

No.

The AIA draft proposed on April 21, 2021, defined “AI system” so broadly that it seemed to encompass most software, which prompted EU Member States to propose a narrower definition.[10] The version adopted by the Council of the EU on December 5, 2022, recognizes the need to more narrowly define “AI system” to “provide sufficiently clear criteria for distinguishing AI from more classical software systems.”

Thus, the current AIA draft defines “AI system” to target systems developed through machine learning and logic- and knowledge-based approaches. In addition, an AI system using one of these approaches must operate with elements of autonomy and, based on machine and/or human-provided data and inputs, infer how to achieve a given set of objectives.

This definition is recognized by the Council as a “compromise” between those calling for a broader definition and those calling for a narrower one, and as such, it remains subject to change.

Does the proposed AI regulation treat all covered AI systems equally?

No.

The regulation uses a risk-based approach to separate covered AI systems into four categories:

Unacceptable Risk

The AIA contains a limited list of particularly harmful AI systems found to contravene EU values. Because the risk of harm is unacceptably high, these AI systems are prohibited under the regulation. This list includes:

  1. An AI system that subliminally manipulates a person, thereby materially distorting the person’s behavior in a manner that causes or is reasonably likely to cause physical or psychological harm.
  2. An AI system that exploits the vulnerabilities of individuals due to age, disability, or socioeconomic status, resulting in physical or psychological harm.
  3. An AI system that analyzes individuals to create a social score, which leads to detrimental or unfavorable treatment unrelated to the contexts in which the data was originally generated or collected.
  4. Some uses of remote biometric identification for law enforcement purposes in publicly accessible spaces (e.g., facial recognition technology).

A full list of prohibited AI systems can be found in Article 5 of the AIA.

High-Risk

Most of the AIA’s legal obligations and burdens fall on AI systems deemed “high-risk” under the regulation. An AI system is “high-risk” if it is itself a product, or is intended to be used as a safety component of a product, that is subject to an existing third-party conformity assessment (e.g., medical devices, machinery, engine-powered vehicles), or if it is one of the stand-alone AI systems expressly listed in the regulation’s annexes (e.g., certain systems used in employment, education, and immigration).

A high-risk AI system can be used in the EU or placed on the EU market only if it complies with the AIA’s legal obligations, which include:

  1. Establishing a risk management system (Article 9).
  2. Ensuring training, validation, and testing data meet quality criteria (Article 10).
  3. Preparing technical documentation demonstrating the system’s compliance with the applicable requirements (Article 11).
  4. Keeping records to ensure traceability of the AI system’s functioning (Article 12).
  5. Providing transparency sufficient to enable users to understand the system’s output and use (Article 13).
  6. Providing adequate human oversight of the AI system’s operations (Article 14).
  7. Ensuring the AI system achieves appropriate levels of accuracy, robustness, and cybersecurity (Article 15).

The AIA imposes further obligations on providers of high-risk AI systems, including:

  1. Maintaining a quality management system (Article 17).
  2. Ensuring the system undergoes a conformity assessment procedure (Article 19).
  3. Maintaining automatically generated logs (Article 20).
  4. Taking corrective actions if the system is found not to conform with the AIA (Article 21).
  5. Notifying national competent authorities of serious incidents or malfunctions (Article 22).

Limited Risk

Title IV of the AIA creates new transparency obligations for certain AI systems. For example, users of emotion recognition systems or biometric categorization systems must inform the natural persons exposed to them of the operation of the system. In addition, users of an AI system that generates deep fake images or content must disclose that the content has been artificially generated or manipulated. Similar to the GDPR, these disclosures must be provided in a clear and distinguishable manner, at the latest at the time of the individual’s first interaction with or exposure to the AI system.

Minimal / No Risk

If an AI system does not fall into one of the above categories, then it can be developed and used in the EU subject to existing regulation without any additional legal obligations under the AIA.

That said, the Council encourages developers of AI systems in this category to “create codes of conduct intended to foster the voluntary application of the requirements applicable to high-risk AI systems, adapted in light of the intended purpose of the systems and the lower risk involved.”[11]
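
To make the four-tier structure concrete, the sketch below (in Python) models the AIA’s risk taxonomy as a simple triage routine, checking the most restrictive category first. It is a hypothetical illustration only: the tier names mirror the categories above, but each boolean flag stands in for a legal determination that, under the AIA, requires analysis of the regulation’s articles and annexes rather than a line of code.

    from dataclasses import dataclass
    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "prohibited (Article 5)"
        HIGH = "permitted only in compliance with the high-risk obligations"
        LIMITED = "permitted subject to transparency duties (Title IV)"
        MINIMAL = "no additional obligations under the AIA"

    @dataclass
    class AISystemProfile:
        # Hypothetical flags; each stands in for a legal determination.
        prohibited_practice: bool    # e.g., subliminal manipulation, social scoring
        product_or_listed_use: bool  # safety component of a regulated product, or a listed stand-alone use
        transparency_trigger: bool   # emotion recognition, biometric categorization, deep fakes

    def classify(system: AISystemProfile) -> RiskTier:
        """Assign the AIA risk tier, checking the most restrictive category first."""
        if system.prohibited_practice:
            return RiskTier.UNACCEPTABLE
        if system.product_or_listed_use:
            return RiskTier.HIGH
        if system.transparency_trigger:
            return RiskTier.LIMITED
        return RiskTier.MINIMAL

    # A deep-fake generator that is neither prohibited nor high-risk lands in
    # the limited-risk tier and owes only transparency disclosures.
    print(classify(AISystemProfile(False, False, True)))  # RiskTier.LIMITED

The ordering of the checks reflects the structure of the regulation itself: a system that triggers a prohibition never reaches the high-risk analysis, and the minimal-risk tier is simply the residual category.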

Are the enforcement penalties harsher than the GDPR?

Yes.

Non-compliance with the AIA’s list of prohibited AI systems in Article 5 could subject the offender to an administrative fine of up to €30 million or, if the offender is a company, up to 6% of its total worldwide annual turnover for the preceding financial year, whichever is higher. By contrast, serious GDPR violations can result in a fine of up to €20 million or 4% of a company’s total worldwide annual turnover from the preceding financial year, whichever is higher.

For less serious infringements under the AIA, the offender could be subject to administrative fines of up to €20 million or, if the offender is a company, up to 4% of its total worldwide annual turnover for the preceding financial year, whichever is higher. This too is higher than the GDPR’s fines for less severe infringements.
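
As a back-of-the-envelope illustration of the “whichever is higher” mechanic, the short Python snippet below computes the maximum possible fine for a hypothetical company. The €30 million / 6% and €20 million / 4% figures come from the paragraphs above; the turnover figure is invented for the example.

    def max_fine(turnover_eur: float, fixed_cap_eur: float, pct_cap: float) -> float:
        """'Whichever is higher': the greater of the fixed cap and the turnover-based cap."""
        return max(fixed_cap_eur, pct_cap * turnover_eur)

    turnover = 2_000_000_000  # hypothetical €2 billion worldwide annual turnover

    # AIA, prohibited AI systems (Article 5): up to €30 million or 6% of turnover
    print(f"AIA Article 5 ceiling: EUR {max_fine(turnover, 30e6, 0.06):,.0f}")  # 120,000,000

    # GDPR, serious violations: up to €20 million or 4% of turnover
    print(f"GDPR serious ceiling:  EUR {max_fine(turnover, 20e6, 0.04):,.0f}")  # 80,000,000

For a company of this size, the percentage cap dominates the fixed cap under both regimes, and the AIA’s ceiling is half again as high as the GDPR’s.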

What are the next steps for the AIA?

The European Parliament is scheduled to vote on the current draft of the AIA by the end of March 2023. After that, the Parliament, the Council, and the European Commission will begin trilogue discussions of the AIA in April 2023, a timeline that could lead to adoption of the AIA by the end of 2023.


[1] https://artificialintelligenceact.eu/wp-content/uploads/2022/05/AIA-COM-Proposal-21-April-21.pdf

[2] https://www.consilium.europa.eu/en/press/press-releases/2022/12/06/artificial-intelligence-act-council-calls-for-promoting-safe-ai-that-respects-fundamental-rights/

[3] https://data.consilium.europa.eu/doc/document/ST-14954-2022-INIT/en/pdf

[4] https://data.consilium.europa.eu/doc/document/ST-14954-2022-ADD-1/en/pdf

[5] https://www.ftc.gov/business-guidance/blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai

[6] https://www.reginfo.gov/public/do/eAgendaViewRule?pubId=202210&RIN=3084-AB69

[7] https://www.consilium.europa.eu/media/21620/19-euco-final-conclusions-en.pdf

[8] https://op.europa.eu/en/publication-detail/-/publication/43a17056-ebf1-11e9-9c4e-01aa75ed71a1

[9] https://commission.europa.eu/publications/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en

[10] https://www.wired.com/story/artificial-intelligence-regulation-european-union

[11] https://artificialintelligenceact.eu/wp-content/uploads/2022/05/AIA-COM-Proposal-21-April-21.pdf