Editor's note: This article was updated to reflect recent developments; the original version of this article was published on June 21, 2023.

The new EU Artificial Intelligence Act adopted by the European Parliament is a landmark development in the fast-moving landscape of artificial intelligence. In-house counsel need to help their organizations comply.

In a context where governments compete to foster AI innovation and regulate its use, the new EU regulation sets the first major regulatory framework, with prohibitions, requirements, and substantial penalties.

  • On March 13, 2024, the European Parliament voted to enact the EU AI Act. Once the Act is formally endorsed by the EU Council of Ministers, it will enter into force 20 days after official publication, likely by June 2024.
  • Separately, the European Commission has established the EU AI Office, intended to help implement the AI Act and collaborate with member states to ensure that AI is safe and trustworthy.
  • The Act will become applicable 24 months after entering into force, except for the following: 
    • Bans on prohibited practices will begin after six months;
    • Codes of practice will be developed by the EU AI Office within nine months;
    • Regulations on general-purpose AI models will come into effect after 12 months; and
    • Obligations for high-risk AI systems will go into effect after 36 months.

In-house counsel must help their organization understand these developments and navigate the rapidly evolving AI landscape.

Learn below about key features of the AI Act and how in-house counsel can help their organization prepare.

 

Definition of AI

Under the definition approved by the EU Parliament, an “AI system” means a “machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments[.]”

 


Different rules for different risk levels

The AI Act introduces rules for AI systems according to their risk level: unacceptable risk (prohibited by the AI Act), high risk (tightly regulated), limited risk (fewer requirements), and minimal or no risk. Below are key features:

1. “Unacceptable risk” AI Systems

Systems considered a threat to people are deemed an “unacceptable risk” and will be banned. This would include systems such as:

  • Systems that deploy subliminal, manipulative or deceptive techniques that distort people’s behavior in a way that is likely to cause significant harm to themselves or others;
  • Systems that exploit people’s vulnerabilities (due to age, disability, or a specific social or economic situation) with the effect or objective of distorting their behavior in a way that is likely to cause significant harm to themselves or others;
  • Systems for social scoring;
  • Systems that predict the risk of a person committing a criminal or administrative offense;
  • Systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage;
  • Systems to infer persons’ emotions in workplaces or educational institutions, unless intended for medical or safety reasons;
  • Biometric categorization systems that categorize natural persons to deduce or infer sensitive or protected attributes or characteristics, but with certain exceptions for law enforcement; and
  • Real-time remote biometric identification systems in publicly accessible spaces for law enforcement, with exceptions for locating human-trafficking victims, preventing imminent threats to life or safety, or identifying criminal suspects.

2. "High risk” AI systems

This category includes AI systems intended for use as a safety component of a product (or that are themselves such a product) covered by the EU product safety legislation listed in Annex I of the Regulation, where the AI system or the product must undergo a third-party conformity assessment before being placed on the market or put into service.

The category also includes AI systems in eight areas listed in Annex III of the Regulation, if the system poses a significant risk of harm to the health and safety or the fundamental rights of natural persons and, where the AI system is used as a safety component of critical infrastructure, to the environment. Such systems will need to be registered in a European database before being placed on the market or put into service (art. 51). The eight areas listed in Annex III are:

  • biometric identification and categorization of natural persons;
  • management and operation of critical infrastructure;
  • education and vocational training;
  • employment, workers management, and access to self-employment; 
  • access to and enjoyment of essential private services and essential public services and benefits;
  • law enforcement;
  • migration, asylum, and border control management; and
  • administration of justice and democratic processes.

AI systems referred to in Annex III will always be considered high risk if they profile natural persons. Otherwise, they will not be considered high risk if they do not pose a significant risk of harm to health, safety, or fundamental rights, in particular where the systems (as modeled in the sketch after this list):

  • perform a narrow procedural task;
  • are intended to improve the result of a previously completed human activity;
  • detect decision-making patterns or deviations from those patterns, rather than replacing or influencing human assessments; or
  • perform a preparatory task to an assessment relevant to the use cases listed in Annex III.
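
To make this classification logic concrete, below is a minimal sketch in Python of how an organization might model the profiling rule and the derogations described above. The class, field names, and function are illustrative assumptions, not terminology from the Act.

```python
from dataclasses import dataclass

@dataclass
class AnnexIIISystem:
    """Illustrative description of an AI system used in an Annex III area."""
    profiles_natural_persons: bool
    performs_narrow_procedural_task: bool = False
    improves_prior_human_activity: bool = False
    only_detects_decision_patterns: bool = False
    performs_preparatory_task: bool = False

def is_high_risk(system: AnnexIIISystem) -> bool:
    """Approximate the classification logic described above.

    Profiling of natural persons always makes an Annex III system
    high risk; otherwise the system escapes the high-risk category
    when one of the narrow derogations applies.
    """
    if system.profiles_natural_persons:
        return True
    derogations = (
        system.performs_narrow_procedural_task,
        system.improves_prior_human_activity,
        system.only_detects_decision_patterns,
        system.performs_preparatory_task,
    )
    return not any(derogations)

# A recruitment screener that profiles candidates is high risk
# regardless of any derogation.
print(is_high_risk(AnnexIIISystem(profiles_natural_persons=True)))  # True
```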

High-risk systems will be subject to various requirements, including:

  • establishing, implementing, maintaining, and documenting a risk management system (art. 9);
  • meeting certain data governance criteria (art. 10);
  • preparing technical documentation (art. 11);
  • ensuring record-keeping/event logging capabilities (art. 12);
  • ensuring transparency and the provision of information to allow users to deploy the systems appropriately (art. 13);
  • providing for effective human oversight (art. 14); and
  • complying with accuracy, robustness, and cybersecurity requirements (art. 15).

3. “Limited risk” AI systems

These systems will be subject to transparency requirements, such as informing users that they are interacting with an AI system.

Notably, deployers of AI systems that generate deepfakes, including images, audio, or video, will have to disclose that the content has been artificially generated or manipulated.

Also, the use of AI systems to generate published text on matters of public interest must be disclosed unless the content undergoes human review or editorial control.

 

General-purpose AI models

General-purpose AI (GPAI) models are subject to special requirements. A GPAI model is defined as one that “displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are released on the market[.]”

In practice, this includes broad, public-facing AI systems, such as OpenAI’s ChatGPT and Google’s Gemini, that have many possible uses rather than being designed for specific tasks.

GPAI models are subject to specific transparency requirements set out in Article 52. GPAI models posing systemic risks face additional requirements, including performing model evaluations, assessing and mitigating systemic risks, ensuring adequate cybersecurity protection, and reporting serious incidents to the EU AI Office and national authorities.

 


Who will be subject to the regulation?

Whether or not your organization is based in the EU, it may fall within the scope of the AI Act. Under Article 2, the regulation will apply to:

  • Providers who place on the market or put into service AI systems in the EU, regardless of the provider’s location;
  • Deployers of AI systems located or established within the EU; 
  • Providers and deployers located or established outside the EU, where either Member State law applies by virtue of public international law or where the output produced by the system is intended to be used in the EU;
  • Importers and distributors of AI systems within the EU;
  • Product manufacturers who place on the market or put into service in the EU an AI system together with their product and under their own name or trademark;
  • Authorized representatives of providers not established within the EU; and
  • Persons located in the EU affected by covered AI systems.

 

Substantial fines and rights to lodge complaints

The EU AI Act will impose substantial administrative penalties for non-compliance with the Act’s requirements. Under the final text adopted by the EU Parliament, the fine ceilings are as follows (an illustrative calculation appears after the list):

  • For non-compliance with the prohibitions on “unacceptable risk” AI systems, the offender will be subject to administrative fines of up to 35 million euros or, if the offender is a company, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
  • For non-compliance with the Act’s obligations related to operators or notified bodies, or to transparency and the provision of information to users, the offender will be subject to administrative fines of up to 15 million euros or, if the offender is a company, up to 3% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
  • For non-compliance with requirements on general-purpose AI models, a provider will be subject to administrative fines of up to 15 million euros or up to 3% of its total worldwide annual turnover for the preceding financial year, whichever is higher. These fines may be imposed starting one year after the regulation enters into force, giving providers time to adapt.
  • The supply of incorrect, incomplete, or misleading information to notified bodies and national competent authorities in reply to a request will be subject to administrative fines of up to 7.5 million euros or, if the offender is a company, up to 1% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
  • Small- and medium-sized enterprises and start-ups will be subject to the above fines up to the percentage or amount listed, whichever is lower.
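
Because each tier caps the fine at the higher of a fixed amount and a percentage of worldwide annual turnover (and, for SMEs and start-ups, the lower of the two), the maximum exposure is simple arithmetic. The hypothetical helper below, a sketch rather than legal guidance, illustrates the calculation using the thresholds listed above.

```python
def max_fine(fixed_cap_eur: float, turnover_pct: float,
             worldwide_turnover_eur: float, is_sme: bool = False) -> float:
    """Estimate the administrative fine ceiling for one violation tier.

    For most companies the ceiling is the HIGHER of the fixed amount
    and the percentage of total worldwide annual turnover; for SMEs
    and start-ups it is the LOWER of the two.
    """
    pct_cap = turnover_pct * worldwide_turnover_eur
    return min(fixed_cap_eur, pct_cap) if is_sme else max(fixed_cap_eur, pct_cap)

# Prohibited-practice tier for a company with EUR 2 billion turnover:
# the 7% figure (EUR 140 million) exceeds the EUR 35 million floor.
print(max_fine(35_000_000, 0.07, 2_000_000_000))               # 140000000.0
print(max_fine(35_000_000, 0.07, 2_000_000_000, is_sme=True))  # 35000000.0
```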

The Act also provides a right for natural persons or groups of natural persons to lodge complaints with a national supervisory authority; however, the EU Commission has the exclusive power to supervise and enforce the provisions of the Act pertaining to general-purpose AI models.


 

How can in-house counsel help their business prepare?

  • Learn more about the upcoming requirements of the EU AI Act.
  • Review how your organization uses or plans to use AI systems.
  • Map out your organization’s AI uses in light of the categories defined by the EU AI Act, for example in a simple inventory like the sketch after this list.
  • If you haven’t started yet, develop internal guidelines and corporate policies on the use of AI, for example regarding the incorporation of AI systems in the company’s products, the use of AI in internal processes such as recruitment and Human Resources decisions, or employees’ and vendors’ potential use of AI to create content.
  • Consider the ethical implications of how your organization uses AI tools, and what safeguards may be needed to mitigate legal and reputational risks.
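
As a starting point for the mapping exercise above, even a basic inventory that tags each AI use with its assessed risk tier can surface which systems need attention first. The structure below is a hypothetical sketch in Python, not a format prescribed by the Act; the system names and uses are invented examples.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited by the AI Act"
    HIGH = "tightly regulated (arts. 9-15, registration)"
    LIMITED = "transparency requirements"
    MINIMAL = "no specific obligations"

# Hypothetical inventory: system name -> (business use, assessed tier).
ai_inventory = {
    "resume-screener": ("HR candidate ranking", RiskTier.HIGH),
    "support-chatbot": ("customer service", RiskTier.LIMITED),
    "spam-filter": ("email triage", RiskTier.MINIMAL),
}

# Flag the entries that need compliance work before the Act applies.
for name, (use, tier) in ai_inventory.items():
    if tier in (RiskTier.UNACCEPTABLE, RiskTier.HIGH):
        print(f"Review {name} ({use}): {tier.value}")
```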

 
