
EU AI Act Finalised

Please note this article was updated on 15 July 2024 following publication of the AI Act in the Official Journal of the European Union ("OJEU").

On 21 May 2024, the EU Council approved the EU Artificial Intelligence Regulation (the "AI Act").  This marked the final step in the EU legislative process. The final text of the AI Act (EU Regulation 2024/1689) was published in the OJEU on 12 July 2024. The AI Act will enter into force twenty days after its publication (i.e. on 1 August 2024).

What is being regulated?

The AI Act defines an "AI System" as "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence real or virtual environments." 

The AI Act also introduces dedicated rules for General Purpose AI ("GPAI") models, defined as a model that is "trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of performing a wide range of distinct tasks…and that can be integrated into a variety of downstream systems or applications".

Scope of the AI Act - who is impacted?

The AI Act will apply to different players across the AI distribution chain, including the following:

  • AI providers – those who develop AI systems or have them developed for them;
  • AI deployers – those who use AI systems (other than in the course of a personal, non-professional activity);
  • Importers and distributors of AI;
  • AI product manufacturers;
  • Authorised representatives of AI providers who are not established in the EU; and
  • Affected persons located in the EU.

The AI Act has extra-territorial scope, and may apply to businesses not established in the EU. The AI Act will apply to providers located within the EU or in a third country, in circumstances where they make an AI system or GPAI model available on the EU market. In addition, where only the output generated by the AI system is used in the EU, the AI Act will apply to the provider and deployer of the AI system. 

Non-EU providers of GPAI models and high-risk AI systems are required to appoint an authorised representative in the EU to act as a contact point for EU regulators.

Risk-based approach

The EU has taken a risk-based approach to the regulation of AI. The higher the risk of harm to society, the stricter the rules. The AI Act establishes four categories of AI systems based on the probability of an occurrence of harm and the severity of that harm:

  1. Prohibited AI Systems – These are AI systems that pose an unacceptable level of risk to individuals' safety, rights or fundamental values, and their use is banned in the EU under the AI Act. Examples include social scoring, untargeted scraping of facial images to compile facial recognition databases, and real-time remote biometric identification in publicly accessible spaces (subject to certain exceptions).
  2. High-Risk AI Systems – AI systems in this category have a high potential to cause significant harm or infringement of rights, and are subject to strict regulation and oversight to mitigate those risks. They include AI systems used in critical infrastructure, education, employment, essential private and public services, law enforcement, border control management and the administration of justice (as set out in Annex III of the AI Act). They also include AI systems intended to be used as a safety component of a regulated product (see Annex I of the AI Act for the list of laws regulating these products).
  3. Limited Risk AI Systems – These AI systems present lower risks. They must still adhere to certain safeguards, but the regulatory requirements are less stringent. An example of a limited risk AI system is an AI-powered customer service chatbot that provides automated responses to customer questions.
  4. Minimal Risk AI Systems – AI systems in this category pose minimal risks to individuals' rights, safety or societal values and are therefore subject to lighter regulatory burdens. An example is a basic email filter that classifies messages as spam, which carries a low likelihood of negative impact.

GPAI Models

The AI Act provides specific rules for (i) GPAI models and (ii) GPAI models that pose "systemic risk". GPAI models that do not pose systemic risk will be subject to limited requirements, such as transparency obligations. Providers of GPAI models that pose systemic risk, however, will be subject to increased obligations, including performing model evaluations, assessing and mitigating possible systemic risks, ensuring an adequate level of cybersecurity protection, and reporting serious incidents to the AI Office and, as appropriate, national authorities.

A new governance structure

To ensure proper enforcement of the new rules, several governing bodies are being established, including:

  • An EU AI Office within the EU Commission to enforce the common rules across the EU. The EU Commission has confirmed that this AI Office will not affect the powers of the relevant national authorities and other EU bodies responsible for supervising AI systems;
  • A scientific panel of independent experts to support the enforcement activities;
  • An AI Board with Member States’ representatives to advise and assist the EU Commission and Member States on consistent and effective application of the AI Act; and
  • An advisory forum for stakeholders to provide technical expertise to the AI Board and the EU Commission.

Provider obligations

Providers of high-risk AI systems must, among other things:

  • ensure the AI systems are compliant with the AI Act;
  • have a quality management system in place;
  • draw up and retain the required technical documentation;
  • keep the logs automatically generated by the high-risk AI system;
  • carry out conformity assessments and prepare declarations of conformity for each high-risk AI system; and
  • comply with registration obligations.

Deployer obligations

Where businesses are acting as deployers of high-risk AI systems, they must, among other things:

  • take appropriate technical and organisational measures to ensure compliance with provider instructions;
  • allocate human oversight to natural persons who are competent, properly qualified and resourced;
  • ensure input data is relevant and sufficiently representative (to the extent the deployer exercises control over it);
  • monitor the operation of the high-risk AI system and report incidents to the provider and relevant national supervisory authorities;
  • keep records of logs generated by the high-risk AI system (if under the deployer's control) for at least six months;
  • cooperate with relevant national competent authorities; and
  • where required, complete a fundamental rights impact assessment before putting a high-risk AI system into use.

Transparency obligations

Providers and deployers of certain AI systems and GPAI models are also subject to transparency obligations to:

  • ensure that users are aware that they are interacting with AI;
  • inform users when emotion recognition and biometric categorisation systems are being used; and 
  • label AI-generated content as such. 

Penalties

The AI Act imposes significant fines for non-compliance with its obligations, which are split into three tiers:

  • up to €35 million or 7% of total worldwide turnover, whichever is higher, for non-compliance with the provisions on prohibited AI practices;
  • up to €15 million or 3% of total worldwide turnover, whichever is higher, for non-compliance with specified obligations of various operators of AI systems and infringements of the AI Act (including infringement of the rules on GPAI); and
  • up to €7.5 million or 1% of total worldwide turnover, whichever is higher, for the supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities.

However, for small and medium-sized enterprises ("SMEs"), including start-ups, each fine is capped at whichever of the two amounts (the fixed sum or the percentage of turnover) is lower, and the interests of SMEs and their economic viability must be taken into account when imposing fines.
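To illustrate the "whichever is higher" mechanics of these tiers, the short Python sketch below computes the maximum exposure for a given tier and turnover. It is purely illustrative: the function name and the example turnover figure are our own, and the fine actually imposed in any given case will depend on the circumstances of the infringement.

```python
def max_fine_eur(tier: int, worldwide_turnover_eur: float) -> float:
    """Illustrative maximum fine under the AI Act's three penalty tiers:
    the higher of a fixed cap or a percentage of total worldwide annual
    turnover. (For SMEs, the lower of the two amounts applies instead.)
    """
    tiers = {
        1: (35_000_000, 0.07),   # prohibited AI practices
        2: (15_000_000, 0.03),   # operator obligations, incl. GPAI rules
        3: (7_500_000, 0.01),    # incorrect, incomplete or misleading information
    }
    fixed_cap, pct = tiers[tier]
    return max(fixed_cap, pct * worldwide_turnover_eur)

# Example: a business with EUR 2bn worldwide turnover breaching a
# prohibition faces a maximum of max(35m, 7% x 2bn) = EUR 140m.
print(max_fine_eur(1, 2_000_000_000))  # 140000000.0
```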

When will the AI Act come into force?

The AI Act was published in the OJEU on 12 July 2024, triggering a staggered implementation time-frame over the next three years. The AI Act enters into force on 1 August 2024, and will be fully applicable 24 months after entry into force, namely, from 2 August 2026, with the exception of the following provisions:

  • Rules on Prohibited AI Systems – applicable on 2 February 2025 (i.e. 6 months after the AI Act enters into force).
  • AI Office Codes of Practice must be made available for GPAI models by 2 May 2025.
  • Rules for GPAI Models – applicable on 2 August 2025 (i.e. 12 months after the AI Act enters into force).
  • Penalties for breaching obligations (with the exception of fines for providers of GPAI models) – applicable on 2 August 2025.
  • The EU Commission must provide guidelines specifying the practical implementation of the AI Act, together with a comprehensive list of practical examples of use cases of AI systems that are high-risk and not high-risk by 2 February 2026.
  • The AI Act in general, including for high-risk AI systems (see Annex III for the list of high-risk AI systems) – applicable on 2 August 2026 (i.e. 24 months after the AI Act enters into force).
  • High-risk AI systems designed to be used as part of safety components in regulated products (see Annex I for the list of laws governing these products) – applicable on 2 August 2027 (i.e. 36 months after the AI Act enters into force).

Grace period and exemption for certain systems placed on the EU market before the AI Act enters into force

The AI Act includes a grace period for certain systems that have been placed on the market or put into service in the EU before the AI Act comes into force.

In particular, operators of AI systems that are components of large-scale IT systems established by the legal acts listed in Annex X, and that have been placed on the EU market before 2 August 2027, have until 31 December 2030 to comply with the AI Act. However, the prohibition on certain AI systems under Art. 5 of the AI Act still applies, and such prohibited systems must cease to be used within six months of the AI Act's entry into force (i.e. by 2 February 2025) (per Art. 111(1)).

Operators of high-risk AI systems that have been placed on the EU market before 2 August 2026, and that are not intended for use by public authorities, will not be regulated by the AI Act unless there is a significant change in the system's design after that date (e.g. a change in the AI system's intended purpose); the prohibitions under Art. 5 of the AI Act again remain applicable. However, if the high-risk AI system offered in the EU is intended to be used by public authorities, the providers and deployers will need to comply with the rules by 2 August 2030, regardless of whether there has been a significant design change (per Art. 111(2)).

In addition, there is a 24-month grace period for providers of GPAI models that have been placed on the EU market before 2 August 2025; such providers must comply by 2 August 2027 (per Art. 111(3)).
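Read together, these transitional provisions amount to a small decision tree for legacy systems. The sketch below is a simplified, illustrative reading of the Art. 111(2) rule for high-risk AI systems only; the function and parameter names are our own, it omits the Annex X and GPAI rules described above, and it is not a substitute for legal analysis.

```python
from datetime import date

def high_risk_compliance_position(placed_on_market: date,
                                  public_authority_use: bool,
                                  significant_design_change: bool) -> str:
    """Simplified, illustrative reading of the Art. 111(2) transitional
    rule for high-risk AI systems. Not legal advice; the Art. 5
    prohibitions apply from 2 February 2025 regardless of this outcome.
    """
    general_application = date(2026, 8, 2)  # high-risk rules apply from this date

    if placed_on_market >= general_application:
        return "no grace period: comply from 2 August 2026"
    if public_authority_use:
        return "comply by 2 August 2030, with or without design changes"
    if significant_design_change:
        return "in scope: the AI Act applies following the significant design change"
    return "outside the AI Act while the design remains substantially unchanged"

# Example: a legacy system placed on the market in 2025, not used by
# public authorities, with no subsequent significant design change.
print(high_risk_compliance_position(date(2025, 1, 1), False, False))
```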

How to prepare?

With the AI Act coming into force on 1 August 2024, and the transitional periods fast approaching, it is crucial for businesses that develop and use AI systems to start taking active steps to prepare for the new legislative regime and its onerous obligations. Companies should undertake a complete review of their practices to identify any existing or proposed AI elements and ensure that the procedures and measures implemented align with the requirements of the AI Act.

Contact Us

Matheson's Technology and Innovation Group is available to guide you through the complexities of understanding your organisation's obligations under the AI Act. For more information, or if you would like assistance with putting in place an AI strategy, please contact any member of our Technology and Innovation Group or your usual Matheson contact.