EU Legislators reach agreement on AI Digital Omnibus Regulation

On 7 May 2026, the European Parliament and Council reached a provisional agreement on the AI Digital Omnibus Regulation (2025/0359/EU) (the “Proposal”), which amends the EU Artificial Intelligence Act (“AI Act”).  The provisional agreement now needs to be formally adopted by both Parliament and Council before it can enter into law.

The provisional agreement was reached following pressure from various stakeholders to agree the final text ahead of the high-risk AI system rules and other provisions of the AI Act coming into force on 2 August 2026.  The Proposal aims to simplify the AI Act and address some of its implementation challenges.  We have been tracking the Proposal since its publication by the EU Commission in November 2025 (see here), and recently considered the negotiating positions of the EU legislators (see here).

Although the final legislative text is not yet available, both Parliament and Council have outlined the key changes reflected in their agreement.  This article provides an overview of those key changes.

Extension of implementation timeline for high-risk AI systems

One of the most eagerly anticipated amendments to the AI Act is the extension of the implementation timeline for high-risk AI system rules.  This extension is intended to align implementation with the availability of support tools for businesses, including the technical standards needed to clarify the application of the rules.  It has now been agreed that the high-risk AI system rules will come into force as follows:

  • 2 December 2027 for AI systems classified as high-risk pursuant to Article 6(2) and Annex III of the AI Act (eg, AI systems involving biometrics, critical infrastructure, education, employment, essential services, law enforcement, justice and border management); and
  • 2 August 2028 for AI systems classified as high-risk under Article 6(1) and Annex I of the AI Act (eg, AI systems embedded into products that are covered by EU sectoral legislation on safety and market surveillance).

Prohibition on non-consensual intimate images and child sexual abuse material

The introduction of a ban on “nudifier” apps has been a priority issue for the co-legislators following the “Grok” chatbot scandal earlier this year.  The co-legislators had previously agreed to prohibit AI systems that are capable of producing explicit images or videos of identifiable individuals without their consent, but there was some disagreement as to the scope of this prohibition, including in respect of child sexual abuse material (“CSAM”).

The co-legislators have now agreed to include a new prohibited practice under Article 5 of the AI Act, which bans AI systems that are capable of generating non-consensual sexual and intimate content or CSAM.  Rapporteur Arba Kokalari confirmed at a press conference following the announcement of the provisional agreement that the European Commission will develop guidelines to further clarify the scope of this ban.  There has also, reportedly, been some discussion as to whether this prohibition will extend to bikini photos, and it may be that this issue will need to be assessed on a case-by-case basis.

Narrowing of “high-risk” classification

Pursuant to Article 6 of the AI Act, an AI system is classified as high-risk where: (i) it is intended to be used as a safety component of a product or is a product covered by sectoral legislation under Annex I and (ii) the product is required to undergo a third-party conformity assessment under those laws.  In this regard, the co-legislators have agreed to narrow the scope of what constitutes a “safety component”.  This means that products with AI functions that merely assist users or optimise performance will not automatically face high-risk obligations, if their failure or malfunction does not create health or safety risks (eg, connected home appliances).

Reconciling sectoral legislation

The deadlock last week, which led to the collapse of the trilogue negotiations, centred on the Parliament’s proposal to essentially exclude from the application of the AI Act those AI systems that are embedded in products already regulated by EU sectoral laws (eg, medical devices, toys, connected cars, industrial machinery), in order to avoid double-regulation.  This proposal was resisted by the Council, echoing concerns raised by civil society groups and other stakeholders that such an exemption would weaken fundamental rights protections, on the basis that EU sectoral laws do not adequately address or protect against AI-related risks.

In the provisional agreement reached yesterday, the co-legislators agreed to only carve out machinery regulation from direct applicability of the AI Act (ie, to move it from Annex I Section A to B), meaning that such products only need to comply with sectoral safety rules.  The Commission has also been empowered to adopt delegated acts under the machinery regulation which would add health and safety requirements in respect of AI systems that are classified as high-risk under the AI Act.  This solution is aimed at addressing any possible overlap between the high-risk requirements under the AI Act and those under sectoral legislation.

With regard to the interplay of the AI Act’s rules with other sectoral legislation, a compromise was reached which aims to reduce the risk of double-regulation, and clarify how the AI Act applies to AI systems embedded in regulated products, especially where the sectoral legislation and AI Act include overlapping obligations.  The provisional agreement also includes a new obligation on the Commission to issue guidance to economic operators of high-risk AI systems covered by sectoral legislation on how to comply with high-risk rules under the AI Act in a manner that minimises the compliance burden.

Other targeted amendments

The Parliament and the Council have also agreed to the following targeted amendments to the AI Act:

  • Processing of special categories of personal data for bias detection and correction: The co-legislators agreed to reinstate the standard of strict necessity for processing of special categories of personal data to detect and correct biases for both high-risk and non-high-risk AI systems, subject to adequate safeguards.
  • Transparency and watermarking obligations: While the Commission sought to postpone the timeline for application of watermarking obligations on providers of certain AI systems under Article 50(2) by six months, the co-legislators ultimately compromised to extend the implementation deadline for a shorter period, namely until 2 December 2026.  All other transparency obligations will continue to apply from 2 August 2026.
  • EU database registration requirements: The provisional agreement reinstates the obligation in the AI Act (previously removed in the Proposal) for providers to register certain high-risk AI systems in the EU database, even where they have concluded that such systems are not high-risk in accordance with the AI Act (ie, where they do not pose a significant risk of harm to individuals’ health, safety or fundamental rights).
  • Small mid-cap enterprises: The co-legislators aligned to extend the regulatory privileges afforded to small and medium-sized enterprises in Article 99 of the AI Act to small mid-cap enterprises in order to support their growth.
  • AI regulatory sandboxes: The provisional agreement postpones the deadline for national competent authorities to establish at least one AI regulatory sandbox until 2 August 2027.
  • AI Governance: The provisional agreement confirms the exclusive competence of the EU AI Office for the supervision of certain AI systems integrating general-purpose AI models, where the model and system are developed by the same provider, and those embedded into very large online platforms and very large online search engines (mirroring the approach in the Digital Services Act).  However, it also provides for exceptions where national authorities will remain competent.

Next steps

The provisional agreement must now be formally adopted by the Parliament and Council.  The co-legislators intend to adopt it before 2 August 2026, the current start date for the rules on high-risk AI systems under the AI Act.

Takeaways for businesses

While the latest text of the Proposal (as agreed on 7 May) has not yet been published, businesses now have a clear indication of both the scope of the amendments to the AI Act and when the high-risk AI system and watermarking rules will come into force.  Despite the delayed implementation of these rules, businesses should continue with their compliance efforts and finalise their AI governance frameworks as soon as possible.

It is also noteworthy that some further limited amendments which have not yet been announced by the co-legislators (eg, in respect of AI literacy obligations) will likely be included in the final text, so businesses should stay alert to developments in this space and be ready to adapt their compliance efforts once the final text is adopted.

Contact us

For more information or assistance with AI compliance, please contact any member of our Technology and Innovation Group or your usual Matheson contact.

© 2026 Matheson LLP | All Rights Reserved