
EU AI Digital Omnibus Proposal – Where are we now?

The European Parliament and the Council of the European Union have been busy negotiating the final text of the EU AI Digital Omnibus Regulation (the “Proposal”) ahead of the application of the EU Artificial Intelligence Act (“AI Act”) on 2 August 2026.  The Proposal seeks to simplify the AI Act and address some of its implementation challenges (previously discussed here and here).

On 13 March 2026, the Council adopted its negotiating position with regard to the Proposal, shortly followed by the Parliament agreeing its position by plenary vote on 26 March 2026.  Both legislative bodies put forward a number of amendments to the Proposal which aim to address some of its most controversial aspects, including concerns that certain amendments risk undermining the protection of individuals’ fundamental rights and introducing legal uncertainty.

The legislative process is expected to conclude in the next couple of months, with a political agreement targeted for April and formal adoption in June 2026.  This timeline is faster than the EU’s typical lawmaking process, but is necessary in order to finalise the law before the remaining provisions of the AI Act come into force on 2 August 2026.

This article provides an overview of the key amendments to the Proposal introduced by the Parliament and the Council, and the different approaches adopted by each of these EU bodies.

Prohibition on non-consensual intimate images and child sexual abuse material

Both the Council and Parliament call for a new prohibited practice under Article 5 of the AI Act, targeting AI systems capable of altering, manipulating or artificially generating realistic, sexually explicit images or videos of identifiable individuals without their consent.

The Council’s text also expressly covers prohibition of the generation of child sexual abuse material.  Both positions exclude AI systems that incorporate effective technical safeguards to prevent users from generating such content.

Extension of implementation timeline for high-risk AI systems

The Proposal sought to extend, by up to 16 months, the deadline by which the obligations on providers and deployers of high-risk AI systems would come into force, in order to align with the availability of support tools, including the necessary technical standards.

The EDPB and EDPS warned in their joint opinion that this amendment would increase the number of high-risk AI systems that could be put on the market without being subject to the AI Act, and create uncertainty as to when the rules would take effect.

Both the Parliament and the Council have maintained the proposed extension of the implementation deadline for high-risk AI systems, but on the basis of fixed deadlines.  Accordingly, the deadlines proposed are 2 December 2027 for AI systems classified as high-risk pursuant to Article 6(2) and Annex III of the AI Act (eg, AI systems involving biometrics, critical infrastructure, education, employment, essential services, law enforcement, justice and border management), and 2 August 2028 for AI systems classified as high-risk under Article 6(1) and Annex I (eg, AI systems embedded in products that are covered by EU sectoral legislation on safety and market surveillance). 

Extension of implementation timeline for transparency obligations

Providers of AI systems, including general-purpose AI systems, generating synthetic audio, image, video or text content, are subject to watermarking and labelling obligations under Article 50(2) of the AI Act, which are due to become applicable from 2 August 2026.  The Proposal introduced a six-month transitional period for the application of these obligations (ie, until 2 February 2027) to allow providers of such AI systems that have been placed on the market before 2 August 2026 to adapt their practices within a reasonable period of time without disrupting the market.  The Council agreed with this amendment.  However, the Parliament has suggested shortening the transitional period to three months (ie, until 2 November 2026).

AI literacy

The Proposal sought to remove the obligation imposed on providers and deployers of AI systems under Article 4 of the AI Act to ensure a sufficient level of AI literacy of their staff and others, such as contractors, operating AI systems on their behalf.  It instead proposed that this employer duty be replaced by an obligation for the European Commission and Member States to foster AI literacy and “encourage” providers and deployers of AI systems to take measures to ensure a sufficient level of AI literacy.

While the Council agreed with this amendment, the Parliament (following the EDPB and EDPS recommendations) has proposed reinstating the original AI literacy obligation imposed on providers and deployers, though it lowers the standard from ensuring “a sufficient level of AI literacy” to “support[ing] the improvement of AI literacy”.  In addition, the Parliament adds a requirement for the European Commission to issue implementation guidance and to support AI literacy in society (eg, through the creation of public-private partnerships).

EU database registration requirements

The Proposal sought to remove the requirement for providers of AI systems referred to in Article 6(3) (ie, AI systems used in high-risk areas listed in Annex III, but which the provider has concluded do not pose a significant risk of harm to individuals’ health, safety or fundamental rights) to register such systems in the EU database.  In such circumstances, it was considered by the European Commission that registration would cause a disproportionate compliance burden.

Both the Parliament and the Council, aligning with the EDPB and EDPS recommendations, have rejected this proposal and reinstated the registration requirement.  While registration of such AI systems in the EU database remains crucial for effective market surveillance and public accountability, the EU legislative bodies recognise that the requirement should be made more proportionate, and have therefore agreed to simplify it by streamlining the information to be submitted under Section B of Annex VIII of the AI Act.

Processing of special categories of personal data for bias detection and correction

The Proposal introduced a new Article 4a to the AI Act, which extended the legal basis for processing special categories of personal data for the purposes of bias detection and correction to both providers and deployers of all AI systems and models (ie, beyond just providers of high-risk AI systems).  It also lowered the threshold for processing such data by “providers of high-risk AI systems” from “strictly necessary” to “necessary”, and introduced a “where necessary and proportionate” test for providers and deployers of other AI systems and models and for deployers of high-risk AI systems.

Following the recommendations of the EDPB and EDPS, both the Council and Parliament have reinstated the AI Act’s original position that the processing of special categories of personal data by “providers of high-risk AI systems” must be “strictly necessary” to ensure bias detection and correction.

In addition, the Council only permits providers and deployers of other AI systems and models, and deployers of high-risk AI systems, to “exceptionally” process special categories of personal data where such processing is “strictly necessary” to ensure bias detection and correction, in view of possible biases that are likely to affect health and safety, affect fundamental rights, or lead to discrimination prohibited under EU law.  The Parliament also permits processing of such data on an exceptional basis; however, it has proposed that the “strictly necessary” threshold be lowered to where the processing is “necessary” for such providers and deployers.  Both EU legislative bodies clarify that this provision does not create any positive obligation to conduct bias detection and correction using special categories of personal data.

Small mid-cap enterprises

The Parliament notes in Recital 4 that 99.8% of all EU companies are small and medium-sized enterprises, the majority of which are microenterprises and small enterprises.  As such, both the Council and Parliament support the Proposal’s extension of regulatory privileges afforded to small and medium-sized enterprises (“SMEs”) in Article 99 of the AI Act to small mid-cap enterprises (“SMCs”).  This amendment is intended to reduce the penalties which may be imposed on SMCs and facilitate the scale-up of SMEs to SMCs.

AI regulatory sandboxes

The Council has proposed an amendment to Article 57(1) of the AI Act, postponing the deadline for national competent authorities to establish at least one AI regulatory sandbox from 2 August 2026 to 2 December 2027.  This extension has not, however, been suggested by the Parliament.

What’s next?

The trilogue negotiations between the Parliament, Council and European Commission are expected to begin in April 2026.  Once concluded, the agreed text will be published in the Official Journal of the European Union (likely before the end of July 2026).

While finalising the text of the Proposal before 2 August 2026 is an ambitious goal, the fact that the Parliament and Council appear to agree in principle on a substantial number of amendments is a positive sign that agreement will be reached by this deadline.  If the trilogue negotiations are not completed on time, the original provisions of the AI Act, including the timelines for high-risk obligations, will apply from 2 August 2026.  Businesses in scope of the AI Act should remain alert to the ongoing negotiations, and proceed with caution until the final text is agreed.

We will continue to monitor the developments in this space and will issue further updates as they become available.

Contact Us

For more information, please contact any member of our Technology and Innovation Group or your usual Matheson contact.

© 2026 Matheson LLP | All Rights Reserved