The European Data Protection Board (“EDPB”) and the European Data Protection Supervisor (“EDPS”) recently adopted Joint Opinion 1/2026 (the “Opinion”) on the draft AI Digital Omnibus Regulation (2025/0359/EU) (the “Proposal”) (previously discussed here).
The Opinion addresses those aspects of the Proposal of particular importance for the protection of individuals’ rights and freedoms with regard to the processing of personal data. While the EDPB and EDPS generally support the Proposal, the Opinion highlights how some of the proposed simplification measures risk undermining the protection of fundamental rights and introducing legal uncertainty.
This article provides an overview of the key concerns raised, and recommendations provided, by the EDPB and EDPS in their Opinion.
Processing of special categories of personal data for bias detection and correction
The Proposal introduces a new Article 4a, which extends the legal basis for processing special categories of personal data for the purposes of ensuring bias detection and correction. The extension applies not only to high-risk AI systems, but to all AI systems and models, and now encompasses not merely providers but also deployers.
The EDPB and EDPS, in principle, support the proposed extension of this legal basis, allowing the exceptional processing of special categories of personal data for the purpose of bias detection and correction. At the same time, to avoid potential abuse, they opine that the cases where providers and deployers would be able to rely on this legal basis in the context of non-high risk AI systems and models should be clearly circumscribed.
In addition, the EDPB and EDPS recommended the following:
- reinstating the requirement for such processing to be carried out only where it is “strictly necessary” (rather than the more relaxed standard of where “necessary and proportionate” in the Proposal);
- clearly circumscribing the application of this legal ground to cases where the risk of adverse effects caused by such bias is sufficiently serious;
- providing further clarity in the recitals as to the justification to extend the scope of this legal basis with specific examples of non-high risk systems warranting reliance on this legal basis; and
- revising the Proposal to enhance legal certainty in regard to the application of Articles 6 and 9 GDPR.
Registration and documentation
The Proposal seeks to delete the requirement for providers to register high-risk AI systems in circumstances where the providers have concluded that their Annex III AI systems are not high-risk under Article 6(3) of the AI Act.
While the EDPB and EDPS support the general aim of the Proposal to ease the administrative burden on providers to register AI systems in the EU database for high-risk systems, they assert that the proposed deletion of this requirement would significantly decrease the accountability of providers of AI systems and would provide an undesirable incentive for providers to unduly invoke this exemption.
Extension of some SME privileges to SMCs
The Proposal seeks to extend existing regulatory privileges afforded to small and medium-sized enterprises (“SMEs”) in Article 99 of the AI Act to small mid-cap enterprises (“SMCs”). This would mean that smaller and more proportionate penalties may be imposed on SMCs. The EDPB and EDPS have raised concerns with this proposal on the grounds that headcount and company size do not appropriately correlate to the potential harm posed by high-risk AI systems made available by such entities.
AI regulatory sandboxes at EU level
The EDPB and EDPS welcome the proposed introduction of AI regulatory sandboxes at EU level (in addition to those at national level), but highlight a number of gaps that need to be addressed to ensure better legal certainty. These gaps include:
- unlike national sandboxes under Article 57(1) of the AI Act, there is no provision for competent Data Protection Authorities (“DPAs”) to be involved in the operation and supervision of EU-level sandboxes;
- the competence of the DPAs in these EU-level sandboxes and the interplay with the GDPR cooperation mechanism should be clarified;
- the EDPB should (i) have an advisory role with respect to such EU-level sandboxes to ensure consistency on data protection aspects, especially in circumstances where multiple DPAs would be concerned by the sandbox; and (ii) be granted the status of observer at the European Artificial Intelligence Board (“AI Board”) to ensure oversight of data protection-related matters; and
- a clear distinction should be made as between the AI sandboxes for EU institutions and bodies which may be established by the EDPS under Article 57(3) of the AI Act, and the EU-level AI sandbox established by the AI Office under Article 57(3a) of the AI Act.
Supervision and enforcement by the EU AI Office
Pursuant to Article 75 of the AI Act, the AI Office is competent to monitor and supervise compliance of AI systems based on a general purpose AI model, when the model and the system are developed by the same provider. The Proposal seeks to extend this exclusive competence to AI systems that constitute or are integrated into a designated very large online platform (“VLOP”) or a designated very large online search engine (“VLOSE”), within the meaning of the Digital Services Act (Regulation (EU) 2022/2065).
While the EDPB and EDPS acknowledge the benefits of centralising competence over these systems with the AI Office, they flag certain concerns. In particular, they question whether the requirement for “active cooperation” between the AI Office and national competent authorities protects the capacity of the latter to act independently. For example, it is unclear whether this requirement for “active cooperation” would guarantee the ability of national competent authorities to initiate actions if the AI Office has not already acted or does not want to, given the exclusivity of the authority granted to the AI Office. In addition, the AI Office should also cooperate with national DPAs where the VLOP / VLOSE AI systems may pose risks to the fundamental rights to privacy and data protection.
Furthermore, the EDPB and EDPS recommend that the following clarifications be made within the body of the AI Act:
- clear delimitation regarding the types of general-purpose AI models that will trigger exclusive AI Office competence to ensure effective supervision of AI systems; and
- that, in accordance with Recital 14 of the Proposal, the EDPS rather than the AI Office would exercise exclusive competence in respect of AI systems placed on the market or used by EU institutions / bodies and covered by Article 74(9) of the AI Act.
Cooperation between authorities protecting fundamental rights and MSAs
The Proposal seeks to streamline cooperation between authorities that protect fundamental rights (“FRABs”) and market surveillance authorities (“MSAs”). The EDPB and EDPS support this proposal but recommend that the authorities’ respective roles are clarified. In particular, the Opinion recommends:
- clarifying the competence and the role of the MSAs as the point of contact for the execution and transmission of requests to providers and deployers;
- ensuring that the Proposal does not affect the independence and existing powers of DPAs;
- requiring that the MSAs provide information to FRABs without undue delay; and
- clarifying the new obligation of cooperation and mutual assistance between MSAs and FRABs in cases of cross-border mutual assistance for market surveillance and product compliance.
AI Literacy
The Proposal seeks to remove the duty on providers and deployers of AI systems to ensure a sufficient level of AI literacy of their staff and others, such as contractors, operating AI systems on their behalf. It is proposed that this employer duty should be replaced by an obligation for the European Commission and Member States to instead foster AI literacy and “encourage” providers and deployers of AI systems to take measures to ensure a sufficient level of AI literacy.
The EDPB and EDPS consider that providers and deployers of AI systems should not be released from their obligation to ensure that their staff have a sufficient level of AI literacy, as this would undermine the objective of ensuring appropriate knowledge and skills across the AI lifecycle to protect fundamental rights and support AI compliance. To that end, the Opinion recommends that the employer duty in respect of responsibility for AI literacy is retained, and that any new obligation for the European Commission or Member States to foster AI literacy complements, rather than replaces, the responsibilities of organisations actually developing and deploying AI systems.
Implementation timeline for high-risk and transparency rules
The Proposal seeks to extend, by a maximum of 16 months, the timeline for the obligations of providers and deployers of high-risk AI systems to come into force, to align with the availability of support tools (including harmonised standards, common specifications and EU Commission guidelines) and the designation of national competent authorities. A similar change is proposed in respect of the transparency obligations in Article 50(2) of the AI Act for those who have already placed their systems on the market before 2 August 2026. It is proposed that they should benefit from a six-month grace period (ie, until 2 February 2027) to integrate technically robust watermarking tools into their practices, without disrupting the market.
The EDPB and EDPS acknowledge the implementation challenges faced by stakeholders as a result of delays in designating national competent authorities and lack of harmonised standards and guidelines for AI compliance. However, the Opinion notes that any extended timeline would increase the number of high-risk AI systems which could be put on the market without being subject to the AI Act, due to the ‘legacy systems’ exception in Article 111(2) of the AI Act. That provision excludes high-risk AI systems already placed on the EU market from the scope of the AI Act unless they are subject to significant changes in their design. According to the Proposal, the cut-off date for such systems would be changed from 2 August 2026 to 2 December 2027. In addition, the Opinion notes that the extended timeline risks undermining the protection of fundamental rights in a fast-evolving AI landscape and leading to legal uncertainty. In this regard, the EDPB and EDPS encourage co-legislators to maintain the current timeline for certain obligations, such as the transparency requirements, and to minimise any delays to the extent possible.
Commentary
It remains to be seen whether the EU co-legislators will take into consideration the warnings of the EDPB and EDPS that certain measures put forward by the Proposal, while easing administrative burdens, may undermine the fundamental rights of AI users and lead to legal uncertainty. The Proposal will be subject to intensive negotiations over the coming months.
Given the current uncertainty in regard to which proposals will be agreed at EU level, and whether they will be agreed by 2 August 2026, when most of the AI Act’s obligations are due to take full effect, it would be prudent for organisations to continue with their compliance efforts having regard to the original text of the AI Act. This will help organisations to ensure that, in the event certain amendments are not adopted, they are in compliance from day one.
We will continue to monitor the developments in this space and will issue further updates as they become available. In the meantime, it is worth noting that the Irish Government has published the General Scheme of the Regulation of Artificial Intelligence Bill. This legislation is necessary to provide for national supervision and enforcement of the obligations set out in the AI Act (discussed further here).
Contact Us
For more information, please contact any member of our Technology and Innovation Group or your usual Matheson contact.
