The EU’s groundbreaking AI proposals (see our Lens Blog), published on 21 April 2021, have been broadly welcomed by the EU’s top data protection bodies, the EDPB and EDPS. However, they do have concerns, including around biometric surveillance, as set out in their recent joint statement.

The interplay between the EU’s current data protection laws and its proposed AI laws cannot be ignored. As the joint statement says, “the [AI] proposal has prominently important data protection implications”. It is therefore unsurprising that the EDPB/EDPS welcome the proposals’ aim of addressing the use of AI in the EU. That said, they raise a number of concerns. For example:

- While they welcome the fact that the scope of the AI proposal extends to the provision and use of AI systems by EU institutions and agencies, the exclusion of international law enforcement cooperation from this scope raises serious concerns. It creates, in their view, a significant risk of circumvention (e.g., third countries or international organisations operating high-risk applications relied on by public authorities in the EU).

- The proposal adopts a risk-based approach, which they also welcome. However, the approach requires clarification, and the concept of “risk to fundamental rights” used in the proposal should be aligned with the GDPR.

- The proposal should expressly clarify that EU data protection law applies to any processing of personal data within scope of the new AI rules.

- The EDPB/EDPS would like more AI systems banned under the plans. For example, while the proposal only prohibits social scoring “over a certain period of time” or by public authorities (or on their behalf), this prohibition should be extended to any type of social scoring, given that private companies (social media companies, cloud service providers, etc.) also process vast amounts of personal data and conduct social scoring. They also call for a ban on any use of AI for automated recognition of human features in publicly accessible spaces. As well as facial recognition, this would cover features such as gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioural signals. The current proposal contains only a partial ban in practice.

- Data protection authorities should be designated as national supervisory authorities to ensure a more harmonised regulatory approach. They should also be involved in the preparation and establishment of harmonised standards and common specifications.

It is unsurprising that the EU’s data watchdogs have called for closer alignment with the data protection rules, and a tightening of the rules around biometric surveillance and other privacy-intrusive uses of AI. The latter, for example, is also an area of concern for the ICO in the UK, which recently published an opinion and blog on the use of facial recognition in public places. However, whether these concerns are addressed in future drafts of the AI rules will depend on the success of lobbying groups on both sides.