On 1st May, the ICO published its strategic approach to regulating AI. This was in response to a government request asking key regulators to set out their approach to AI regulation by 30th April.
Background to government request
The UK’s approach to AI regulation, set out in its 2023 white paper (see blog), relies on sector regulators implementing a principles-based framework, supported by centralised funding, guidance and resources. This includes regulators having regard to five AI principles, covering:
- Safety, security, robustness
- Appropriate transparency and explainability
- Fairness
- Accountability and governance
- Contestability and redress.
As part of this sector-specific approach, ministers wrote to key sectoral and cross-economy regulators in February, asking them to publish (by 30th April) how they are taking forward the white paper proposals and the steps they are taking to develop their strategic approaches to AI. These regulators, including the ICO (see below), the FCA (see blog) and the CMA (see blog), have now all published their plans, and the government has published a centralised resource listing them all, available here.
ICO approach
The ICO’s response, entitled “Regulating AI: The ICO’s strategic approach” sets out:
- Opportunities and risks around AI: Recognising the undeniable potential of AI to transform lives, the ICO acknowledges that there are legitimate concerns around issues such as fairness and bias; transparency and explainability; safety and security; and accountability and redress. The development and deployment of AI often involves the processing of personal data (bringing it into the ICO’s remit) and the way AI works (e.g. its autonomy, adaptivity and scaling) means it can exacerbate known risks as well as create new ones. The ICO lists a number of areas of particular focus, including foundation models, high-risk AI applications (in areas such as education, healthcare, recruitment and financial services), facial recognition and biometrics, and children and AI.
- The role of data protection law: The ICO explains how the existing statutory principles under the GDPR map onto, and overlap with, the white paper’s proposed AI principles mentioned above. It notes that it is therefore already experienced in implementing the aims and objectives of those principles, and that the government’s voluntary guidance clarifies that it does not seek to duplicate, replace or contradict regulators’ existing statutory definitions or principles.
- The ICO’s work on AI: The ICO has long maintained that AI is not new. It has been producing AI-specific guidance for over a decade and has already taken a number of AI-related enforcement actions (for example, against Clearview and Serco Leisure). In its strategy paper, the ICO sets out the range of guidance and products/services available, including its AI risk toolkit, advisory service and regulatory sandbox. It also references guidance on specific applications of AI, for example in relation to biometric recognition and age assurance technologies.
- Upcoming developments: The ICO lists some key developments that organisations can expect in the coming months. These include publication of the next stages in its consultation series on generative AI (see our blog) and a new consultation on biometric classification. It will also, in Spring 2025, consult on updates to its Guidance on AI and Data Protection and its Automated Decision Making and Profiling guidance, to reflect changes expected to be enacted in the Data Protection and Digital Information Bill.
- The ICO’s work with other regulators: Finally, the ICO works with a number of other regulators on AI, including as part of the Digital Regulation Cooperation Forum (whose other members are Ofcom, the FCA and the CMA). It has a number of DRCF-related activities planned, including hosting joint workshops, focused on transparency and accountability, to explore how the AI principles interact across the different regulatory regimes. The ICO also works with other key stakeholders, including international partners and standards bodies, and has, for example, input into standards such as ISO/IEC 42001:2023 (on AI management systems) and ISO/IEC 23894:2023 (on AI risk management).