For some time it has been acknowledged that the potential benefits of implementing AI systems are accompanied by many risks. When adopting AI, it is important for organisations to understand, quantify and mitigate these risks and to ensure that they are consistent with their overall appetite for risk. However, AI is a rapidly developing area in terms of technology, operational implementation, law and regulation, and this raises challenges for organisations’ traditional risk governance frameworks. In this context, we welcome the consultation launched by the Information Commissioner’s Office (ICO) on its new AI and data protection risk mitigation and management toolkit.
Intended to “assist risk practitioners identify and mitigate the data protection risks AI systems create or exacerbate”, the toolkit sees the ICO fulfil its promise in the AI and data protection guidance “to provide further practical support to organisations auditing the compliance of their own AI systems”. It is the latest step in the ICO’s increasing focus on AI; earlier this year the ICO launched a separate toolkit concerning the use of data analytics on personal data (see our Lens post, here).
How does the toolkit work?
The toolkit acts largely as a template for organisations when assessing their internal AI risk. It:
- identifies a number of risks and explains how AI can create or exacerbate them. It then provides space for organisations to detail: (i) the current status of each risk; (ii) the actions they intend to take to mitigate that risk; and (iii) who is responsible for undertaking each action;
- offers practical (albeit high-level) suggestions as to how to mitigate each risk. These recommendations are by no means exhaustive and largely focus on the implementation of specifically designed strategies, as well as encouraging organisations to document the actions and decisions they take. The ICO makes clear that it is not mandatory to implement each proposal and reminds organisations that they are ultimately responsible for addressing each risk; and
- allows users to identify the risks most prevalent in their organisation through a risk score calculator. Whilst the toolkit automatically calculates a final risk value, based on inputted values for probability and severity, the ICO has not provided guidance explaining what these scores mean in practice. It is therefore left to organisations to differentiate between the risk scores and to determine how to adapt their approach.
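To illustrate the kind of calculation the risk score calculator performs, the sketch below combines a probability rating and a severity rating into a single score and maps it onto a band. The multiplication formula, the 1–5 scales and the banding thresholds are all illustrative assumptions for this example; the ICO has not published the toolkit’s actual methodology, which is precisely the gap noted above.

```python
# Minimal sketch of a probability-x-severity risk calculation.
# NOTE: the multiplication formula, the 1-5 scales and the banding
# thresholds are illustrative assumptions, not the ICO toolkit's
# actual methodology.

def risk_score(probability: int, severity: int) -> int:
    """Combine a 1-5 probability rating and a 1-5 severity rating."""
    for value in (probability, severity):
        if not 1 <= value <= 5:
            raise ValueError("ratings must be between 1 and 5")
    return probability * severity

def risk_band(score: int) -> str:
    """Map a score (1-25) onto bands an organisation might define itself."""
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

# Example: a fairly likely (4) risk of moderate severity (3)
score = risk_score(4, 3)
print(score, risk_band(score))  # 12 medium
```

In practice, each organisation would need to define its own thresholds and decide what response each band triggers, since the toolkit leaves that interpretation to its users.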
The toolkit is separated into 13 key areas (including governance, transparency, data minimisation and individual rights), which allows organisations to easily identify and tackle specific risk areas. Although these categories do not follow the four-part structure of the recent AI guidance, they broadly correspond to the risks identified in it. However, the toolkit by no means covers all AI considerations and, somewhat disappointingly, will not remove the need to read through the growing body of ICO guidance. It is perhaps unfair to call this a shortcoming of the toolkit; it is more a reflection of the complex nature of balancing AI development with data protection compliance (although it would be helpful if it were easier to track themes and risks across the various AI tools and guidance). Nevertheless, organisations should welcome this important step in the transformation of ICO guidance into practical tools which can be used to reflect on the risks AI poses and to plan the next steps in risk mitigation.
Organisations have until 19 April 2021 to respond to the consultation, and a revised toolkit is expected this summer. The consultation, at heart, rests on the question of “how likely it is that organisations will use the toolkit to assess risks of non-compliance?” Although it is clear that the ICO hopes all organisations will deploy the toolkit, users are offered the flexibility to adopt it in relation to each AI system that processes personal data or to assess their overall AI-related processes more generally. The extent to which use of this tool will be made compulsory therefore remains to be seen.
In addition, the ICO is requesting feedback on developing a single-page overview which can be provided to senior management to enable them to make quick and informed decisions regarding AI risk mitigation. It is hoped that this will help facilitate the engagement of senior management, who are responsible for the oversight of organisational risk, and it serves as a reminder that responsibility for AI cannot be delegated solely to the data scientists.
“The toolkit is designed to assist risk practitioners identify and mitigate the data protection risks AI systems create or exacerbate. It will also help developers think about the risks of non-compliance with data protection law.”