
Auditing your AI - what does the ICO expect you to do?

The ICO has been focused for some time on developing an AI auditing framework and, on 19 February 2020, it published its draft ‘Guidance on the AI auditing framework’ for consultation. The guidance proposes several tools that organisations can use to assess the potential risks associated with providing or procuring AI systems, as well as measures to mitigate those risks. It also provides the ICO, as a key regulator in this area, with auditing tools and procedures that its investigation teams can use when assessing the compliance of organisations using AI.

The guidance focuses on issues raised by the GDPR principles, rather than general ethical or design principles, and is set out in four parts:

1. Accountability and governance - organisations are responsible for the AI systems they use, even where they are delivered by third parties. They must assess and mitigate risks, including understanding the trade-offs being made between privacy compliance and other competing interests. They must also document the decisions they have made, for example using a data protection impact assessment.

2. Lawfulness, fairness and transparency - this includes looking at the lawful bases for processing personal data using AI systems, assessing and improving AI system performance and mitigating potential bias.

3. Security and data minimisation - AI systems can exacerbate known security risks and make them more difficult to manage, as well as present challenges for the data minimisation principle.

4. Enabling individual rights - the way AI systems are developed and deployed can make it harder to understand when and how individual rights apply. This includes looking at rights relating to automated decision-making.

Interestingly, this layout moves away from the ‘eight AI specific risk areas’ that the ICO blogged about in its informal consultation process prior to launching the draft guidance, although in practice most of the same main points are covered (for more information on the eight risk areas, see our article AI and Data Protection: Balancing Tensions).

What next?

The consultation is open until 1 April 2020, and a final version of the guidance is expected later this year.

However, organisations should not wait until the final version is released before taking action. Sufficient guidance is available now (both in the draft AI auditing framework and in other ICO initiatives such as the Project ExplAIn guidance) to understand the ICO's areas of particular concern and to assess compliance against them.

"This is the first piece of guidance published by the ICO that has a broad focus on the management of several different risks arising from AI systems as well as governance and accountability measures." (ICO)

