The opening statement from the ICO’s new AI guidance states that “the innovation, opportunities and potential value to society of AI will not need emphasising to anyone reading this guidance” – I presume the same can be said of this blog.

However, it has long been recognised that it can be difficult to balance the tensions between some of the key characteristics of AI and data protection (particularly GDPR) compliance.

Rather encouragingly Elizabeth Denham’s foreword to the guidance confirms that “the underlying data protection questions for even the most complex AI project are much the same as with any new project. Is data being used fairly, lawfully and transparently? Do people understand how their data is being used and is it being kept secure?”

That said, there is a recognition that AI presents particular challenges when answering these questions, and that some aspects of the law (for example data minimisation and transparency) require “greater thought”. (Note: The ICO’s ‘thoughts’ about the latter can be found in its recent Explainability guidance).

The guidance contains recommendations on good practice for organisational and technical measures to mitigate AI risks. It does not provide ethical or design principles – rather it corresponds to the data protection principles:

  • Part 1 focusses on the AI-specific implications of accountability, including data protection impact assessment and controller/processor responsibilities;
  • Part 2 covers lawfulness, fairness and transparency in AI systems, which includes looking at how to mitigate potential discrimination to ensure fair processing;
  • Part 3 covers security and data minimisation – examining the new risks and challenges raised by AI in these areas; and
  • Part 4 covers compliance with individual rights, including rights relating to solely automated decisions and how to ensure meaningful human input or (for solely automated decisions) review.

It forms part of the ICO’s wider AI Auditing framework (which also includes auditing tools and procedures for the ICO to use) and its headline takeaway is to consider data protection at an early stage. Mitigation of risk must come at the design stage as retro-fitting compliance rarely leads to 'comfortable compliance or practical products.'

Comment

The ICO has been working hard over the last few years to increase its knowledge and auditing capabilities around AI, and to produce practical guidance that helps organisations when adopting and developing AI solutions. This dates back to its original Big Data, AI and Machine Learning report (published in 2014, updated in 2017 and still relevant today – this latest guidance is expressly stated to complement it and the ICO's new Explainability guidance, although the ICO acknowledges it now has additional insights to those presented in 2017). In developing this latest guidance, the ICO has also published a series of informal consultation blogs and a formal consultation draft.

However, recognising that AI is in its early stages and is developing rapidly, this latest publication is still described as ‘foundational guidance’. The ICO acknowledges that it will need to continue to offer new tools to promote privacy by design in AI (a toolkit to provide further practical support to organisations auditing the compliance of their own AI systems is, apparently, ‘forthcoming’) and to continue to update this guidance to ensure it remains relevant.