THE LENS
Digital developments in focus

How do you explain AI?

This week the ICO, together with the Alan Turing Institute (The Turing), published draft guidance aimed at helping organisations explain decisions made with AI to affected individuals. Their research has shown that people are worried about machines making decisions about them, and that they expect to receive explanations of AI decisions in contexts where a human making a similar decision would give an explanation.

Background to the guidance

Both the Government's AI sector deal and an independent review on growing the AI industry in the UK (the Hall Pesenti review) called for the ICO and the Turing (the UK's national institute for data science and AI) to work together to develop guidance to help organisations explain AI decisions.

To enable them to do this, the ICO and the Turing conducted public and industry research, publishing their findings in an interim report this June.

What does this guidance cover?

The draft guidance is in three parts:

  1. Part 1 covers the basics of explaining AI, looking at some key terms and concepts and the basic legal framework.
  2. Part 2 looks at explaining AI in practice, and is aimed more at technical teams (although DPOs and compliance teams should also find it useful); a flavour of what this could look like in code is sketched after this list.
  3. Part 3 focuses on what explaining AI means for your organisation. It goes into the various roles, policies, procedures and documentation that you can put in place to ensure your organisation is set up to provide meaningful explanations to affected individuals. While this is primarily targeted at your organisation’s senior management team, it may be useful for your DPO and compliance team.
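
The draft guidance does not prescribe any particular explanation technique, so the following is purely illustrative: a minimal, hypothetical Python sketch of the kind of per-decision explanation Part 2 is concerned with. The credit scenario, the feature names and the model are all invented for this post, and the simple coefficient-times-value attribution shown is just one of many possible approaches (the sketch assumes NumPy and scikit-learn are available).

```python
# A hypothetical sketch, not a technique from the guidance: the model,
# the credit scenario and the feature names below are invented, and the
# coefficient-times-value attribution is just one simple way to explain
# a linear model's decision.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "years_at_address", "existing_debt"]

# Toy training data standing in for historical decisions.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] - X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def explain_decision(x):
    """Return the decision plus each feature's signed contribution
    (coefficient * value), ranked by how strongly it pushed the outcome."""
    contributions = model.coef_[0] * x
    decision = "approved" if model.predict(x.reshape(1, -1))[0] == 1 else "declined"
    ranked = sorted(zip(feature_names, contributions),
                    key=lambda pair: abs(pair[1]), reverse=True)
    return decision, ranked

decision, ranked = explain_decision(X[0])
print(f"Decision: {decision}")
for name, weight in ranked:
    direction = "towards approval" if weight > 0 else "towards decline"
    print(f"  {name}: pushed the decision {direction} ({weight:+.2f})")
```

The point of a sketch like this is that the output can be turned into a plain-English explanation for the affected individual, rather than remaining a technical artefact inside the model.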

Key principles from the guidance 

The guidance lays out four key principles which organisations must consider when developing AI decision-making systems. These are “rooted within” the GDPR and are:

  1. Be transparent: make your use of AI for decision-making obvious and appropriately explain the decisions you make to individuals in a meaningful way.
  2. Be accountable: ensure appropriate oversight of your AI decision systems, and be answerable to others; a sketch of what an audit trail supporting this might look like follows this list.
  3. Consider context: there is no one-size-fits-all approach to explaining AI-assisted decisions.
  4. Reflect on impacts: ask and answer questions about the ethical purposes and objectives of your AI project at the initial stages of formulating the problem and defining the outcome.
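
As a purely illustrative counterpart to the second principle, the sketch below shows one way an organisation might record automated decisions so that it can later answer for them. The record fields, the JSON-lines file and the credit example are all invented for this post; the draft guidance describes governance and documentation measures, not a specific logging schema.

```python
# A hypothetical sketch of the "be accountable" principle in practice:
# keeping a record of each automated decision so the organisation can
# answer for it later. The fields and the example are invented; the
# guidance does not mandate any particular schema.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DecisionRecord:
    subject_id: str            # who the decision was about
    model_version: str         # which model produced it
    inputs: dict               # the data the model saw
    decision: str              # the outcome communicated to the individual
    explanation: str           # the meaningful explanation that was given
    reviewer: Optional[str]    # the human reviewer, if one was involved
    timestamp: str

def log_decision(record: DecisionRecord, path: str = "decision_log.jsonl") -> None:
    """Append the record to an audit log, so the organisation can later
    show what was decided, by which model version, and why."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision(DecisionRecord(
    subject_id="applicant-001",
    model_version="credit-model-v1.2",
    inputs={"income": 42000, "existing_debt": 7000},
    decision="declined",
    explanation="Existing debt relative to income was the main factor.",
    reviewer=None,
    timestamp=datetime.now(timezone.utc).isoformat(),
))
```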

Next steps

The draft guidance is out for consultation until 24 January 2020, and the final version is expected later in 2020.

The potential of AI is huge, but its implementation is often complex, which makes it difficult for people to understand how it works. And when people don’t understand a technology, doubt, uncertainty and mistrust can follow.
