THE LENS
Digital developments in focus

How do you explain AI? Final guidance now published

As AI solutions become more prevalent, customers and regulators are demanding more information about how they are used, particularly where they are used to make decisions affecting individuals. But how should you go about explaining your use of AI?

Where your AI solution processes personal data (when training and/or deploying the solution), a good place to start is the ICO’s new guidance, ‘Project ExplAIn – Explaining decisions made with AI’. Draft guidance was published for consultation last December (see our previous blog), and the final guidance was published this May. It includes a number of tasks, checklists and guidance notes (for example, around the different explanation types that need to be considered) for organisations to follow.

Although some changes were made between the consultation and final versions (for example, the ‘seven step’ approach in the consultation draft was replaced by ‘six tasks’ for organisations to undertake), much of the content remains, as does the three-part format. Part one of the guidance is intended as a general overview for all stakeholders, while part two looks at how the guidance can be applied in practice and is aimed more at technical teams. Part three looks at what explaining AI means for an organisation and is aimed at senior executives. However, all three parts will also be of interest to compliance teams, data protection officers and risk advisors.

Our client publication – Explaining AI: The importance of transparency and explainability – provides a more detailed examination of the guidance.

However, this is not the only relevant guidance in this space. For example, the ICO produced its explainability guidance in collaboration with the Alan Turing Institute (‘The Turing’), and the Turing is currently working on an AI project with the FCA which (amongst other things) will look at transparency and explainability. The ICO’s AI Auditing Framework (see our previous blog) also includes guidance on transparency and explainability. It says “While the ExplAIn guidance already covers the challenge of AI explainability for individuals in substantial detail, this guidance includes some additional considerations about AI explainability within the organisation, e.g. for internal oversight and compliance. The two pieces of guidance are complementary, and we recommend reading both in tandem.” 

"Increasingly, organisations are using AI to support, or make decisions about individuals. If this is something you do, or something you are thinking about, this guidance is for you." (ICO/Turing guidance "Explaining AI decisions made with AI")


Tags

ai, data, emerging tech