
NIST looks to shine a light on the ‘black box’ with its white paper on explainable AI

There has been a growing focus over recent months, at governmental and regulatory levels, on the transparency and trustworthiness of AI solutions (see our previous blog). This is perhaps unsurprising given that AI is now widely used in making high-stakes decisions (e.g. medical diagnostics and criminal risk assessments), and some level of AI transparency is necessary to meet regulatory standards (e.g. under the GDPR in the UK and EU and the Fair Credit Reporting Act in the US). Out of this drive towards greater transparency has emerged a global assortment of guidance documents on so-called explainable AI.

The latest of these is the US National Institute of Standards and Technology’s (NIST) draft white paper, ‘Four Principles of Explainable Artificial Intelligence’. Published on 18 August 2020, it identifies four principles underpinning the core concepts of explainable AI, by which we can judge how an AI system’s decisions are explained. The principles are:

 - Explanation: AI systems should deliver accompanying evidence or reasons for all outputs.

 - Meaningful: the recipient needs to understand the system’s explanation. This is not a one-size-fits-all principle, as the meaningfulness of an explanation will be influenced by a combination of factors, including the type of user group receiving the communication (e.g. developers vs. end-users of a system) and the person’s prior knowledge, experiences and mental processes (which will likely change over time).

 - Explanation Accuracy: the explanation must correctly reflect the system’s process for generating its output (i.e. accurately explain how it arrived at its conclusion). This principle is not concerned with whether or not the system’s judgment is correct. Like the ‘meaningful’ principle, this is a contextual requirement, and so there will be different accuracy metrics for different user groups and individuals.

 - Knowledge Limits: the AI system must not provide an output to the user when it is operating in conditions that it was not designed or approved to operate in, or where the system has insufficient confidence in its decision. This seeks to avoid misleading, dangerous or unjust outputs.
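By way of illustration only (this sketch is ours, not drawn from the NIST paper), the ‘knowledge limits’ principle might look something like the following: a hypothetical classifier withholds its prediction when its confidence falls below a threshold or when the input falls outside the conditions it was designed for, and returns reasons alongside any output it does give. The function names and threshold are assumptions made purely for illustration.

```python
# Hypothetical sketch of the 'knowledge limits' principle: the system
# declines to return a prediction when it is insufficiently confident
# or when the input falls outside its intended operating conditions.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Decision:
    label: Optional[str]   # None signals that the system abstained
    confidence: float
    explanation: str       # evidence/reasons accompanying the output


def decide(scores: dict,
           input_in_design_scope: bool,
           confidence_threshold: float = 0.8) -> Decision:
    # Pick the highest-scoring label as the candidate output.
    label, confidence = max(scores.items(), key=lambda kv: kv[1])
    if not input_in_design_scope:
        return Decision(None, confidence,
                        "Input is outside the conditions the system was designed for.")
    if confidence < confidence_threshold:
        return Decision(None, confidence,
                        f"Confidence {confidence:.2f} is below the {confidence_threshold} threshold.")
    return Decision(label, confidence,
                    f"Predicted '{label}' with confidence {confidence:.2f}.")


# Example: the system abstains rather than risk a misleading output.
print(decide({"approve": 0.55, "decline": 0.45}, input_in_design_scope=True))
```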

The authors also present five broad categories of explanation (‘user benefit’, ‘societal acceptance’, ‘regulatory and compliance’, ‘system development’ and ‘owner benefit’). In addition, they provide an overview of relevant explainable AI theories in the literature and summarise algorithms in the field covering the major classes of explainable algorithms. NIST ends its report by exploring the explainability of human decision making, including the possibility of using explanations provided by people as a baseline comparison to provide insights into the challenges of designing explainable AI systems.

NIST is accepting comments on its draft until 15 October 2020.

Comment

NIST’s white paper stands in contrast to the ICO’s recent guidance ‘Project ExplAIn - Explaining decisions made with AI’ (see our client briefing). While the ICO aims to offer practical guidance for organisations, NIST made it clear on publication that its draft report is intended as a discussion paper that will help to “stimulate the conversation about what we should expect of our decision-making devices”, rather than an attempt to answer the many questions and challenges presented by explainable AI. One such challenge remains the fact that the principles of explainable AI will mean different things to different users in different contexts; as such, a gap still exists between the principles as concepts and their effective implementation in practice. NIST’s paper, and the conversations it will likely yield, represent a valuable step towards closing this gap.

AI must be explainable to society to enable understanding, trust, and adoption of new AI technologies, the decisions produced, or guidance provided by AI systems.


Tags

ai, emerging tech, data