THE LENS
Digital developments in focus

New AI assurance roadmap published – is this the route to safe AI?

The UK government’s expert body for AI (the Centre for Data Ethics and Innovation, or CDEI) has published “the world’s first roadmap to catalyse development of an AI assurance ecosystem.” The roadmap was identified as a key action in the recent AI strategy (see my earlier blog) and as an important step in the government’s plans to establish “the most trusted and pro-innovation system for AI governance in the world.”

The government recognises that trust in AI systems is needed to fully unlock the benefits AI can bring, and that tools and services which provide assurance that AI systems work as intended (akin to auditing or kitemarking in other sectors) can help build this trust. It also believes the UK is ideally placed to build on its strengths in the professional services and tech sectors to become a world leader in a new multi-billion pound assurance industry.

The roadmap identifies six priority areas for action:

  1. Generate demand for reliable and effective assurance across the AI supply chain.
  2. Build a dynamic, competitive AI assurance market that provides a range of effective services and tools.
  3. Develop standards that provide a common language and scalable assessment techniques for AI assurance.
  4. Build an accountable AI assurance profession.
  5. Set out regulatory requirements that can be assured against.
  6. Improve links between industry and independent researchers, so that researchers can help develop assurance techniques and identify AI risks.

AI assurance is expected to become a useful tool for organisations to manage the risks around AI implementation, similar to the way assurance currently works for cyber security. The government’s aim is for the UK to have “a thriving and effective AI assurance ecosystem within the next 5 years”, and its white paper on the governance and regulation of AI, expected next year, will highlight the role of assurance both as a market-based way to manage AI risk and as a complement to regulation.

An example of this dual role can already be seen in the AI Auditing Framework being developed by the ICO, which has three distinct outputs: (i) a set of tools and procedures for the ICO’s assurance and investigation teams to use when assessing the compliance of an organisation using AI; (ii) detailed guidance on AI and data protection for organisations; and (iii) an AI and data protection toolkit to support organisations auditing the compliance of their own AI systems.

While a mature AI assurance ecosystem may be a little way off, and more work is needed to develop (for example) commonly used AI standards and a common language and regulatory approach to AI assurance, there are steps organisations can take now (and guidance to follow) around AI adoption. If you are using AI in your organisation, it is therefore important to make use of the tools and guidance that already exist, while monitoring developments in this fast-paced area.

Note: December has been a busy time for the CDEI. As well as publishing the main roadmap, and an extended version providing further detail on the role of AI assurance, the six priority areas and the CDEI’s follow-up work in this area, it has also published blogs on “Enabling trustworthy innovation by assuring AI systems” and “Helping recruiters to innovate responsibly with data-driven tools”.

“In the National AI Strategy, we committed to establishing the most trusted and pro-innovation system for AI governance in the world, and building an effective AI assurance ecosystem in the UK will be critical to realising this mission.” – Chris Philp, Minister for Technology and the Digital Economy, DCMS
