
Studies on AI adoption and checklists for AI trustworthiness – the EU gets busy on AI

The EU has been busy this month progressing its AI agenda. A new EU study published today confirms that AI use across the EU is on the rise, and highlights some of the key barriers (both internal and external) to AI adoption. The Commission also published a checklist for organisations to assess the trustworthiness of their AI systems.

Survey on AI use

The European enterprise survey on the use of technologies based on AI, published today by the European Commission, found that 42% of enterprises have adopted at least one AI technology, while a quarter use at least two types of AI. A further 18% plan to adopt AI in the next two years.

However, barriers to adoption remain. The study highlights difficulties in hiring staff with the right skill set, the high cost of adoption and the cost of adapting operational processes as the top three internal barriers enterprises face. Liability, data standardisation and regulatory issues were cited as major external challenges, as was a lack of citizens' trust. The EU intends to use the study to monitor AI adoption across member states and to shape future AI initiatives, but it is already addressing some of the challenges identified: for example, earlier this month the Commission published a checklist for assessing trustworthy AI, discussed below.

AI Checklist 

The EU’s checklist for trustworthy AI, published on 17 July, is designed to enable organisations to self-assess the trustworthiness of their AI systems. It builds on the Ethics Guidelines for Trustworthy AI published by the Commission’s High-Level Expert Group on AI, translating the seven principles from the guidelines into a detailed assessment list. It also confirms that a ‘fundamental rights impact assessment’ should be carried out before self-assessing an AI system, and suggests questions this assessment could cover (for example, around potential discrimination in the system). The list is designed to be completed by a multidisciplinary team (both internal and external) ranging from AI designers and data scientists to legal/compliance advisors and management, and is available both in Word format and through an interactive online portal.

While the checklist is a useful tool to help organisations assess the trustworthiness of their AI systems, it joins a growing body of guidance touching on the same or similar issues (see, for example, the ICO's work on AI explainability and the FCA's work on transparency). Going forward, organisations will therefore have to consider which guidance to follow, and how to consolidate guidance from different regulators and bodies into their compliance processes.
