THE LENS
Digital developments in focus

Talking AI with the experts - Faculty explain compliance through a technical lens

We know that AI raises a wide range of legal risks, and that you need to understand enough about how the technology works to appreciate and assess those risks fully. But how do you gain that understanding from a trusted source when faced with a sea of varied information online? We speak to the experts – in this case Faculty, one of the world’s leading providers of human-first AI solutions.

We have recorded a podcast with Rupert Everett (Faculty’s General Counsel) and Dr Kat James (Technical Director of its Retail Consumer Team), in which we discuss a range of topics, including:

  • What good AI regulation looks like – both from the perspective of a lawyer at an AI company and from a technical perspective – and whether the regulatory guardrails coming down the line could help or hinder innovation. In Kat’s view, regulation per se does not hinder innovation (her background in the highly regulated field of genetics is proof of that), but she is less confident that it will help AI adoption as much as is hoped. Rupert also highlighted the importance of consistency, certainty and clarity in AI regulation, and discussed how standards will be a key tool for providing clarity on compliance.
     
  • How you can technically manage regulatory concerns in areas such as transparency, fairness, bias and explainability. Unlike the other areas, explainability in AI is a mathematical concept (see the first sketch after this list). Kat discusses how the ICO mentions this in its guidance, using the term interpretability, which she sees used interchangeably with explainability.
     
  • Whether training data ‘stays in’ the AI model, which is relevant from an IP and privacy perspective. The short answer is (typically) no, although caution should still be exercised - particularly if, in practice, the output appears effectively to recreate that input data (the second sketch after this list gives a toy illustration).
     
  • Predictions for the coming year. As well as discussing how AI is racing through the typical hype cycle, Rupert and Kat made predictions including:
    • the rise of image generation models, which may become available in products in the way that ChatGPT has become available in the Microsoft suite, and which may create new opportunities and challenges around deepfakes;
    • a settling of the vendor landscape and the adoption of other models and providers (e.g. AWS);
    • challenges around governance controls for foundation models and the risks posed by bad actors; and 
    • how to manage ideological homogenisation - where a small group of people is building the most powerful models, but their views may not be representative of a sufficiently diverse range of groups.
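
To make the explainability point concrete, here is a minimal Python sketch (our own illustration, not taken from the podcast) of one widely used mathematical technique: permutation importance, which scores each input feature by how much the model’s accuracy drops when that feature’s values are randomly shuffled. The dataset here is synthetic and the model choice is arbitrary.

    # A minimal sketch of explainability as a mathematical exercise:
    # permutation importance quantifies how much each feature drives
    # the model's predictions.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Synthetic data standing in for a real, compliance-relevant dataset.
    X, y = make_classification(n_samples=500, n_features=5,
                               n_informative=3, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle each feature in turn and measure the drop in test accuracy:
    # a larger drop means the model leans more heavily on that feature.
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=10, random_state=0)
    for i, score in enumerate(result.importances_mean):
        print(f"feature {i}: mean importance {score:.3f}")

Scores like these give a concrete, numeric answer to “which inputs drove this decision?”, which is the sense in which explainability (or interpretability) is a mathematical rather than purely legal concept.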
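
On the training data question, a toy illustration (again our own, and deliberately simplified) shows why the answer is “typically no, but it depends on the model”. A linear regression compresses its training data into a couple of aggregate coefficients, whereas a one-nearest-neighbour model stores the training set verbatim and will reproduce a training record exactly.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.neighbors import KNeighborsRegressor

    # Hypothetical training data: four records, one feature each.
    X = np.array([[1.0], [2.0], [3.0], [4.0]])
    y = np.array([2.0, 3.5, 9.0, 7.5])

    # The linear model distils the data into a slope and an intercept...
    linear = LinearRegression().fit(X, y)
    # ...while the 1-NN model keeps a copy of every training point.
    knn = KNeighborsRegressor(n_neighbors=1).fit(X, y)

    print("linear model retains:", linear.coef_, linear.intercept_)
    print("linear prediction at x=2:", linear.predict([[2.0]]))  # ~4.4, not 3.5
    print("1-NN prediction at x=2:", knn.predict([[2.0]]))       # exactly 3.5

Large generative models sit between these extremes: they usually learn patterns rather than store records, but memorisation of individual training examples can and does occur, which is why the caution above is warranted.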

For more Slaughter and May AI content, see our AI Lens blogs, or visit our Regulating AI hub.

Tags

ai, digital regulation