THE LENS
Digital developments in focus
4 minute read

Generative AI – three golden rules

You’ve seen the headlines around generative AI (good and bad), but what three things should your organisation do if it is planning to use (or is already using) this groundbreaking technology?

In this blog, I look at why you need to understand a bit about the tech itself, the opportunities it creates and the risks it poses.

1. Understand the tech

It is helpful to understand a little about the different types of model/product that are out there as these can create different opportunities and risks.

  • LLMs v multi-modal models: Large language models (LLMs) like OpenAI’s ChatGPT and Google’s Bard are trained on large amounts of text-based data, often scraped from the internet (web pages, online content, social media posts etc.). The algorithms analyse the relationships between different words and turn those relationships into a probability model. When the model is asked a question (a prompt), it answers based on the relationships between the words in its model (a toy illustration follows this list). GPT-4, described by OpenAI as its “latest milestone in scaling up deep learning”, is actually a large multi-modal model (rather than an LLM), which means it can accept images as well as text as inputs, although its outputs are text-based. That said, the image input capability is not yet widely available.
  • Enterprise tools: To date, many people have been using the publicly available versions of this technology. However, generative AI will increasingly be built into enterprise tools. For example, LLMs are, or will soon be, incorporated into search engines (Bing’s chat and Google’s Bard being early examples), and Microsoft Copilot will be integrated into apps such as Word, PowerPoint, Teams and Excel. Your organisation should therefore spend time assessing the suitability of these tools when they are made available to you through your IT vendors. For example, what data and apps will they be able to access, and how much control will your organisation have over that use?
  • Public v Private: In the same way that public and private clouds have very different risk profiles, we anticipate a distinction between publicly accessible generative AI tools and those accessed via a private tenancy (where a customer’s data remains encrypted and within the organisation’s boundaries). It is therefore important that you understand which type of generative AI product your organisation is proposing to use.
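By way of illustration only, the toy Python sketch below builds a word-level probability model from a tiny corpus and uses it to generate text. Real LLMs use neural networks trained over tokens at vastly greater scale, so this is a loose analogy for the “relationships between words” idea, not how any actual product works:

```python
from collections import Counter, defaultdict
import random

# A toy corpus standing in for the web-scale text an LLM is trained on.
corpus = "the model predicts the next word the model learns word relationships".split()

# Count how often each word follows each other word: a crude probability model.
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def next_word(word):
    """Sample a next word in proportion to how often it followed `word`."""
    counts = followers[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# "Prompting" the toy model: generate a short continuation of "the".
word = "the"
output = [word]
for _ in range(5):
    if word not in followers:  # no known continuation; stop generating
        break
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```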

2. Understand the opportunities 

  • Realise the benefits: Do you know how generative AI can help your organisation, and how to get the best out of it? Our briefing, Generative AI: Practical Suggestions for Legal Teams, looks at some potential use cases (for example, using it to start learning about a specific topic or concept, or to reframe or summarise content), as well as providing tips on how to phrase your requests/write good prompts so that you get the best output from the AI model (a worked example follows this list).
  • Use the buzz to get buy-in to new projects and existing policies: Some organisations are reviewing whether specific policies are needed to manage the use of generative AI. However, the hype around ChatGPT and AI more generally can also be used to drive interest in existing digital projects and to remind employees of current tech capabilities and the internal rules around their use. For example, we would not expect an employee to use research from an unreliable source in an important presentation, whether that source is ChatGPT or Wikipedia, and internal training/policies help enforce this. Similarly, it is often important to ensure that employees do not put confidential, sensitive or personal information into public-facing tech products. Again, this risk is not unique to generative AI models, and existing tech policies are likely to provide warnings around it already. Which leads us nicely on to the risks…
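As a purely hypothetical illustration of what “writing a good prompt” means in practice, here is a minimal Python sketch using the OpenAI client library. The prompts, model name and workflow are our own assumptions for illustration; they are not taken from the briefing, and the same idea applies when typing prompts into a chat interface directly:

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# A vague prompt leaves the model to guess the audience, format and scope.
vague_prompt = "Tell me about liability clauses."

# A better prompt states the role, task, audience, format and limits.
specific_prompt = (
    "You are assisting a commercial lawyer. Summarise the key negotiation "
    "points in a limitation of liability clause in a SaaS contract, as "
    "five short bullet points written for a non-specialist audience."
)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name; use whatever your vendor offers
    messages=[{"role": "user", "content": specific_prompt}],
)
print(response.choices[0].message.content)
```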

3. Understand the risks

As with any new technology, it is important to understand the risks it creates and the ways in which those risks can be mitigated. When discussing this with clients, we tend to group the risks into:

  • Input risks: Some AI models learn from the inputs they receive, and it is not always clear where that information goes, who has access to it and if it will be kept secure/confidential. Samsung recently banned the use of generative AI after employees uploaded sensitive code to ChatGPT, and the UK’s National Cyber Security Centre (NCSC) and data regulator (ICO) both warn against including sensitive and personal information in searches (see our blog). When you sign up to ChatGPT, it even tells you not to share any sensitive information in your conversations, and warns that conversations may be reviewed by its AI trainers (to improve its system).
  • Output risks: There are also a number of risks relating to the use of, or reliance on, the output of generative AI tools. ChatGPT itself warns that, while it has safeguards in place, “the system may occasionally generate incorrect or misleading information and produce offensive or biased content”, and that it is not intended to give advice. It is known to make up (hallucinate) or omit information, even though the answer may look convincing and thorough. There are also risks that the output could be out of date (the “knowledge” cut-off for ChatGPT is September 2021) or infringe third-party IP. These tools are therefore best used as a collaborator/assistant, with their output human-reviewed and carefully checked.
  • Regulatory risks: Regulators are becoming increasingly interested in generative AI. The Italian data protection regulator temporarily banned ChatGPT over privacy concerns, the UK’s data regulator published Generative AI: eight questions that developers and users need to ask, the NCSC provided security advice (see our blog), and the UK’s competition authority recently launched a review of the competition and consumer protection considerations around foundation models, including LLMs and generative AI. Both the UK and EU are also looking to introduce new AI-related laws or rules which will impact its use (see our blog). It is therefore important to monitor developments in this space, both to ensure that you comply with any new rules or guidance and to keep abreast of risks that will evolve as the technology develops at pace.

We continue to track closely developments in generative AI and its implementation by and for our clients. For more information on AI, see our Regulating AI hub and Series.

Our Client Innovation Network also offers a forum for in-house legal teams at our client organisations to connect and share ideas and experiences on innovation topics, including generative AI.

