Public interest in generative AI has reached a fever pitch since the release of GPT-4, the latest edition of OpenAI’s large language model, or LLM. Meanwhile, investors have over the past year been allocating an unprecedented amount of capital into AI and AI-adjacent companies. On 29 March, the UK Government unveiled its new white paper entitled A pro-innovation approach to AI regulation (the “AI White Paper”), expressing its desire to make the UK “the best place to create and build innovative AI companies”, and you can see our colleague Nat Donovan’s blog on that here.
Although there is as yet no market standard for investing in the latest iterations of AI technology, there are some key things that potential investors will need to focus on from a legal perspective when approaching this brave new technological world. While these are good practice in any investment scenario, they are particularly crucial when investing in AI:
- Due diligence: AI carries unique risks, particularly in relation to IP, data protection and privacy. The training sets being used for the latest LLMs are often opaque, and accusations of plagiarism and data misuse are emerging (with Italy’s data protection regulator going as far as to block ChatGPT while it investigates potential breaches). Meanwhile, there are suggestions of generative AI embedding bias and spreading harmful content. It remains to be seen whether these allegations are well founded, but clearly when investing in an AI business, it will be important to explore these risks and seek clarity as to how they are being meaningfully addressed or mitigated.
- Contractual protections: the warranty suite in any investment documentation should be crafted with the above risks in mind. As well as the usual warranties around compliance with laws and regulation, consider including AI-specific warranties (for example, relating to data collection, or compliance with ethics standards), and requiring the company to take specific actions designed to mitigate these potential risks, including implementing frameworks, undertaking regular impact assessments, and obtaining specialist advice if required. These will obviously provide limited protection in earlier-stage investments, where claiming against the company (or its founders) is unlikely to be attractive, but they will focus minds on identifying and addressing these risks.
- Regulatory approvals: it is important to be aware that AI and AI-adjacent sectors (such as advanced robotics, computing hardware and dual-use technologies) are typically a focus of foreign direct investment or national security regimes across Europe and more broadly. In the UK, mandatory notification to the UK Government under the National Security and Investment Act 2021 (NSIA) is likely and approval may well be needed. For more on the NSIA and AI regulation please see our colleague Lisa Wright’s article here, which is part of our Regulating AI series.
- Latest developments: when investing in an AI business, it will be particularly important to be on top of latest developments in the space in real time. Increasing amounts of AI-specific regulation are undoubtedly on the way, and the direction of travel will not always be clear in advance. That said, it seems likely that regulation will focus on transparency and disclosure for large AI businesses (meaning that strong governance and record-keeping practices will be vitally important from the get-go). For example, the OECD AI Principles (on which the UK principles proposed in the AI White Paper are based) emphasise that companies should keep good records of training data used and decisions taken, and that they should be able to explain the workings of their algorithms to regulators and the public. This is also a key concern for regulators like the UK’s ICO, FCA and CMA. Looking ahead, if AI becomes an increasingly large sector of the economy, then it could fall subject to further governmental intervention, export control laws and taxation.
- Reputational risk and litigation: although AI technology carries great promise, it is likely to be – and indeed already has been – the subject of intense concern in numerous areas, including accusations surrounding negative effects on employment, compute-hungry LLMs consuming carbon-intensive resources, biased AI gatekeepers discriminating against particular groups, and so on. One way to mitigate this will be for AI businesses to ensure that they have clear programmes in place regarding ethics, safety, transparency and non-discrimination. The recent letter from Elon Musk and others calling for a pause on AI development illustrates the level of scrutiny that there will be on AI activities.
- Liability for AI: as AI models become more advanced and autonomous and more parties are involved in the chain from the original developer to the end-user, there may be real uncertainty as to who in that chain is responsible for particular harms. Whilst the AI White Paper doesn’t propose any immediate changes, proposed legislation such as the EU’s AI Liability Directive already contains measures aimed at making it easier to attribute liability, and further developments in this increasingly complex area are likely.
- Shaping the future: as recommended by Sir Patrick Vallance in his Pro-innovation Regulation of Digital Technologies Review and confirmed in the AI White Paper, a regulatory sandbox is being established to allow businesses to test how regulation could be applied to AI products and services. Both the review and the White Paper emphasise the importance of UK global leadership in AI innovation, so investors can expect an involved and collaborative approach from government and regulators, already evident in schemes like Future Fund: Breakthrough, a new £375 million programme to encourage private investors to co-invest with the government in high-growth innovative businesses. This pro-innovation environment is likely to create many opportunities for growing AI companies, and investors should look out for companies able to demonstrate that they can take advantage of the favourable conditions.
Being alert to the legal and regulatory landscape will be essential for investors to stay one step ahead in a generative AI battlefield that is becoming increasingly crowded and complex. More and more innovative use cases are being uncovered, and enhanced productivity will continue to change the commercial landscape in unpredictable ways. Creating lasting value in such a rapidly changing field will require the ability to spot and implement good ethical practices and maintain risk awareness. Investors who succeed in identifying opportunities with this in mind will be well placed to take advantage of the coming wave.
For more information on the risks and opportunities around AI, explore the different publications and podcasts from our Regulating AI series.