THE LENS
Digital developments in focus

How GenAI risks are reshaping the E&O insurance market

Ross Francis-Pike, Fernanda Dias, Natalie Donovan 

The insurance market’s understanding of the risks around generative AI (GenAI) is still developing, not least because a single GenAI incident can engage multiple policies at once.

The main challenge for the sector is how “carriers” (the companies that create and sell insurance policies) can effectively address the risks introduced by GenAI without taking on excessive exposure. Some insurers have opted to add “Absolute AI Exclusions” to their policies, meaning any claim involving the use, development, or deployment of AI is not covered. For example, Berkley has recently applied such exclusions to its Directors’ and Officers’ (D&O), Errors and Omissions (E&O), and Fiduciary Liability products. Others are considering whether more targeted coverage could focus on how well a company manages and governs its use of AI. This would allow insurers to tailor coverage and pricing to a company’s specific controls and risk management practices, rather than excluding all AI-related risk outright.

Lloyd’s report

A recent Lloyd’s Market Association (LMA) report has examined the complexities, disputes, and aggregation risks introduced by GenAI, particularly in the E&O market. The core issue discussed in the report is that traditional E&O policies were designed to address human error, not algorithm-driven failures. The report notes that:

  • Hallucinations: AI models hallucinate (i.e. they generate incorrect or fabricated information). Where a professional uses GenAI in the provision of advice or services, they could ultimately be liable for relying on erroneous outputs.
     
  • Confidentiality and data breach: GenAI introduces a significant data protection and confidentiality risk. Professionals handle sensitive client information, and the report warns that submitting private or confidential documents to a public AI model could breach the professional’s duty of confidentiality, exposing firms to regulatory fines and contractual or tortious liability.
     
  • Systemic risk and aggregation: An underwriter may be prepared to cover a human failure to detect an error in a Large Language Model’s output, but the policy will not generally respond where the loss stems from an issue with the insured’s systems or software. The latter can occur on a systemic scale and would generally fall to IT or cyber policies. This scenario also creates a significant challenge under E&O aggregation clauses: losses from such an incident could be treated as one claim, subject to a single limit and deductible, or as multiple separate claims. Where an AI tool has repeatedly caused the same error, deciding whether an aggregation or series clause applies could be difficult, and the financial consequences of that decision can be substantial, as the sketch below illustrates.
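
To make that point concrete, here is a minimal illustrative sketch, using entirely hypothetical figures and simplified policy mechanics (real clauses are far more nuanced), of how the one-claim-or-many question changes what a policy pays:

```python
# Hypothetical illustration of an E&O aggregation clause.
# All figures are invented; real policy mechanics vary widely.

def recovery_separate(claims, deductible, per_claim_limit):
    """Each loss is a separate claim: the deductible and the limit
    apply to every claim individually."""
    return sum(min(max(c - deductible, 0), per_claim_limit) for c in claims)

def recovery_aggregated(claims, deductible, aggregate_limit):
    """All losses arising from the same AI flaw are one claim:
    a single deductible and a single limit apply to the total."""
    total = sum(claims)
    return min(max(total - deductible, 0), aggregate_limit)

# Suppose a flawed GenAI tool repeats the same error in 50 pieces of
# client advice, each causing a 100,000 loss.
claims = [100_000] * 50       # 5,000,000 of losses in total
deductible = 50_000
limit = 1_000_000

print(recovery_separate(claims, deductible, limit))    # 2,500,000
print(recovery_aggregated(claims, deductible, limit))  # 1,000,000
```

On these invented numbers, the insured recovers 2.5m if each error is a separate claim but only 1m if a series clause sweeps them into a single aggregated claim, which is precisely why the report flags the characterisation question as a likely source of dispute.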

Navigating the new landscape

The LMA report serves as a warning that the E&O insurance market faces growing uncertainty as GenAI adoption accelerates.

While some insurers are struggling to define where their E&O coverage ends and other policies begin, others are introducing broad “Absolute” AI Exclusions. Products are also being developed to fill the gap created by such exclusions. Some insurers are offering endorsements that restore coverage for AI under specific, well-defined conditions, while others are developing standalone AI liability products designed to address and price the unique risks associated with AI. These policies aim to clarify the boundaries of responsibility between insurers and policyholders, ensuring that coverage aligns with how AI is actually used and managed within organisations.

Given these developments, the report flags the need for clear policy wording and careful consideration of how GenAI is used within insured firms. Pricing models may evolve to reflect governance quality rather than relying solely on sector benchmarks. Carriers will also likely begin modelling systemic risk – where a single flaw in a widely used AI platform could trigger simultaneous claims – and strengthening aggregation clauses to manage this exposure.
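
As a purely illustrative sketch of why systemic risk is different in kind, the toy simulation below (every parameter is hypothetical, not taken from the report) compares independent professional errors with errors driven by a single flaw in a shared AI platform:

```python
import random

# Toy comparison of independent vs systemic (correlated) AI losses.
# All probabilities and amounts are hypothetical.

N_FIRMS = 1_000          # insured firms using the same GenAI platform
P_INDEPENDENT = 0.01     # chance a firm suffers its own, unrelated error
P_PLATFORM_FLAW = 0.01   # chance the shared platform itself fails this year
LOSS_PER_FIRM = 100_000

def annual_loss():
    # Independent errors: roughly N_FIRMS * P_INDEPENDENT firms are hit.
    loss = sum(LOSS_PER_FIRM for _ in range(N_FIRMS)
               if random.random() < P_INDEPENDENT)
    # Systemic flaw: rare, but when it happens every firm is hit at once.
    if random.random() < P_PLATFORM_FLAW:
        loss += N_FIRMS * LOSS_PER_FIRM
    return loss

losses = [annual_loss() for _ in range(10_000)]
print("mean annual loss:", sum(losses) / len(losses))
print("worst year:", max(losses))  # dominated by the platform-wide event
```

In this toy model both sources of loss have the same expected annual cost, but the platform flaw concentrates it in a single year of simultaneous claims across the whole book, which is exactly the tail exposure that strengthened aggregation clauses and systemic-risk modelling are meant to address.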

As for the steps organisations can take to secure the best coverage possible, the report makes one point clear: professional liability coverage will increasingly depend on how firms govern their use of AI. Insurers are moving beyond asking whether AI is used; they now seek detailed information on how it is deployed and controlled. Underwriters are beginning to request evidence of governance through questionnaires covering acceptable use policies, staff training, and human-in-the-loop protocols. Governance will therefore increasingly influence premiums and coverage terms, and firms that demonstrate strong controls are likely to secure better pricing and terms than those without such measures.

