The proliferation of artificial intelligence across the business landscape has created a complex and unforeseen challenge for the traditional insurance industry, which now finds itself in a pivotal, transitional phase. Insurers are grappling with how to define, categorize, and underwrite a novel class of AI-driven liabilities that strain the boundaries of existing coverage frameworks like commercial general liability and cyber insurance. As the industry cautiously navigates this uncharted territory, an organization’s proactive stance on AI governance and internal risk management is rapidly becoming a critical determinant of its future insurability. This new reality is forcing both businesses and their insurance partners to rethink the very nature of risk in an era where algorithms can generate both unprecedented value and catastrophic liabilities, often from the same line of code. The path forward remains uncertain, but it is clear that the old rulebooks no longer apply to this new technological frontier.
The Uncharted Territory of AI-Driven Claims
A New Frontier of High-Stakes Incidents
A series of high-profile events has starkly illustrated the tangible and varied nature of emerging AI liabilities, moving them from theoretical concerns to costly realities. For instance, Air Canada was legally compelled to honor a discount that its customer service chatbot had incorrectly promised, setting a significant precedent for corporate accountability over AI-generated errors. In a more malicious scenario, an employee at the multinational engineering firm Arup was manipulated into transferring millions of dollars after a scammer deployed convincing deepfake videos of senior executives, demonstrating AI’s potential to enable sophisticated fraud. Furthermore, Google is facing a lawsuit after its AI Overviews feature erroneously implicated Wolf River Electric in a legal case, leading to direct financial harm when a customer canceled a contract based on the false information. These examples collectively underscore the diverse and significant liabilities stemming from AI systems, which span a wide spectrum from simple operational mistakes to complex, targeted criminal activities that exploit the technology’s persuasive power.
The financial and legal ramifications of these incidents are becoming increasingly clear, establishing that AI-related risks are not a monolithic category but a multifaceted collection of potential failures and exploits. The Air Canada case highlights liability for misinformation, where the company was held responsible for the outputs of its automated agent as if it were a human employee. The Arup deepfake incident reveals a vulnerability to advanced social engineering that bypasses traditional cybersecurity defenses, targeting human trust with hyper-realistic fabrications. Meanwhile, the Google AI Overviews issue points to the significant reputational and financial damage that can result from algorithmic “hallucinations” or errors in data aggregation presented as factual summaries. This broad spectrum of potential claims complicates the underwriting process, as each type of incident requires a different lens for risk assessment and loss calculation, pushing insurers to develop a more nuanced understanding of how different AI applications create unique exposure points for the businesses that deploy them.
An Industry in a State of Flux
The insurance sector is currently wrestling with fundamental questions about how to manage these novel risks, a situation that experts like Thomas Bentz, a partner at the law firm Holland & Knight, describe as a period of significant “confusion and growth.” The primary challenge lies in categorization, as it remains profoundly unclear where AI-related claims should reside within existing policy structures. If an AI program causes physical harm, for example, does the claim fall under a Commercial General Liability (CGL) policy, or does it belong to a specialized cyber insurance policy? Bentz notes that many AI-related incidents create significant “gaps” that are not adequately addressed by either type of coverage. This deep-seated ambiguity makes it exceptionally difficult for insurers to accurately price the associated risk and design appropriate policies, leading some to become wary of offering any AI-related coverage and even prompting attempts to explicitly exclude such incidents from standard corporate policies.
This contemporary challenge closely mirrors the early evolution of cyber insurance, a field that has existed in a substantive form for only about two decades and has been treated as a comprehensive enterprise risk solution for only the last ten years. The industry needed a substantial history of unique claims, accumulated over an extended period, to develop a clear understanding of what was covered, what required special add-ons or endorsements, and what specific risks it was unwilling to insure at any price. Because artificial intelligence has become ubiquitous only recently, insurers lack this essential historical data. The absence of a solid actuarial foundation leaves them without the statistical models and precedent needed to confidently build new policies. They are effectively trying to navigate a new and volatile risk landscape without a map, forcing a reliance on caution and experimentation while the true scope of AI liability continues to unfold in real time.
Forging a Path Forward in Underwriting
The Rise of AI Governance in Risk Assessment
In the absence of established industry standards for AI insurance, a clear directional trend is emerging: underwriters are placing greater scrutiny on a company’s internal AI governance and risk management practices. According to Panos Leledakis, founder and CEO of the IFA Academy, while the industry remains in an exploratory phase, insurers are increasingly considering a company’s internal AI protocols as a key underwriting factor. The evaluation focuses on several core areas, including the existence of a basic AI governance framework, the implementation of clear policies on permissible AI usage, and the presence of robust data handling and access control protocols when AI is involved. Furthermore, comprehensive employee training on the potential for AI misuse and related social engineering tactics, such as deepfake fraud, is becoming a crucial element. Leledakis stresses that this evaluation is currently “directional rather than formalized,” meaning it is not yet a standardized, decisive criterion but is an increasingly important part of internal risk assessment conversations among underwriters.
Rather than attempting to create a distinct and separate “AI incident” category from scratch, many insurers are taking a more pragmatic approach by quietly treating these events as extensions of familiar risk categories. This strategy allows them to manage emerging threats within established frameworks for cyber, fraud, professional liability, and errors and omissions while they continue to develop more specialized solutions. In tandem with this approach, insurers are beginning to look for specific risk mitigation strategies from their clients. Leledakis notes a growing push for stronger multi-factor authentication and mandatory call-back protocols as countermeasures to deepfake-driven fraud. There is also an increased demand for “human-in-the-loop” oversight of AI-driven client communications and explicit restrictions on using public large language models with sensitive corporate or customer data. A heightened emphasis on the rigorous logging, auditing, and disclosure of AI system outputs is also becoming a key consideration in the underwriting process.
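To make these controls concrete, the following is a minimal sketch, in Python, of how a company might route AI-drafted customer messages through human sign-off and keep an audit trail of every output. The function names, sensitive-term list, and model identifier are illustrative assumptions, not requirements drawn from any insurer’s guidance.

```python
import json
import logging
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Optional

# Hypothetical sketch: gate AI-drafted client messages behind human review and
# log every output for later audit. Term list, names, and thresholds are
# illustrative assumptions, not any insurer's required controls.

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("ai_output_audit")

# Drafts that touch commitments or sensitive data trigger human review.
SENSITIVE_TERMS = {"refund", "discount", "wire transfer", "account number"}


@dataclass
class DraftMessage:
    model: str    # identifier of the AI system that produced the draft
    prompt: str   # the customer inquiry or internal prompt
    draft: str    # the AI-generated reply awaiting review


def requires_human_review(msg: DraftMessage) -> bool:
    """Return True if the draft mentions anything on the sensitive-term list."""
    text = msg.draft.lower()
    return any(term in text for term in SENSITIVE_TERMS)


def record_output(msg: DraftMessage, approved_by: Optional[str]) -> None:
    """Write a structured audit record so AI outputs can be reviewed or disclosed later."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "approved_by": approved_by,  # None means the draft was sent without review
        **asdict(msg),
    }
    audit_log.info(json.dumps(record))


if __name__ == "__main__":
    draft = DraftMessage(
        model="support-bot-v1",  # hypothetical model identifier
        prompt="Customer asks about bereavement fares",
        draft="We can offer you a 20% discount on your next booking.",
    )
    if requires_human_review(draft):
        # In practice this would route to a review queue; here we simulate approval.
        record_output(draft, approved_by="agent_jsmith")
    else:
        record_output(draft, approved_by=None)
```

In a production system the review step would feed a queue monitored by trained staff, and the audit records would be retained in durable storage so they could be disclosed to an underwriter or regulator on request.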
Pioneering New Coverage and Adapting to New Threats
Certain AI-driven risks are already becoming quantifiable, providing insurers with the data needed to begin crafting specific coverage. A 2025 report from the digital risk insurance company Coalition, for example, specifically identified chatbots as a significant “emerging risk.” An analysis of their cyber insurance claims revealed that chatbots were cited in 5% of all web privacy claims. These claims frequently followed a repeatable legal strategy, alleging that the chatbot provider intercepted communications without user consent, thereby violating “digital wiretapping” statutes like the Florida Security of Communications Act. This development is crucial because it demonstrates how old laws are being repurposed to address new technological risks, creating a predictable and scalable legal threat. For insurers, this predictability is invaluable, as it allows them to model the risk, price it with greater accuracy, and begin the process of developing tailored underwriting standards and policy language for this specific exposure.
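To illustrate why a repeatable claim pattern makes a risk easier to price, consider a simplified frequency-severity calculation. Apart from the 5% chatbot share cited in the Coalition report, every figure below is a hypothetical placeholder used only to show the arithmetic.

```python
# Simplified frequency-severity pricing sketch. Only the 5% chatbot share of
# web privacy claims comes from the Coalition report; every other number is a
# hypothetical placeholder included to show the arithmetic.

web_privacy_claim_rate = 0.02     # assumed: 2% of policyholders file a web privacy claim per year
chatbot_share_of_claims = 0.05    # reported: chatbots cited in 5% of web privacy claims
avg_chatbot_claim_cost = 150_000  # assumed: average cost to defend or settle one such claim

# Expected annual chatbot-related loss per policyholder (frequency x severity).
expected_loss = web_privacy_claim_rate * chatbot_share_of_claims * avg_chatbot_claim_cost

# A loading factor adds margin for expenses, profit, and parameter uncertainty.
loading_factor = 1.5
premium_component = expected_loss * loading_factor

print(f"Expected chatbot-related loss per policy: ${expected_loss:,.2f}")
print(f"Premium component after loading: ${premium_component:,.2f}")
```

Better data sharpens each of these inputs: a repeatable legal theory stabilizes the frequency estimate, while settled claims give insurers a credible severity figure, which is precisely why a predictable pattern like the wiretapping suits is so valuable to underwriters.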
Despite the general uncertainty, some forward-thinking insurers are moving to address distinct AI risks head-on by developing specialized products. Coalition, for instance, announced it will offer an endorsement to its cyber insurance policies specifically covering deepfake-related incidents that lead to reputational harm. This coverage is designed to be comprehensive, including response services such as forensic analysis, legal support for content takedown requests, and professional crisis communications assistance. This marks one of the first concrete steps by an insurer to create a product tailored to a unique AI-driven threat. As Daniel Woods, a principal security researcher with Coalition, highlights, this kind of innovation is essential because AI risks challenge traditional security paradigms. Standard digital security focuses on protecting networks and endpoints, but a threat like a deepfake requires no network breach, only a few seconds of publicly available video or audio of a corporate leader. This creates a vulnerability that traditional measures cannot address, necessitating a new approach to both risk management and insurance.
