Simon Glairy is a distinguished authority at the intersection of insurance and emerging technologies, currently serving as a lead consultant on AI-driven risk assessment. With a career dedicated to helping small and mid-sized enterprises navigate the complexities of digital transformation, he offers deep insight into how intangible risks manifest as real-world liabilities. In this conversation, we explore the rise of “silent” AI exposure, the looming shift in industry-standard exclusions, and how businesses can protect themselves from the legal pitfalls of generative content and automated physical systems.
With many small businesses adopting artificial intelligence before insurance policies explicitly address the technology, how does this “silent” exposure compare to the early days of cyber risk? What specific steps should an SME take to identify where their current coverage might be unintentionally protecting—or failing—them?
The parallels are striking; we are essentially watching the “cyber 2.0” playbook unfold. Fifteen or twenty years ago, cyber risks were invisible because they were buried in traditional policies that never mentioned the internet, and AI is in that same “silent” phase today. About 74% of small businesses are already using AI tools, yet most of their policies, whether Professional Liability or D&O, haven’t been updated to account for it. To find the gaps, an SME should perform a “usage audit” to map exactly where AI touches its workflow, then cross-reference that inventory against the policy’s definition of a “wrongful act.” If your policy doesn’t explicitly exclude AI, you may have unintentional coverage for now, but you are operating in a period of ambiguity where the insurer hasn’t priced the risk correctly, and that safety net won’t last forever.
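In practice, that audit can start as a simple structured inventory. Here is a minimal sketch in Python, with entirely hypothetical tool names, activities, and policy terms, of how an SME might flag the “silent” zone where the policy wording neither names nor excludes an AI use:

```python
# Minimal "usage audit" sketch: inventory where AI touches the workflow,
# then flag entries the policy wording neither names nor excludes.
# Every tool name and policy term here is a hypothetical example.

AI_INVENTORY = [
    {"tool": "marketing chatbot", "activity": "advertising"},
    {"tool": "resume screener", "activity": "hiring decisions"},
    {"tool": "code assistant", "activity": "software delivery"},
]

# Activities the policy's "wrongful act" definition explicitly names,
# copied from the actual wording during the audit (hypothetical here).
POLICY_NAMED_ACTIVITIES = {"advertising", "software delivery"}

def find_silent_exposures(inventory, named_activities):
    """Return uses the policy is silent on: ambiguous, unpriced risk."""
    return [entry for entry in inventory if entry["activity"] not in named_activities]

for gap in find_silent_exposures(AI_INVENTORY, POLICY_NAMED_ACTIVITIES):
    print(f"Ambiguous coverage: {gap['tool']} ({gap['activity']}) - raise with your broker")
```

The output of the cross-reference is simply a list of conversation starters for your broker, not a legal conclusion.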
Generative AI tools often produce content that can infringe on copyrights or generate inaccurate “hallucinations.” What are the immediate legal consequences for a business using these tools for marketing, and how can they verify that AI-generated outputs won’t lead to defamation or intellectual property claims?
The immediate legal threat is a “personal and advertising injury” claim, which can be devastating for a small brand’s reputation and balance sheet. Because AI models are trained on massive datasets, a chatbot might spit out a marketing slogan or an image that is nearly identical to a copyrighted work, leading to a direct infringement suit. Beyond IP theft, there is the “hallucination” factor where an AI might confidently state a falsehood about a competitor or a customer, triggering a defamation claim. To mitigate this, businesses need to implement a “human-in-the-loop” protocol where every single piece of AI-generated content is vetted by a staff member for factual accuracy and originality before it goes live. You cannot simply trust the machine’s output; you must treat it like a draft from a very creative but potentially unreliable intern.
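To make the “human-in-the-loop” protocol concrete, here is a minimal sketch of a publishing gate that refuses to ship AI-generated content without two human sign-offs. The field names and reviewer checks are illustrative assumptions, not a legal standard:

```python
# Human-in-the-loop publishing gate sketch: AI output is treated as a draft
# and cannot go live without named human sign-offs on accuracy and originality.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    text: str
    fact_checked_by: Optional[str] = None         # who verified the claims are true
    originality_checked_by: Optional[str] = None  # who screened for IP overlap

def publish(draft: Draft) -> str:
    # Block anything lacking both sign-offs: the AI output is only ever a draft.
    if not (draft.fact_checked_by and draft.originality_checked_by):
        raise PermissionError("AI draft blocked: missing human review sign-off")
    return f"published (reviewed by {draft.fact_checked_by} and {draft.originality_checked_by})"

slogan = Draft(text="The freshest coffee in town, guaranteed.")
slogan.fact_checked_by = "j.smith"          # verified the "guaranteed" claim is defensible
slogan.originality_checked_by = "a.jones"   # confirmed the line is not a known tagline
print(publish(slogan))
```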
New industry exclusions for AI-related losses are set to take effect by 2026. How will these changes force businesses to rethink their liability strategies, and what should a company look for when trying to negotiate affirmative coverage back into their professional or general liability policies?
The year 2026 marks a massive shift because the Insurance Services Office (ISO) has introduced specific exclusions that will essentially “carve out” AI risks from standard policies. This forces a very honest and perhaps difficult conversation between the business and the broker, as the “silent” coverage will vanish, leaving a gaping hole in their protection. When renegotiating, businesses should look for “affirmative coverage”—this means the policy explicitly states it will cover AI-related incidents rather than just remaining silent. You want to negotiate for endorsements that specifically name bodily injury, property damage, and advertising injury caused by AI systems. This ensures that even as the industry tightens its belt, your specific use cases, whether it’s a customer service bot or a data analysis tool, remain financially protected.
As AI moves from digital content into physical systems like delivery robots or smart building equipment, what new categories of bodily injury emerge? How do the underwriting challenges for these tangible risks differ from the intangible risks of data misuse or reputational harm?
We are entering a phase where digital errors have physical consequences, such as a delivery robot malfunctioning and colliding with a pedestrian or a smart building system failing to trigger a fire alarm. These “tangible” risks are terrifying for underwriters because they involve physical harm, which is often much more expensive to settle than a reputational smear. The challenge lies in the “bridge” between the software’s decision-making and the hardware’s action; if a robot makes a wrong turn, is it a software bug, a hardware failure, or a mapping error? Unlike data misuse, which has a predictable “per-record” cost, a single bodily injury claim from an autonomous system can result in multi-million dollar payouts. Underwriters are currently struggling with the frequency of these events because, while we can imagine the severity, we don’t yet have decades of data to know how often these robots will actually fail.
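The underwriting problem can be reduced to simple arithmetic: expected annual loss is frequency times severity, and for autonomous hardware the frequency term is still a guess. A short illustration, with every figure invented purely for the example:

```python
# Why tangible AI risk is hard to price: expected loss = frequency x severity.
# Data misuse has a predictable per-record cost and years of breach data;
# for autonomous hardware, severity is imaginable but frequency is unknown.
# All figures below are invented for illustration only.

def expected_annual_loss(frequency_per_year: float, severity_usd: float) -> float:
    return frequency_per_year * severity_usd

# Data misuse: assume ~1 incident every 3 years, 10,000 records at $165 each.
data_misuse = expected_annual_loss(1 / 3, 10_000 * 165)
print(f"data misuse expected loss: ${data_misuse:,.0f}/yr")

# Delivery-robot injury: one assumed $3M claim; the frequency is the unknown.
for guessed_frequency in (0.001, 0.01, 0.1):
    loss = expected_annual_loss(guessed_frequency, 3_000_000)
    print(f"assumed frequency {guessed_frequency}/yr -> expected loss ${loss:,.0f}/yr")
```

The two-orders-of-magnitude spread in that loop is exactly the uncertainty an underwriter has to price today.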
While initial litigation has targeted AI developers, the focus is shifting toward the businesses that deploy these systems. How should an SME evaluate their shared liability when using a third-party chatbot, and what internal protocols can mitigate the risk of being held accountable for the AI’s behavior?
The “honeymoon phase” in which only the big developers like OpenAI or Google were sued is ending; the legal crosshairs are now moving toward the end user. If your business deploys a third-party chatbot and that bot gives harmful advice or uses biased language, you are the one with the direct relationship with the victim, which makes you the primary target for a lawsuit. SMEs must scrutinize their vendor contracts for indemnification clauses, which determine whether the AI developer will cover your legal fees if their tool gets you sued. Internally, you need a strict AI Governance Policy that dictates what the AI is allowed to discuss, paired with risk training for staff; with 91% of employees expecting to use these tools in the future, you cannot assume that awareness will develop on its own. Shared liability is a reality, and you cannot simply point the finger at the software provider when it is your brand’s logo at the top of the chat window.
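As one illustration of what such a policy can look like in machine-readable form, here is a default-deny allowlist sketch for a customer-facing bot; the topic names and routing rules below are hypothetical examples, not a template:

```python
# AI Governance Policy sketch: a machine-readable allowlist for what a
# customer-facing bot may discuss, with escalation for everything else.
# Topic names and routing rules are hypothetical examples.

GOVERNANCE_POLICY = {
    "allowed_topics": {"order status", "store hours", "return policy"},
    "always_escalate": {"medical", "legal", "financial advice", "complaints"},
    "log_all_transcripts": True,  # preserves evidence if a claim is ever filed
}

def route(topic: str) -> str:
    """Default-deny routing: the bot answers only explicitly allowed topics."""
    if topic in GOVERNANCE_POLICY["always_escalate"]:
        return "hand off to a human agent"
    if topic in GOVERNANCE_POLICY["allowed_topics"]:
        return "bot may answer"
    return "decline and offer human follow-up"

for topic in ("order status", "legal", "product rumors"):
    print(topic, "->", route(topic))
```

The design choice that matters is the final fallback line: anything the policy has not explicitly approved gets declined, so the bot’s failure mode is silence rather than liability.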
What is your forecast for AI liability?
I predict that within the next three to five years, we will see AI liability transition from a niche add-on to a mandatory, standalone pillar of commercial insurance, much like cyber insurance did over the last decade. As we approach 2026, the industry will move away from “all-risk” ambiguity and toward highly specified “named-peril” AI policies that require businesses to prove they have safety protocols in place before they can even get a quote. We will likely see a surge in “failure to perform” claims where AI systems marketed as efficiency-boosters actually cause business interruptions or financial loss. Ultimately, the winners will be the companies that treat AI risk not as a tech problem, but as a core boardroom priority, ensuring their insurance towers are rebuilt to withstand the unique pressures of an automated world.
