Across small and mid-sized businesses, AI now powers routine work from marketing to support, yet liability gaps have widened faster than policies have evolved, and exclusions have multiplied. The shift has been stark: as generative systems draft copy, chatbots triage tickets, and recommendation engines shape sales, disputed outcomes surface as faulty outputs, biased selections, and misclassifications. Traditional policies were never designed to parse whether a flawed decision came from human judgment or a machine’s inference. In that vacuum, carriers introduced AI exclusions, leaving insureds to discover at claim time that “silent” language may not respond. Into this tension, Counterpart introduced explicit protections intended to meet day-to-day AI use rather than experimental edge cases.
Addressing Embedded AI Risk
Counterpart extended Affirmative Artificial Intelligence Coverage and added a Technology Errors & Omissions insuring agreement to its Miscellaneous Professional Liability and Allied Health products, framing both as a response to AI woven into ordinary operations. The endorsements were positioned to address claims tied to in-house and third-party systems, reflecting how tools are actually procured and integrated across technology stacks. Rather than depending on legacy E&O, D&O, cyber, or CGL forms to stretch, the approach brought clear triggers for losses stemming from algorithmic outputs and the professional services that rely on them. That clarity matters when a diagnostic image is misread by a triage tool, a résumé screener skews a hiring slate, or a predictive score misprices a service; these are issues that have already surfaced in litigation, not hypotheticals.
Moreover, the structure acknowledged that coverage disputes often hinge on definitions and carve-backs, not only on facts. Explicit grants were designed to reduce the gray zone created by modern exclusions, especially where vendors supply models as black-box services.
Market Signals And What Comes Next
The broader market has been shifting from implied to affirmative coverage as AI exclusions harden in standard forms, and brokers increasingly seek endorsements that reflect daily workflows rather than exotic tech deployments. Counterpart highlighted distribution at scale (over 28,000 policies through 2,800 brokers) and indicated plans to extend availability across professions and risk classes, suggesting a playbook other carriers may emulate. For insureds, the takeaway centered on mapping AI touchpoints across service delivery, vendor reliance, and customer interaction, then aligning those points with explicit policy language that names the risk.
In practice, the next step involves cataloging model use, documenting human-in-the-loop controls, and verifying that contracts with AI vendors include indemnities that dovetail with the new endorsements. Brokers can translate that inventory into terms, limits, and retro dates that reflect operational reality rather than abstract exposure. By separating algorithmic output risk from more general tech liability, the updated coverage sets a blueprint for cleaner claims handling and fewer surprises at tender. As exclusions continue to tighten, affirmative grants offer a pragmatic path and signal where specialty liability is headed.
