Simon Glairy is a preeminent figure in the Insurtech space, renowned for his ability to translate the chaotic landscape of digital threats into actionable risk management frameworks. With global cybercrime costs projected to surpass $10 trillion annually, his insights arrive at a critical juncture where technology outpaces traditional policy structures. In this discussion, we explore the shifting paradigms of liability, the hidden gaps in coverage, and the necessity of moving beyond static assessments to protect the modern enterprise.
Global cyberattacks have surged by roughly 70% in recent years, with deepfakes and AI-driven phishing becoming major threats. How should brokers explain the liability of autonomous AI agents to clients, and what specific language must they look for in policy wording to ensure generative AI losses are actually covered?
The rise of autonomous AI agents introduces a fundamental question: is an AI agent considered a part of the insured entity? When an agent acting on behalf of a company creates a vulnerability or triggers a breach, brokers must clarify whether the policy definitions of “employee” or “authorized user” extend to these digital entities. We are seeing that 89% of organizations are already encountering risky AI prompts, which makes this a tangible liability issue rather than a theoretical one. Brokers need to scrutinize policies for specific generative AI triggers and ensure that the definition of “computer system” is broad enough to include third-party AI models and proprietary autonomous agents. It is vital to move away from ambiguous language because carriers are currently reassessing their portfolios to gauge systemic risk, meaning silent AI coverage is rapidly disappearing.
Business email compromise and funds transfer fraud frequently result in tens of billions in losses, yet these areas often face heavy sub-limits. What metrics should brokers use to stress test a client’s policy, and how can they effectively challenge insurer service standards during the claims process?
Business email compromise (BEC) has been responsible for tens of billions of dollars in losses over the last decade, yet even large enterprises with hundreds of millions in limits often find themselves crippled by small sub-limits on social engineering. To stress test a policy, brokers should run historical loss simulations against the specific sub-limits for funds transfer fraud to reveal exactly where the financial “cliff” resides. They must also demand clear service-level agreements (SLAs) regarding incident response times, as the first few hours of a BEC event determine if funds can be clawed back. When challenging insurers, brokers should use the carrier’s historical claims performance and their commitment to 24/7 technical support as leverage. If a carrier cannot guarantee immediate access to specialized forensic teams, the policy limit itself becomes secondary to the failure in service.
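The stress test described above can be made concrete. Below is a minimal sketch of replaying historical loss events against a policy's social-engineering sub-limit to locate the financial "cliff" — the point where retained losses balloon. All figures (losses, sub-limit, deductible) are hypothetical illustrations, not values from any actual policy.

```python
# Hedged sketch: replay historical BEC / funds-transfer-fraud losses
# against a sub-limit to see what the insured would actually retain.
# All dollar amounts are hypothetical.

def retained_loss(loss: float, sub_limit: float, deductible: float = 0.0) -> float:
    """Portion of a single loss the insured keeps after the sub-limit applies."""
    covered = min(max(loss - deductible, 0.0), sub_limit)
    return loss - covered

# Hypothetical historical loss events for one client (USD)
historical_losses = [250_000, 1_200_000, 4_800_000]
sub_limit = 500_000    # a common social-engineering sub-limit
deductible = 50_000

for loss in historical_losses:
    kept = retained_loss(loss, sub_limit, deductible)
    print(f"loss ${loss:>9,}: insured retains ${kept:>9,.0f}")
```

Running each past event through the sub-limit in this way shows the client exactly where the policy stops paying, which is far more persuasive than quoting the headline limit.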
A significant portion of cyber losses now stem from third-party vendors and interconnected supply chains. What strategies can businesses implement to monitor these external risks in real time, and what specific data points should be prioritized when evaluating a partner’s cyber risk transfer capabilities?
We have entered an era where your security is only as strong as your weakest vendor, and many of these partners lack robust insurance of their own. Businesses must move toward continuous monitoring of their vendor ecosystem rather than relying on an annual “check-the-box” questionnaire. Priority should be placed on data points such as the vendor’s patch management velocity, their history of multi-factor authentication (MFA) enforcement, and their specific cyber risk transfer limits. Understanding “vendor risk transfer” means knowing exactly how much liability a partner can actually absorb before the loss falls back onto your own balance sheet. Real-time threat intelligence feeds should be integrated so that if a vendor’s credentials appear on the dark web, you can sever the connection before the contagion spreads.
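The monitoring criteria above can be sketched as a simple screening rule. This is a hedged illustration only: the vendor fields, thresholds, and in-memory "dark web feed" are hypothetical stand-ins for what would, in practice, come from a commercial threat-intelligence API and the vendor's insurance certificates.

```python
# Hedged sketch of continuous vendor screening using the data points
# named above: patch velocity, MFA enforcement, risk-transfer limits,
# and dark-web credential exposure. All names and thresholds are
# hypothetical.

from dataclasses import dataclass

@dataclass
class Vendor:
    name: str
    domain: str
    mfa_enforced: bool
    median_patch_days: int      # patch management velocity
    risk_transfer_limit: float  # cyber insurance limit the vendor carries

def flag_vendors(vendors, compromised_domains, our_exposure: float):
    """Return vendors whose connection should be reviewed or severed."""
    flagged = []
    for v in vendors:
        reasons = []
        if v.domain in compromised_domains:
            reasons.append("credentials on dark-web feed")
        if not v.mfa_enforced:
            reasons.append("no MFA enforcement")
        if v.median_patch_days > 30:
            reasons.append("slow patching")
        if v.risk_transfer_limit < our_exposure:
            reasons.append("insurance limit below our exposure")
        if reasons:
            flagged.append((v.name, reasons))
    return flagged
```

The point of the sketch is the cadence: this check runs continuously against a live feed rather than once a year via questionnaire, so a vendor whose credentials surface on the dark web is flagged before the contagion spreads.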
Cyber events unfold in seconds, making static risk assessments nearly obsolete for modern enterprises. Could you walk through the essential components of a proactive incident response plan and describe an instance where specific early decisions radically changed the financial outcome of a data breach?
A proactive incident response plan must include a pre-vetted panel of forensic experts, legal counsel, and communication specialists who can be activated within minutes. The plan should specifically address deepfake-driven fraud, as 37% of fraud experts have already encountered voice deepfakes used to bypass traditional verification. I recall an instance where an organization faced a sophisticated BEC attempt; because their plan mandated an “out-of-band” verbal confirmation for any wire transfer over a certain threshold, they stopped a multi-million-dollar theft in its tracks. Early decisions, such as the immediate isolation of affected segments and the early engagement of law enforcement, can reduce the average cost of a data breach, which IBM now estimates at $4.5 million. Speed is the only currency that matters during an attack; a delay of even one hour can allow attackers to encrypt the entire network.
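The out-of-band control described in that BEC anecdote is simple enough to express directly. A minimal sketch follows; the threshold value and the approval workflow are hypothetical, since the interview does not specify them.

```python
# Hedged sketch of the control from the anecdote above: any wire
# transfer at or over a threshold is held until someone completes an
# out-of-band (e.g., phone) verbal confirmation. The threshold is a
# hypothetical policy value.

OUT_OF_BAND_THRESHOLD = 100_000  # USD; illustrative only

def release_wire(amount: float, verbal_confirmation: bool) -> bool:
    """Approve a transfer only if the out-of-band control is satisfied."""
    if amount >= OUT_OF_BAND_THRESHOLD and not verbal_confirmation:
        # Hold the transfer: email or chat approval alone is exactly
        # what deepfake-driven BEC is designed to spoof.
        return False
    return True
```

The design choice worth noting is that the confirmation must travel over a different channel than the request, which is why a voice deepfake of an email approval chain still fails the check unless the attacker also controls the phone call.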
With global cybercrime costs projected to exceed $10 trillion annually, the financial stakes for small and large businesses have never been higher. Beyond just buying insurance, what specific cybersecurity tools and continuous monitoring practices should brokers be mandating for their clients to remain insurable in this market?
Brokers can no longer just be “order takers”; they must mandate that clients adopt 24/7 continuous monitoring and endpoint detection and response (EDR) tools to even qualify for competitive terms. Static assessments are dead because the threat landscape shifts daily, and insurers are now looking for “cyber hygiene” metrics that prove a client is proactive. This includes mandatory MFA across all entry points, regular deepfake awareness training, and automated vulnerability scanning. Clients who refuse to integrate real-time threat intelligence will find themselves facing exclusions or premiums that are financially unsustainable. In this market, insurability is a reward for those who treat cyber risk as a 365-day operational priority rather than an annual renewal task.
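The "cyber hygiene" controls listed above amount to a pre-renewal checklist a broker might run against a client. Below is a hedged sketch of that idea; the control names are taken from the answer itself, but treating them as a flat required-controls baseline is an illustrative simplification, not any carrier's actual underwriting model.

```python
# Hedged sketch: the insurability baseline described above as a simple
# gap check. Control names mirror the interview; the structure is a
# hypothetical simplification of real underwriting questionnaires.

REQUIRED_CONTROLS = {
    "mfa_all_entry_points": True,
    "edr_deployed": True,
    "continuous_monitoring_24_7": True,
    "automated_vuln_scanning": True,
    "deepfake_awareness_training": True,
}

def insurability_gaps(client_controls: dict) -> list:
    """Return the baseline controls the client has not yet implemented."""
    return [control for control, required in REQUIRED_CONTROLS.items()
            if required and not client_controls.get(control, False)]
```

A broker running this before renewal can show the client precisely which gaps will trigger exclusions or unsustainable premiums, turning "remain insurable" from an abstraction into a punch list.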
What is your forecast for the evolution of cyber risk?
I predict that within the next 24 months, the industry will shift toward “living policies” where premiums are adjusted in real time based on the actual security posture and data flows of the company. As AI-generated phishing and deepfakes become the standard method of entry, the focus of cyber insurance will move from purely financial indemnification to providing a comprehensive “resilience-as-a-service” model. We will see a consolidation of coverage where the lines between professional liability, crime, and cyber become increasingly blurred due to the actions of autonomous AI agents. Ultimately, the winners in this market will be those who view insurance as just one layer of a much larger, technology-driven defense strategy.
