Simon Glairy is a distinguished authority in the insurance and Insurtech sectors, bringing a wealth of knowledge in risk management and AI-integrated risk assessment. As the landscape of digital threats shifts from isolated incidents to cascading systemic events, his expertise provides a critical lens for organizations trying to navigate the intersection of technology and financial protection. Currently focusing on the evolution of cyber resilience, he offers strategic insights into how businesses can stay ahead of increasingly sophisticated and automated threat actors.
The following discussion explores the rising costs of data breaches, which now average $4.7 million, and the shift toward “risk clustering” where failures in shared cloud or SaaS environments create sector-wide disruptions. We delve into the changing tactics of cybercriminals, who are moving away from technical exploits toward identity-driven attacks, and analyze why AI is better understood as an accelerator of existing threats rather than a brand-new category of risk. Finally, we examine how underwriting models are adapting to ensure that the insurance market remains capitalized against large-scale systemic shocks.
Breach costs are now averaging nearly $5 million per incident. How should executives balance immediate financial recovery with long-term security investments, and what specific metrics determine if a cyber insurance policy is actually keeping pace with these escalating losses?
The reality is that with the global average cost of a breach hitting $4.7 million in 2025, a reactive mindset is no longer a viable financial strategy. Executives must view insurance not just as a safety net for immediate recovery, but as a component of a broader resilience framework that rewards long-term security investments. To determine if a policy is keeping pace, leadership should look closely at “business interruption” limits and “contingent business interruption” coverage, as these are the primary drivers of loss when 59% of organizations are facing ransomware attacks. We also look at the relationship between recovery speed and revenue loss; if a policy doesn’t account for the cascading costs of downtime and lost customer trust, it is fundamentally misaligned. The reasoning is simple: the immediate payout for a ransom or data recovery is often dwarfed by the long-term operational paralysis that follows a major event.
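The gap between a business-interruption limit and actual downtime cost can be checked with simple arithmetic. The sketch below uses purely illustrative figures (hourly revenue loss, outage duration, and BI limit are all assumptions, not data from the discussion):

```python
# Toy check: does the BI limit cover a plausible outage?
# All figures below are illustrative assumptions.
revenue_loss_per_hour = 25_000   # assumed revenue at risk per hour of downtime
expected_downtime_hours = 72     # assumed worst-case outage window
bi_limit = 1_000_000             # assumed business-interruption policy limit

projected_bi_loss = revenue_loss_per_hour * expected_downtime_hours
shortfall = max(0, projected_bi_loss - bi_limit)

print(f"Projected BI loss: ${projected_bi_loss:,}")
print(f"Uncovered shortfall: ${shortfall:,}")
```

Run against these assumed numbers, the projected loss ($1.8M) exceeds the limit by $800K, which is exactly the kind of misalignment the answer above describes.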
Recent incidents show that disruptions in retail and other industries often cascade through shared cloud providers and SaaS platforms. What specific steps can organizations take to reduce their “blast radius” during a sector-wide event, and how do you identify a high-risk shared dependency before it fails?
Reducing the blast radius starts with a brutal assessment of your digital architecture to identify where you have “concentrated” dependencies, such as a single identity provider or a common SaaS platform shared by your entire sector. We saw this in the retail sector with brands like Marks & Spencer and Harrods, where shared technology stacks can lead to correlated disruptions. To mitigate this, organizations should implement architectural segmentation and ensure they have offline or out-of-band backups that aren’t tied to the same primary cloud environment. Identifying high-risk dependencies requires looking beyond your own walls and mapping out “N-th party” risks—essentially, asking what happens if your provider’s provider goes down. It is about moving away from the assumption of 100% uptime and instead engineering for “graceful degradation” so a single failure doesn’t take the entire business offline.
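The “N-th party” mapping described above can be sketched as a small graph traversal. This is a minimal illustration, assuming a hand-maintained dependency inventory; every service and provider name here is hypothetical, and a real program would pull this data from an asset-management or SBOM tool:

```python
from collections import defaultdict

# Hypothetical inventory: each service lists its direct providers.
DEPENDENCIES = {
    "checkout": ["payments-api", "identity-provider"],
    "payments-api": ["cloud-a"],
    "identity-provider": ["cloud-a"],
    "inventory": ["erp-saas"],
    "erp-saas": ["cloud-a"],
}

def transitive_providers(service, deps, seen=None):
    """Collect every direct and N-th party provider of a service."""
    seen = set() if seen is None else seen
    for provider in deps.get(service, []):
        if provider not in seen:
            seen.add(provider)
            transitive_providers(provider, deps, seen)
    return seen

def blast_radius(deps, critical_services):
    """For each provider, list which critical services fail with it."""
    impact = defaultdict(set)
    for svc in critical_services:
        for provider in transitive_providers(svc, deps):
            impact[provider].add(svc)
    return {p: sorted(s) for p, s in impact.items()}

radius = blast_radius(DEPENDENCIES, ["checkout", "inventory"])
print(radius)  # "cloud-a" appears under every critical service
```

In this toy inventory, "cloud-a" sits beneath every critical service even though no service depends on it directly, which is precisely the concentrated dependency that only a transitive mapping exposes.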
Threat actors are increasingly pivoting from technical exploits to compromising identity tokens and trusted third-party integrations. What does a disciplined identity governance framework look like in practice, and how should token lifecycle management be integrated into daily security operations to prevent unauthorized access?
We are seeing a fundamental shift where cyber risk is becoming identity-driven; attackers are no longer “breaking in” as much as they are “logging in” using stolen credentials or compromised tokens. A disciplined identity governance framework requires strict, least-privilege access and continuous authentication rather than a one-time login. In practice, token lifecycle management means that session tokens must be short-lived, monitored for geographical anomalies, and instantly revocable the moment a device or user profile shows suspicious behavior. This must be integrated into daily operations through automated monitoring tools that flag whenever a third-party integration—like a marketing tool or a payroll API—requests more data than it needs. When you secure the identity, you remove the primary lever that threat actors use to scale their attacks across your environment.
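The token lifecycle described above, short-lived, anomaly-checked, instantly revocable, can be sketched as follows. This is an illustrative toy, not a production design: real systems would lean on an identity provider or a JWT library, and the TTL and region check here are assumptions chosen for the example:

```python
import secrets
import time

TOKEN_TTL_SECONDS = 900  # short-lived sessions: 15 minutes (assumed policy)

class TokenRegistry:
    """Toy session-token store with TTL and geographic anomaly checks."""

    def __init__(self):
        self._tokens = {}  # token -> (user, issued_at, issued_region)

    def issue(self, user, region):
        token = secrets.token_urlsafe(32)
        self._tokens[token] = (user, time.time(), region)
        return token

    def validate(self, token, region):
        record = self._tokens.get(token)
        if record is None:
            return False
        user, issued_at, issued_region = record
        if time.time() - issued_at > TOKEN_TTL_SECONDS:
            self.revoke(token)       # expired: drop it
            return False
        if region != issued_region:  # geographic anomaly: revoke instantly
            self.revoke(token)
            return False
        return True

    def revoke(self, token):
        self._tokens.pop(token, None)

registry = TokenRegistry()
t = registry.issue("alice", region="eu-west")
ok_normal = registry.validate(t, region="eu-west")    # normal use passes
ok_anomaly = registry.validate(t, region="ap-south")  # anomaly -> revoked
ok_replay = registry.validate(t, region="eu-west")    # already revoked
```

The key behavior is that a single anomalous validation attempt kills the token permanently, so a stolen credential cannot be replayed from the legitimate region afterward.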
While AI is often viewed as an accelerator of existing techniques like phishing rather than a new threat, it significantly increases attack velocity. How should security teams update their monitoring protocols to counter this speed, and what role does structured AI oversight play in protecting internal environments?
AI is an accelerant that allows attackers to operate at a scale and speed that human defenders simply cannot match manually. To counter this, security teams must move toward automated response protocols where the “detection-to-containment” window is measured in seconds, not hours. Structured AI oversight involves using these same technologies internally to monitor for automated phishing campaigns and to identify misconfigurations before they can be exploited. It is important to remember that most successful intrusions still rely on familiar weaknesses like credential theft; AI just helps the attacker find that open door faster. By implementing AI-driven monitoring, you can create a “defensive velocity” that matches the offensive speed of the threat actor, ensuring that reconnaissance attempts are blocked in real-time.
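An automated detection-to-containment loop of the kind described above can be reduced to a rule that acts without a human in the loop. The sketch below is a deliberately minimal illustration; the alert kinds, thresholds, and blocking callback are all hypothetical stand-ins for a real SOAR or firewall integration:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source_ip: str
    kind: str    # e.g. "credential_stuffing", "recon_scan"
    count: int   # matching events in the last one-minute window

# Assumed per-minute thresholds; in practice tuned to baseline traffic.
CONTAINMENT_RULES = {
    "credential_stuffing": 5,   # failed logins per minute
    "recon_scan": 20,           # unique ports probed per minute
}

def contain(alert, block_ip):
    """Auto-contain over-threshold sources in seconds, not hours."""
    threshold = CONTAINMENT_RULES.get(alert.kind)
    if threshold is not None and alert.count >= threshold:
        block_ip(alert.source_ip)  # e.g. push a firewall deny rule
        return True
    return False

blocked = []
contain(Alert("203.0.113.9", "credential_stuffing", 12), blocked.append)
contain(Alert("198.51.100.4", "recon_scan", 3), blocked.append)
print(blocked)  # only the over-threshold source is blocked
```

The point of the example is the shape of the loop, detection event in, containment action out with no ticket queue in between, rather than the specific thresholds.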
Digital ecosystems are becoming more concentrated, shifting the priority from pure prevention to operational resilience. How can a company effectively stress-test its business continuity plan for a systemic failure, and what are the hallmarks of a recovery mechanism that survives a total cloud outage?
To effectively stress-test for a systemic failure, you have to move beyond simple fire drills and simulate “total loss” scenarios where your primary cloud provider or identity management system is completely unavailable for 48 to 72 hours. A recovery mechanism that survives a total cloud outage is characterized by “redundancy of diversity”—meaning your backups aren’t just in a different region of the same cloud, but on a different platform or infrastructure entirely. The hallmarks of such a system include documented manual workarounds for critical business functions and a clearly defined priority list for which services must be restored first to maintain revenue flow. The goal isn’t pure prevention, which is becoming impossible in a concentrated ecosystem, but rather the ability to withstand a systemic shock and return to operations without total data loss.
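The “clearly defined priority list” mentioned above can be generated rather than maintained by hand. The sketch below ranks services by revenue at risk per hour of downtime; the service catalogue and all figures are invented for illustration:

```python
# Hypothetical service catalogue with assumed revenue at risk per hour.
SERVICES = [
    {"name": "order-capture", "revenue_per_hour": 50_000},
    {"name": "analytics", "revenue_per_hour": 500},
    {"name": "payments", "revenue_per_hour": 80_000},
]

def restore_order(services):
    """Restore the highest revenue-at-risk services first."""
    return sorted(services, key=lambda s: s["revenue_per_hour"], reverse=True)

plan = [s["name"] for s in restore_order(SERVICES)]
print(plan)  # ['payments', 'order-capture', 'analytics']
```

A real plan would also weigh dependencies between services (payments may be useless until order-capture is back), but even this naive ranking forces the conversation about which functions actually maintain revenue flow.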
As risk clustering becomes more common, how are underwriting models evolving to address correlated losses across different geographies? What specific data points are now essential for a business to provide during the application process to prove they are prepared for a large-scale systemic event?
Underwriting models are evolving to look at “aggregation risk,” which is how a single event can trigger losses across multiple policyholders simultaneously. At Tokio Marine HCC, for instance, we work with in-house security specialists to update models based on emerging threat intelligence rather than just historical data. To prove readiness, a business must now provide specific data points on its dependency mapping, such as a list of critical SaaS providers and its own internal failover protocols. We also look for evidence of regular stress-testing and “incident response readiness” scores that show the organization has a plan for when—not if—a shared service provider fails. This data allows insurers to diversify their portfolios across different industries and geographies to ensure they remain adequately capitalized.
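On the insurer's side, the aggregation analysis described above amounts to asking what fraction of the book is exposed to any single provider failure. The sketch below is a toy version of that check; the policies and provider tags are hypothetical, standing in for the dependency data collected at application time:

```python
from collections import Counter

# Hypothetical book of cyber policies, each tagged with the critical
# shared providers the insured depends on (from application data).
POLICIES = {
    "retailer-a": {"cloud-a", "idp-x"},
    "retailer-b": {"cloud-a", "erp-saas"},
    "bank-c": {"cloud-b"},
    "hospital-d": {"cloud-a"},
}

def aggregation_exposure(policies):
    """Share of the portfolio exposed to each single provider failure."""
    counts = Counter(p for deps in policies.values() for p in deps)
    total = len(policies)
    return {provider: n / total for provider, n in counts.items()}

exposure = aggregation_exposure(POLICIES)
print(sorted(exposure.items(), key=lambda kv: -kv[1]))
```

In this toy book, one provider failure would hit 75% of policyholders at once, which is the correlated-loss scenario that dependency data is meant to surface before the portfolio is priced.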
What is your forecast for the evolution of systemic cyber risk through 2026?
By 2026, the defining theme of the cyber landscape will be resilience over pure prevention, as the concentration of digital ecosystems makes “zero-risk” an unattainable goal. We will likely see a move toward more standardized and disciplined identity governance as organizations realize that credentials are the new perimeter. Systemic risks will continue to grow as more industries adopt shared AI-driven tools, but the insurance market will adapt by using more sophisticated modeling to price these correlated risks accurately. Ultimately, the companies that thrive will be those that prioritize understanding their critical dependencies and have invested in robust, platform-agnostic recovery mechanisms. The focus will shift from keeping the “bad guys” out to ensuring the business can keep running while the “bad guys” are already inside.
