Simon Glairy is a titan in the insurtech world, renowned for his surgical approach to evaluating how artificial intelligence rewrites the rules of corporate risk. As companies navigate a landscape where digital threats evolve by the hour, Simon’s perspective on the intersection of cybersecurity and insurance provides a vital roadmap for resilience. This discussion explores the dual nature of AI, the psychological gaps in executive preparedness, and the systemic vulnerabilities inherent in our increasingly interconnected global supply chains. We dive into the paradoxical role of AI as both a shield and a weapon, the harsh reality of financial recovery after a breach, and why businesses are feeling more confident even as specific risks like intellectual property theft and technological obsolescence are on the rise.
Roughly one-third of business leaders view AI as a primary threat, while a similar portion sees it as a tool for resilience. How can companies balance these opposing views, and what specific metrics should they use to measure if their AI investments are actually improving security?
The tension is palpable, with 33% of leaders fearing AI disruption while 35% are actively deploying it to bolster their defenses. To balance these views, companies must stop treating AI as a standalone “magic bullet” and instead integrate it into a layered defense strategy that acknowledges its vulnerabilities. Specifically, firms should track the reduction in “dwell time”—the period a hacker remains undetected—as a primary metric for AI success. When you consider that 33% of businesses are already increasing their cybersecurity spending, the goal should be a measurable decrease in manual intervention for routine anomalies. If your AI isn’t surfacing threats faster than your human analysts can, the investment is likely focused on the wrong area of the business.
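To make the dwell-time metric concrete, here is a minimal sketch of how a security team might track it from an incident log. Everything here is hypothetical (the function name, the sample timestamps); the point is only that dwell time is a simple interval between compromise and detection that can be trended over quarters.

```python
from datetime import datetime
from statistics import median

def median_dwell_time_hours(incidents):
    """Median hours between initial compromise and detection,
    given a list of (compromised_at, detected_at) pairs."""
    dwell = [
        (detected - compromised).total_seconds() / 3600
        for compromised, detected in incidents
    ]
    return median(dwell)

# Hypothetical incident log: when each intrusion began vs. when it was caught.
incidents = [
    (datetime(2025, 3, 1, 8, 0), datetime(2025, 3, 4, 8, 0)),    # 72 h
    (datetime(2025, 5, 10, 0, 0), datetime(2025, 5, 11, 12, 0)), # 36 h
    (datetime(2025, 7, 2, 6, 0), datetime(2025, 7, 2, 18, 0)),   # 12 h
]

print(median_dwell_time_hours(incidents))  # median of [72, 36, 12] -> 36.0
```

Recomputing this figure before and after an AI deployment gives a direct answer to whether the investment is surfacing threats faster than human analysts alone.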
While 80% of organizations expect AI to improve their bottom line, the technology introduces complex intellectual property and regulatory risks. What step-by-step strategies should a firm implement to protect its assets, and how do these risks change when AI is integrated into interconnected supply chains?
Even with 80% of executives eyeing a fatter bottom line, the “invisible” costs of IP theft and regulatory fines are a massive hurdle. A firm must first map its data lineage to ensure proprietary information isn’t being leaked into public LLMs or used to train external models without strict consent. In an interconnected supply chain, these risks are amplified because you are essentially inheriting the security posture of your weakest partner. We are seeing data risk evolve from a standalone issue into a systemic threat that can trigger operational and reputational collapses across entire sectors. Leaders need to implement rigorous “trust but verify” protocols for third-party AI integrations to ensure a breach at a vendor doesn’t become a breach of their own core assets.
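The "trust but verify" protocol described above can be sketched as an automated policy audit over configured data flows. This is an illustrative toy, not any real tool: the vendor names, field names, and allowlist are all invented; the idea is simply that every outbound flow to an AI endpoint gets checked against an approved-vendor list and a training-consent flag before proprietary data leaves the firm.

```python
# Hypothetical approved-vendor allowlist for third-party AI integrations.
APPROVED_AI_VENDORS = {"internal-llm.corp.example", "vetted-vendor.example"}

def audit_data_flows(flows):
    """Return policy violations. Each flow is a dict with
    'destination', 'contains_proprietary', and 'training_consent' keys."""
    violations = []
    for flow in flows:
        if flow["destination"] not in APPROVED_AI_VENDORS:
            violations.append((flow["destination"], "unapproved AI endpoint"))
        elif flow["contains_proprietary"] and not flow["training_consent"]:
            violations.append((flow["destination"],
                               "proprietary data without training consent"))
    return violations

flows = [
    {"destination": "internal-llm.corp.example",
     "contains_proprietary": True, "training_consent": True},
    {"destination": "public-llm.example",
     "contains_proprietary": True, "training_consent": False},
]

print(audit_data_flows(flows))
# -> [('public-llm.example', 'unapproved AI endpoint')]
```

In practice such checks would sit in a data-lineage or egress-monitoring layer, but even this toy version captures the rule: an unapproved endpoint is a breach of policy regardless of consent flags.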
A significant majority of executives believe they can fully recover financially from a cyber attack, yet many experts suggest this is a dangerous overestimation. Why does this confidence gap exist, and what are the most overlooked operational costs that typically hit a business during the recovery phase?
There is a jarring disconnect where 83% of U.S. executives believe they can fully bounce back financially from an attack, which is often a case of dangerous optimism. This confidence gap exists because leaders tend to focus on immediate insurance payouts rather than the long-tail operational damage that insurance might not fully cover. They frequently overlook the massive costs associated with technological obsolescence and the agonizingly slow process of rebuilding brand reputation after a public failure. A major breach doesn’t just pause operations for a day; it can erode customer trust for years, making that 83% figure feel more like a gamble than a calculated assessment of preparedness.
As digital interconnectivity increases, data breaches are no longer isolated incidents but systemic threats that spread rapidly across organizations. Can you share an example of how a single disruption can cascade through a supply chain and what practical steps leaders can take to contain such an event?
In our modern ecosystem, a single vulnerability in a shared software provider can paralyze thousands of businesses simultaneously, much like the high-profile incidents we witnessed in 2025. These events prove that disruption now spreads faster than most companies can identify the source, making traditional containment nearly impossible. Leaders must move beyond static firewalls and start planning for “realistic disruption scenarios” that assume a breach is already occurring. Practical containment involves having access to specialist expertise to isolate affected digital segments before the contagion reaches the core of the supply chain. It is no longer about preventing the first hit, but about stopping that hit from becoming a total system failure.
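The cascade-and-containment idea can be illustrated with a small dependency-graph simulation. The supply chain below is entirely made up, as are the organization names; the sketch just shows that a breach at a shared vendor reaches everything downstream unless a segment is isolated, which is the "realistic disruption scenario" planning described above.

```python
from collections import deque

def breach_reach(dependents, source, isolated=frozenset()):
    """Breadth-first spread of a compromise at `source` to every
    organization that transitively depends on it, skipping any
    segment that has been isolated."""
    reached, queue = {source}, deque([source])
    while queue:
        node = queue.popleft()
        for downstream in dependents.get(node, []):
            if downstream not in reached and downstream not in isolated:
                reached.add(downstream)
                queue.append(downstream)
    return reached

# Hypothetical supply chain: each vendor maps to the firms depending on it.
dependents = {
    "shared-software-vendor": ["logistics-co", "retailer"],
    "logistics-co": ["manufacturer"],
    "retailer": [],
    "manufacturer": [],
}

# Without isolation, a vendor breach touches the whole chain.
print(breach_reach(dependents, "shared-software-vendor"))
# Isolating the logistics segment stops the cascade before the manufacturer.
print(breach_reach(dependents, "shared-software-vendor",
                   isolated={"logistics-co"}))
```

Running the scenario with and without an isolated segment makes the payoff of pre-planned containment visible: the question is not whether the vendor is hit, but how far the hit travels.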
Recent trends show that while many businesses believe they are becoming more resilient, levels of unpreparedness for tech obsolescence and IP risks are actually rising. What specific training or specialist expertise should companies prioritize to close this gap, and how should their cybersecurity spending be allocated to reflect these emerging threats?
It is a striking paradox that while overall unpreparedness seemed to decline in 2025, it began creeping back up in 2026 as leaders struggled with the pace of technological change. Companies must prioritize hiring specialists who understand the “long tail” of cyber risk—experts who can audit legacy systems and assess how AI impacts intellectual property exposure. Spending should be shifted away from generic perimeter defense and toward dynamic incident response and deep-dive employee training. With 31% of leaders identifying cyber risk as their top worry for the coming year, the budget must follow the risk, investing in people who can interpret the complex signals that AI-driven attacks leave behind.
What is your forecast for the role of AI in corporate risk management over the next few years?
I expect AI to transition from a “new feature” to the actual nervous system of corporate risk management. We will see a shift where risk is no longer assessed annually but in real-time, as AI identifies systemic vulnerabilities across global supply chains before they are ever exploited. However, this will also lead to a “cyber arms race,” where the 35% of businesses currently using AI for resilience will have to constantly outpace evolving adversarial algorithms. Ultimately, the winners will be those who bridge the gap between their perceived resilience and the brutal, fast-moving reality of a digital-first economy.
