Rising Claims Complexity Becomes a Strategic Capital Constraint

With a career spanning the evolution of global risk, Simon Glairy has established himself as a preeminent voice at the intersection of Insurtech and high-stakes claims management. As carriers navigate an era defined by extreme severity and legal volatility, his insights into AI-driven risk assessment provide a roadmap for maintaining capital efficiency in an increasingly unpredictable landscape. We sat down with Simon to discuss how the shift from frequency-based modeling to complex severity management is redefining the industry’s strategic priorities across the US, Canada, and the UK.

Severity volatility is now outpacing rate adequacy, driven largely by social inflation and litigation funding. How are these factors specifically reshaping your excess-layer pricing strategies, and what specific data points are you prioritizing to predict nuclear verdicts before they hit the books?

The reality is that traditional pricing models are struggling to keep pace because severity has become a more material capital variable than frequency. In response, we are shifting our excess-layer strategies to focus on venue volatility and the sophistication of the plaintiff bar, moving away from simple historical loss ratios. We prioritize AI-enabled severity prediction tools that flag early indicators of litigation funding, which often pushes casualty losses far beyond our original modeled expectations. By analyzing document intelligence from the first notice of loss, we look for specific linguistic markers in legal filings that suggest a high probability of a nuclear verdict. This allows us to reprice the casualty tower not just for known risks, but for the inherent uncertainty of modern social inflation.
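The kind of linguistic-marker flagging Simon describes can be sketched in a few lines. This is a minimal, illustrative example only: the marker list, weights, and threshold are assumptions for demonstration, not the carrier's actual severity model, which would use far richer NLP features.

```python
# Illustrative sketch: flag early litigation-funding / nuclear-verdict
# indicators in claim documents. Markers and weights are assumptions,
# not a production severity model.
LITIGATION_MARKERS = {
    "litigation funding": 3.0,
    "punitive damages": 2.5,
    "letter of protection": 1.5,
    "treble damages": 1.5,
}

def severity_flag_score(document_text: str) -> float:
    """Crude severity score: sum of weights for each marker found in the text."""
    text = document_text.lower()
    return sum(w for marker, w in LITIGATION_MARKERS.items() if marker in text)

def needs_excess_layer_review(document_text: str, threshold: float = 3.0) -> bool:
    """Route the file for repricing review when the score crosses a threshold."""
    return severity_flag_score(document_text) >= threshold
```

In practice the same idea would be driven by a trained document-intelligence model rather than keyword weights, but the routing logic, score a filing at first notice of loss, then trigger excess-layer review above a threshold, is the same shape.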

Secondary perils and climate-driven events often lead to multi-year rebuild timelines and disputes over code upgrades. How can carriers restructure their reinsurance negotiations to account for this volatility, and what specific steps ensure that claims data remains defensible during high-pressure renewals?

Reinsurance is no longer just a capital buffer; it has transformed into a rigorous data and workflow challenge where transparency is the primary currency. To account for the accumulation of mid-sized climate events, carriers should move toward more flexible, multi-year, or structured reinsurance solutions that better reflect the reality of rising labor costs and extended business interruption durations. The first step to ensuring defensible data is to implement granular tracking of “soft costs” and code upgrade disputes, as these are the primary drivers of loss development in property claims. Second, carriers must utilize scenario modeling to show reinsurers how high-tech vehicle repairs and supply chain constraints are being managed in real-time. Finally, providing a clean, digitized audit trail of every claim decision proves to reinsurers that the volatility is understood and controlled, leading to much better terms during the renewal cycle.
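One way to make that "clean, digitized audit trail" tamper-evident is to hash-chain claim decisions, so a reinsurer can verify that no entry was altered after the fact. The sketch below is a minimal illustration under assumed field names; a real system would add signatures, identity, and durable storage.

```python
# Illustrative sketch: append-only, hash-chained audit trail for claim
# decisions. Field names are assumptions for demonstration.
import hashlib
import json
import datetime

class ClaimAuditTrail:
    def __init__(self):
        self.entries = []

    def record(self, claim_id: str, decision: str, detail: dict) -> None:
        """Append a decision entry chained to the hash of the previous entry."""
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {
            "claim_id": claim_id,
            "decision": decision,
            "detail": detail,
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "prev_hash": prev_hash,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)

    def verify(self) -> bool:
        """Recompute every hash; True only if the whole chain is intact."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

The design choice here is that verification requires no trust in the carrier's database: any edit to a past reserve decision breaks the chain from that point forward.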

High-tech vehicle repairs and early attorney involvement in casualty cases are creating significant structural headwinds. What operational changes are necessary to mitigate the rising cost of these portfolios, and how should organizations balance automated medical reviews with the need for high-judgment human expertise?

The most critical operational shift is the integration of automated triage systems that can identify attorney involvement or complex medical needs the moment a file is opened. We are moving toward a model where automated medical reviews handle the heavy lifting of data extraction from thousands of pages of records, which significantly reduces the operational drag on our adjusters. This technology doesn’t replace the human; instead, it frees our high-judgment experts to focus on the present-day challenges of negotiation and litigation strategy. By automating low-value tasks, we ensure our most experienced people are only touching files that require a nuanced understanding of medical causation or legal liability. This balance is the only way to mitigate the structural costs of modern high-tech vehicle repairs and the aggressive tactics of the plaintiff bar.
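The triage logic described above, route high-judgment files to people and low-value files to automation, can be sketched as a simple rule table. The rules, thresholds, and route names below are illustrative assumptions; a production triage system would typically score these signals with a model rather than hard-coded cutoffs.

```python
# Illustrative sketch of first-notice-of-loss triage. Thresholds and
# route names are assumptions for demonstration only.
from dataclasses import dataclass

@dataclass
class FirstNotice:
    attorney_involved: bool
    medical_record_pages: int
    estimated_repair_cost: float

def triage(fnol: FirstNotice) -> str:
    if fnol.attorney_involved:
        # Early attorney involvement demands high-judgment human expertise.
        return "senior_adjuster"
    if fnol.medical_record_pages > 500:
        # Bulk extraction first; a human reviews the distilled output.
        return "automated_medical_review"
    if fnol.estimated_repair_cost > 25_000:
        # Sensor-heavy, high-tech vehicle repairs need a specialist desk.
        return "high_tech_repair_desk"
    # Low-value file: candidate for straight-through processing.
    return "straight_through"
```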

Generative and agentic AI are transitioning from experimental pilots to core operational tools for document intelligence and triage. How do you implement these technologies while meeting tightening regulatory expectations for explainability, and what metrics prove that AI is actually reducing capital friction?

Implementing generative AI requires a shift from experimental “sandboxes” to embedding these tools directly into the daily claims workflow, backed by a robust governance framework that explains every automated flag. To satisfy regulators, we maintain a “human-in-the-loop” requirement for any decision that impacts a claimant’s outcome, ensuring that the AI provides the rationale behind its insights from engineering or legal documents. We measure success through specific metrics like the reduction in reserve uncertainty and the speed of early-stage triage, which directly lowers the capital friction associated with long-tail claims. When we can prove a decrease in loss development volatility over a six-month period, we have tangible evidence that AI is enhancing our capital efficiency. It is about moving beyond the hype to show that AI-driven insights result in more accurate, defensible reserves.
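The "reduction in reserve uncertainty" metric Simon cites can be made concrete by comparing the dispersion of month-over-month reserve revisions before and after AI-assisted triage. The calculation below is one plausible formulation, an assumption on our part, and the sample figures are invented for illustration.

```python
# Illustrative sketch: measure loss-development volatility as the
# dispersion of successive reserve revisions, then compare periods.
# Formulation and figures are assumptions, not the carrier's metric.
import statistics

def revision_volatility(reserves: list[float]) -> float:
    """Std. dev. of month-over-month reserve changes, scaled by mean reserve."""
    changes = [b - a for a, b in zip(reserves, reserves[1:])]
    return statistics.pstdev(changes) / statistics.mean(reserves)

def volatility_reduction(before: list[float], after: list[float]) -> float:
    """Fractional drop in revision volatility (positive = improvement)."""
    v_before = revision_volatility(before)
    v_after = revision_volatility(after)
    return (v_before - v_after) / v_before
```

A sustained positive reduction over a six-month window is the kind of tangible evidence the interview refers to: reserves that move less erratically carry less capital friction.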

Specialty buyers increasingly view a carrier’s complex-claims capability as a primary differentiator rather than just a back-office function. How does superior claims performance directly influence your underwriting appetite, and what practical steps can teams take to feed large-loss learnings back into product design?

In the E&S and specialty markets, sophisticated buyers now choose capacity based on how we handle their worst days, not just the premium we charge. Superior claims performance acts as a front-end underwriting lever; when we manage severity effectively, it gives our underwriters the confidence to be more aggressive in sectors that others might avoid due to volatility. To put this into practice, we have established a formal feedback loop where our complex-claims teams meet monthly with product designers to review “lessons learned” from recent large losses. We analyze whether the loss was a result of a policy wording gap, a specific geographic trend, or an emerging secondary peril that wasn’t properly priced. This integration ensures that our underwriting appetite is constantly refined by the real-world data coming out of our claims department.

Moving from traditional claims handling toward a model of “risk architecture” requires a multidisciplinary approach. How do you integrate engineering, data science, and behavioral expertise into a cohesive team, and what does the step-by-step workflow look like when managing a cross-border loss?

Becoming a “risk architect” means breaking down the silos that have traditionally isolated claims adjusters from technical experts. For a cross-border loss, our workflow begins with an immediate multidisciplinary huddle involving a forensic engineer to assess physical damage, a data scientist to model potential loss development, and a behavioral expert to advise on local litigation culture. The second step involves using AI to synthesize these various perspectives into a single “live” document that tracks the claim across the US, Canada, and the UK simultaneously. Third, we apply consistent valuation frameworks to ensure that a loss in London is handled with the same technical rigor as one in New York. This cohesive approach allows us to treat complex claims as a strategic, data-rich discipline rather than a reactive back-office function.
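The single “live” document tracking a cross-border claim could be modeled as a per-jurisdiction view that each discipline updates independently. This is a minimal structural sketch under assumed field names, not a description of any actual system.

```python
# Illustrative sketch: a "live" cross-border claim document merging
# engineering, data-science, and behavioral inputs per jurisdiction.
# Structure and field names are assumptions for demonstration.
from dataclasses import dataclass, field

@dataclass
class JurisdictionView:
    country: str
    engineering_assessment: str = "pending"   # forensic engineer's input
    modeled_loss_range: tuple = (0.0, 0.0)    # data scientist's input
    litigation_culture_note: str = "pending"  # behavioral expert's input

@dataclass
class LiveClaimDocument:
    claim_id: str
    views: dict = field(default_factory=dict)

    def update(self, country: str, **fields) -> None:
        """Record an expert's input for one jurisdiction."""
        view = self.views.setdefault(country, JurisdictionView(country))
        for name, value in fields.items():
            setattr(view, name, value)

    def open_items(self) -> list[str]:
        """Jurisdictions still awaiting expert input."""
        return [v.country for v in self.views.values()
                if "pending" in (v.engineering_assessment,
                                 v.litigation_culture_note)]
```

The point of the structure is the consistency the interview stresses: every jurisdiction carries the same fields, so a London loss is assessed against the same template as a New York one.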

What is your forecast for claims complexity?

I forecast that claims complexity will become the single most significant capital constraint for the insurance industry over the next decade. As we look toward 2030, the organizations that will dominate the market are those that stop viewing claims as a cost center and start treating them as a sophisticated data engine. We will see a widening gap between “technical leaders” who use agentic AI to manage severity and “laggards” who are overwhelmed by the rising tide of social inflation and climate-driven volatility. Ultimately, the ability to architect risk through deep technical expertise and AI integration will be the primary factor that determines which carriers remain solvent and profitable in an increasingly complex global landscape.
