Simon Glairy is a distinguished leader in the insurance technology sector, renowned for his strategic vision in integrating artificial intelligence into risk management frameworks. With a background that spans decades of industry evolution, he specializes in the transition from traditional automation to agentic systems that offer transaction-level autonomy. His work focuses on the delicate balance between technical scale and human accountability, ensuring that as insurers embrace real-time data and AI-driven decisioning, they maintain the capital discipline and trust that define the industry.
In this conversation, we explore the shifting landscape of modern risk and the mechanics of deploying autonomous agents within a highly regulated environment. We delve into the transformation of underwriting and claims through parallel processing and real-time data synthesis, the rigorous requirements for auditability in AI-driven decisions, and the evolving role of human experts who must now pivot from routine tasks to high-level governance and complex judgment.
Modern risk environments involve rapidly escalating climate events and shifting cyber exposures. How can dynamic control frameworks replace static rulebooks to monitor deviation patterns in real time, and what specific triggers should signal a transition from autonomous agent logic to secondary human oversight?
The reality of today’s volatility means that a static rulebook is obsolete the moment it is printed; we need frameworks that act more like a living nervous system. By implementing dynamic control frameworks, we can monitor transaction-level data against real-time global shifts, such as a sudden cluster of cyber incidents or a rapidly moving weather front. The system is programmed to flag deviation patterns—small anomalies that, when aggregated, suggest a shift in the underlying risk landscape. We set specific thresholds for this transition: any decision involving a claim denial, an allegation of fraud, or a breach of pre-defined capital limits immediately triggers a “stop-loss” for the AI. This ensures that while the machine handles 90% of routine processing, the human expert is called in exactly when the ethical or financial stakes exceed the agent’s calibrated autonomy.
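The stop-loss triggers described above can be sketched as a simple gate in front of the agent. This is a minimal illustration, not a production control framework; the `Decision` fields and the capital limit are assumptions for the example.

```python
from dataclasses import dataclass

# Illustrative capital limit per autonomous transaction; a real threshold
# would come from the insurer's capital framework, not a constant.
CAPITAL_LIMIT = 50_000.0

@dataclass
class Decision:
    action: str          # e.g. "approve" or "deny"
    fraud_alleged: bool  # any allegation of fraud in the file
    exposure: float      # capital at stake in this transaction

def requires_human_review(d: Decision) -> bool:
    """Hard-stop triggers that hand control from the agent to a human."""
    return (
        d.action == "deny"             # any claim denial
        or d.fraud_alleged             # any allegation of fraud
        or d.exposure > CAPITAL_LIMIT  # breach of pre-defined capital limits
    )
```

Routine approvals under the limit pass the gate and flow through autonomously; any denial, fraud flag, or limit breach returns `True` and routes the file to a human.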
While autonomous systems can handle routine tasks like address changes via plain-language emails, material changes to coverage still require validation. What logic should determine when a transaction impacts pricing discipline, and how can these workflows be structured to ensure routine adjustments flow straight through without human intervention?
The logic is built on a clear distinction between administrative updates and premium-bearing risks. For instance, if a customer sends a plain-language email to change a beneficiary’s address, the agent identifies this as a non-premium-bearing change and executes it instantly, providing a seamless experience. However, the workflow is structured with an embedded validation layer that scans for “materiality”—if that address change moves a property into a high-risk flood zone or if a requested limit increase alters the risk-adjusted economics, the “straight-through” path is blocked. This creates a tiered system where routine adjustments flow without friction, but any request with capital or risk implications is automatically routed for internal human review to preserve pricing discipline.
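The tiered routing could look like the sketch below. The flood-zone codes and request fields are hypothetical; a real system would query a geodata service rather than a hard-coded set.

```python
# Hypothetical high-risk flood-zone codes (an assumption for illustration).
HIGH_RISK_FLOOD_ZONES = {"ZONE_A", "ZONE_V"}

def route(request: dict) -> str:
    """Return 'straight_through' or 'human_review' for a change request."""
    if request["type"] == "address_change":
        # Administrative unless the move changes the risk profile.
        if request.get("new_flood_zone") in HIGH_RISK_FLOOD_ZONES:
            return "human_review"  # premium-bearing: preserve pricing discipline
        return "straight_through"
    if request["type"] == "limit_increase":
        return "human_review"      # always alters risk-adjusted economics
    return "human_review"          # default to caution for unrecognized types
```

Note the asymmetry: the straight-through path is an allowlist of known non-premium-bearing changes, and anything unclassified falls back to human review.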
Agentic AI allows underwriting to evolve from a periodic gatekeeping function into active portfolio stewardship. How do you integrate up-to-the-minute global data to recalibrate risk appetite across different geographies, and what metrics best demonstrate the resulting improvements in capital deployment and premium growth?
We are moving away from the “annual renewal” mindset toward a model of continuous recalibration. By connecting agents to third-party data feeds and internal claims monitors, we can see loss experiences as they emerge—perhaps a 15% uptick in logistics claims in a specific European corridor—and adjust our risk appetite in that geography by noon the same day. Underwriters become stewards who oversee these shifts, using metrics like the loss ratio trend and the speed of capital redeployment to measure success. When we can tighten or loosen capacity based on morning data rather than last year’s report, we see a direct impact on premium growth because we are writing more of the “right” risk at the “right” price in real time.
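A same-day recalibration rule of the kind described (react to a 15% claims uptick in a corridor) might be sketched as follows. The linear tightening is an illustrative assumption; a production model would price the shift against the portfolio's loss ratio trend rather than cut capacity proportionally.

```python
def recalibrated_capacity(baseline_freq: float, current_freq: float,
                          capacity: float, uptick_threshold: float = 0.15) -> float:
    """Tighten writing capacity in a geography when claims frequency
    rises above the uptick threshold versus the baseline period.
    The proportional cut is an illustrative assumption.
    """
    uptick = (current_freq - baseline_freq) / baseline_freq
    if uptick >= uptick_threshold:
        return capacity * (1.0 - uptick)  # e.g. a 15% uptick trims capacity 15%
    return capacity
```

Because the comparison runs against this morning's feed rather than last year's report, the same function can loosen nothing and still add value: below the threshold, capacity is simply left unchanged.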
In claims processing, steps that were once sequential can now occur in parallel through the synthesis of images and historical precedents. How does this orchestration accelerate determinations for low-value claims, and what specific guardrails prevent the system from overstepping when handling sensitive issues like fraud allegations or claim denials?
Traditionally, a claim moved like a baton in a relay race, but agentic AI allows us to run those segments simultaneously: the system can analyze damage photos, verify policy terms, and check historical precedents all at once. For low-value, high-volume claims, this reduces the settlement time from days to minutes, which is a massive win for customer satisfaction. To prevent overstepping, we build “hard-stop” guardrails into the orchestration layer—if the AI detects a 0.5% probability of a fraudulent pattern or if the logic leads toward a denial, the agent is restricted from making a final determination. It can prepare the file and summarize the evidence, but the high-consequence decision remains a human-led action to ensure empathy and legal compliance.
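The parallel orchestration plus hard-stop guardrails can be sketched with standard-library concurrency. The three stub services and their return shapes are assumptions standing in for real image-analysis, policy, and precedent systems.

```python
from concurrent.futures import ThreadPoolExecutor

# Stub services; names, signatures, and return shapes are assumptions.
def analyze_photos(claim):    return {"damage_estimate": claim["declared_loss"]}
def verify_policy(claim):     return {"covered": True}
def check_precedents(claim):  return {"fraud_probability": claim.get("fraud_score", 0.0)}

FRAUD_HARD_STOP = 0.005  # the 0.5% threshold mentioned above

def process_claim(claim: dict) -> dict:
    # Run the once-sequential steps simultaneously.
    with ThreadPoolExecutor() as pool:
        f_photos = pool.submit(analyze_photos, claim)
        f_policy = pool.submit(verify_policy, claim)
        f_precedents = pool.submit(check_precedents, claim)
        photos, policy, precedents = (
            f_photos.result(), f_policy.result(), f_precedents.result()
        )
    # Hard-stop guardrails: the agent prepares and summarizes the file,
    # but may not finalize a denial or a suspected-fraud outcome.
    if precedents["fraud_probability"] >= FRAUD_HARD_STOP or not policy["covered"]:
        return {"status": "escalated_to_human",
                "file": {**photos, **policy, **precedents}}
    return {"status": "settled", "payout": photos["damage_estimate"]}
```

The key structural point is that the guardrail sits in the orchestration layer, after the parallel fan-out: the checks always run, but the final determination is withheld from the agent whenever a trigger fires.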
Auditability requires capturing the full reasoning pathway of an AI-driven decision, including model versions and retrieval sources. What step-by-step processes are necessary to reconstruct a decision chain for an auditor, and how do you manage the trade-offs between system autonomy and the need for total traceability?
Trust in insurance is non-negotiable, so we maintain a rigorous “black box” recorder for every autonomous action. The process involves logging the exact model version used, the specific prompts provided, the third-party data feeds ingested, and the specific policy rules applied at that microsecond. If an auditor asks about a decision made on February 21, 2026, we don’t just show the result; we replay the entire reasoning pathway step-by-step to show why the agent arrived at that conclusion. We manage the trade-off by accepting that traceability might slightly increase computational costs, but it is a necessary investment to ensure that autonomy never comes at the expense of accountability.
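A minimal sketch of that “black box” recorder, assuming an append-only list as the store; the record fields mirror the elements named above, while the function names and store are illustrative.

```python
import json
from datetime import datetime, timezone

def log_decision(store: list, decision_id: str, model_version: str,
                 prompt: str, data_feeds: list, rules: list, outcome: str) -> dict:
    """Append one step of the reasoning pathway to an append-only audit store."""
    record = {
        "decision_id": decision_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # exact model used
        "prompt": prompt,                # exact prompt provided
        "data_feeds": data_feeds,        # third-party sources ingested
        "rules_applied": rules,          # policy rules in force at that instant
        "outcome": outcome,
    }
    store.append(json.dumps(record, sort_keys=True))  # immutable serialized form
    return record

def replay(store: list, decision_id: str) -> list:
    """Reconstruct the full decision chain for an auditor, in logged order."""
    return [r for r in map(json.loads, store) if r["decision_id"] == decision_id]
```

Serializing each record at write time, rather than keeping mutable objects, is what makes the later replay trustworthy: the auditor sees the pathway exactly as it was captured.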
As AI assumes more routine responsibilities, human roles are shifting toward governance and complex exceptions. How should insurance firms retrain adjusters and underwriters for these judgment-intensive tasks, and what does a “four-eyes principle” look like when applied to a high-consequence autonomous decision?
We are shifting our workforce from being “processors” to “governors.” Retraining involves teaching underwriters how to manage a fleet of digital agents and how to interpret the high-level pattern recognition that AI provides. The “four-eyes principle” in this new world means that for particularly sensitive or high-value cases, the AI provides the initial deep-dive analysis, and then two human experts—or one expert and a secondary validation agent—must sign off on the final intent. This ensures that while technology delivers the scale, the human hand stays firmly on the tiller for the 5% of cases that define our brand’s reputation and financial stability.
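The four-eyes rule described above reduces to a small predicate. The signer encoding is an assumption; the requirement of at least one human reflects the point that the human hand stays on the tiller.

```python
def four_eyes_approved(ai_analysis_done: bool, signers: list) -> bool:
    """signers: (name, kind) pairs, kind being "human" or "validator_agent".
    Final intent requires the AI's initial deep-dive analysis, two distinct
    signers, and at least one human among them (assumption: a pair of
    validator agents alone does not satisfy the principle)."""
    names = {name for name, _ in signers}
    humans = [name for name, kind in signers if kind == "human"]
    return ai_analysis_done and len(names) >= 2 and len(humans) >= 1
```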
Connecting claims, underwriting, and product design through a coordination layer allows for continuous cross-functional feedback. Can you provide an example of how real-time loss experience data should refine pricing insights, and what practical steps ensure these agents stay aligned with the enterprise’s broader strategic intent?
The coordination layer acts as a feedback loop that bridges the traditional silos of the insurance office. For example, if claims agents detect a recurring failure in a specific new model of industrial sensor, that data is instantly fed back to the underwriting agents to adjust pricing for any policy covering that equipment. To keep these agents aligned with strategic intent, we use “advanced injunctions”—higher-level mission parameters that define the enterprise’s risk-reward boundaries. This ensures that while individual agents are optimizing for their specific tasks, they are all pulling in the direction of the company’s broader three-year or five-year financial goals.
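The mission-parameter bounding can be sketched as a clamp on what any single agent may do with a cross-functional signal. The loading cap and the linear reaction to the claims signal are illustrative assumptions.

```python
# Mission parameter: an enterprise-level risk-reward boundary that bounds
# every agent's pricing action (the value is an illustrative assumption).
MAX_PRICING_LOADING = 1.50   # no agent may load a price beyond +50%

def apply_loss_signal(base_rate: float, loading: float, claims_uplift: float) -> float:
    """Underwriting-agent reaction to a claims-side signal (e.g. a recurring
    failure in a specific sensor model): raise the pricing loading in
    proportion to the signal, clamped to the mission parameter."""
    proposed = loading * (1.0 + claims_uplift)
    return base_rate * min(proposed, MAX_PRICING_LOADING)
```

The local optimization (react to the loss signal) and the strategic boundary (the clamp) live in the same function on purpose: agents can move fast, but never outside the enterprise's risk-reward envelope.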
What is your forecast for Agentic AI in the insurance industry?
I believe that within the next few years, the most successful insurers will not look like technology companies that happen to sell insurance, but rather like specialized insurance powerhouses that use AI-orchestration to achieve unprecedented precision. We will see a definitive shift from being passive payers of loss to becoming real-time orchestrators of risk mitigation and loss prevention. The technology will handle the heavy lifting of data and routine execution, but the core of the business will remain human-centric. My forecast is that this transition will lead to a significant productivity uplift and a much more resilient global insurance market, where humans and agents work in tandem to navigate an increasingly complex world.
