Nationwide Integrates AI to Transform Insurance Operations

Simon Glairy stands at the forefront of the digital revolution within the insurance sector, bringing years of expertise in risk management and AI-driven assessment. As the industry shifts from traditional manual processing to sophisticated, technology-led frameworks, Simon has been instrumental in navigating the complexities of integrating large-scale AI initiatives. His work focuses on the intersection of human judgment and machine efficiency, ensuring that digital transformation serves as a catalyst for growth rather than a replacement for professional expertise. Today, he shares his perspective on how leading organizations are reshaping everything from software development to agribusiness underwriting through the lens of artificial intelligence.

You balance transformation across daily tasks, software engineering, and core insurance processes. How do you prioritize resources between these areas, and what specific metrics indicate that information-packaging tools are successfully building associate confidence? Please provide a step-by-step example of how these tools change a typical workday.

Prioritization is structured around three distinct pillars: supporting daily associate work, reimagining software engineering, and fundamentally reshaping the business of insurance itself. We measure success by how effectively tools like Glean pull together data from internal repositories and external sources to package insights that were previously scattered. When an associate starts their day, instead of manually searching through siloed databases, the AI-enabled tool automatically surfaces relevant documentation and history, reducing the cognitive load and building immediate confidence. This step-by-step shift moves the worker from a “searcher” to a “reviewer” within the first hour of their shift, allowing them to focus on high-value decision-making rather than data retrieval. By exposing teams to these tools early, we see a measurable increase in familiarity, ensuring they understand how to leverage AI as a primary enabler for their specialized tasks.
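The morning workflow described above can be sketched in a few lines: rather than an associate querying each silo by hand, a job gathers every source's records for their assigned cases into one packet for review. The source names and record shapes below are illustrative assumptions, not Nationwide's actual systems.

```python
from collections import defaultdict

# Stand-ins for siloed internal repositories (hypothetical data).
POLICY_NOTES = {"case-42": ["2023 renewal note", "agent call summary"]}
CLAIMS_HISTORY = {"case-42": ["hail claim, 2022"]}
EXTERNAL_FILINGS = {"case-42": ["state vehicle registration record"]}

SOURCES = {
    "policy_notes": POLICY_NOTES,
    "claims_history": CLAIMS_HISTORY,
    "external_filings": EXTERNAL_FILINGS,
}

def build_morning_packet(case_ids):
    """Collect each source's records for every case into one packet,
    so the associate starts the day reviewing instead of searching."""
    packet = defaultdict(dict)
    for case_id in case_ids:
        for source_name, store in SOURCES.items():
            packet[case_id][source_name] = store.get(case_id, [])
    return dict(packet)

packet = build_morning_packet(["case-42"])
print(packet["case-42"]["claims_history"])
# → ['hail claim, 2022']
```

In practice a tool like Glean would do this aggregation across live repositories; the point of the sketch is only the shape of the shift, from many manual lookups to one pre-assembled review packet.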

Implementing an AI-enabled version of the software development life cycle impacts everything from requirements to testing. How has this shift influenced the speed of deployment, and what protocols ensure that software engineers are using these tools to augment their judgment rather than simply automating code generation?

We have moved toward what we call the “AI DLC,” an AI-enabled version of the traditional software development lifecycle that spans every phase from requirements gathering through final testing. By integrating tools like GitHub Copilot and Playwright, we have seen a significant acceleration in our development phases, as AI handles the more repetitive elements of coding and testing scripts. To ensure these tools augment judgment rather than replace it, our protocols require engineers to treat AI-generated output as a draft that must undergo rigorous human review. We focus on reimagining the development team’s operation so that the technology serves to sharpen their skills, keeping the engineer central to the architectural integrity and logic of the software. This approach ensures that we are not just generating code faster, but building more resilient and sophisticated insurance platforms.

Commercial policies often involve complex, unstructured data and diverse property portfolios across multiple states. How does using AI to identify documentation gaps and summarize public information improve the responsiveness of underwriters, and what specific manual hurdles have been the most difficult to overcome during this transition?

In the world of agribusiness underwriting, the sheer volume of unstructured documentation regarding multiple vehicles and buildings across various states has historically been a massive manual hurdle. AI has transformed this by acting as a digital aide that sifts through public information and identifies missing fields in submissions before an underwriter even opens the file. This allows underwriters to engage in proactive outreach to agents much earlier in the process, significantly reducing the overall cycle time for a policy. One of the hardest hurdles was the time-consuming research of public company records, but by deploying minimum viable products for AI-assisted research, we have effectively removed that manual drag. The result is a more responsive underwriting department that can process a higher volume of submissions with much greater precision and speed.
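The gap-analysis step described above reduces, at its core, to checking a submission against a required-field schema before an underwriter opens the file. The field names here are illustrative assumptions, not an actual carrier schema:

```python
# Hypothetical required fields for an agribusiness submission.
REQUIRED_FIELDS = {
    "insured_name",
    "property_state",
    "building_construction_type",
    "vehicle_vin_list",
    "prior_loss_history",
}

def find_documentation_gaps(submission: dict) -> list[str]:
    """Return required fields that are absent or empty in a submission,
    so agents can be contacted before the file reaches an underwriter."""
    return sorted(
        field for field in REQUIRED_FIELDS
        if not submission.get(field)
    )

submission = {
    "insured_name": "Example Farms LLC",
    "property_state": "OH",
    "vehicle_vin_list": ["1FTSW21P34ED00000"],
}
print(find_documentation_gaps(submission))
# → ['building_construction_type', 'prior_loss_history']
```

The AI-assisted version of this would extract the fields from unstructured documents and public records first; the deterministic check at the end looks much like this sketch.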

Placing a senior underwriter in a leadership role for technology adoption helps bridge the gap between technical potential and practical application. What are the key advantages of this peer-led strategy over traditional training, and how do you handle the “hearts and minds” aspect of organizational change?

The primary advantage of a peer-led strategy is the immediate credibility it brings to the “hearts and minds” aspect of transformation, as associates are more likely to trust a colleague who understands their daily frustrations. By appointing a senior underwriter as an AI champion, we ensure that technical capabilities are translated into practical, floor-level workflows rather than remaining abstract concepts from the IT department. This champion works alongside a dedicated team of technologists and innovation experts to show how AI removes the “grunt work” rather than replacing the underwriter’s professional judgment. This strategy turns skepticism into enthusiasm because the change is led by someone who knows exactly what it takes to write a complex policy. It bridges the gap between high-level C-suite sponsorship and front-line execution, ensuring that the technology is actually adopted rather than ignored.

Moving toward a predict-and-prevent model involves integrating smart-home technology and real-time weather data to mitigate risk. What are the primary technical trade-offs when shifting from reactive claims to proactive prevention, and how does this evolution affect long-term customer loyalty and operational efficiency?

Shifting to a predict-and-prevent model requires a technical trade-off where we move away from simple historical data analysis toward managing real-time data streams from smart-home devices and weather alerts. This requires a much more robust infrastructure to handle constant data inputs, but the payoff is a dramatic improvement in operational efficiency by stopping losses before they happen. For the customer, this evolution changes the relationship from a transactional one based on reimbursement to a partnership based on protection and safety. We find that preventing a fire or a flood through early AI detection creates much stronger long-term loyalty than simply paying a claim after the damage is done. Ultimately, this proactive stance reduces the overall volume of claims, allowing the organization to focus resources on more complex risk management and business growth.
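A minimal sketch of the predict-and-prevent rule layer, assuming simple per-sensor thresholds: readings from a smart-home stream are checked as they arrive, and a breach triggers an alert to the policyholder before a loss occurs. Device names, units, and thresholds are illustrative assumptions, not an actual carrier integration.

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    device_id: str
    kind: str      # e.g. "water_flow", "smoke"
    value: float   # sensor-specific units

# Illustrative thresholds above which the policyholder is alerted.
ALERT_THRESHOLDS = {
    "water_flow": 10.0,  # gallons/minute: possible burst pipe
    "smoke": 0.5,        # obscuration level: possible fire
}

def proactive_alerts(stream):
    """Yield an alert for each reading that breaches its threshold."""
    for reading in stream:
        limit = ALERT_THRESHOLDS.get(reading.kind)
        if limit is not None and reading.value > limit:
            yield (f"ALERT {reading.device_id}: "
                   f"{reading.kind}={reading.value} exceeds {limit}")

readings = [
    SensorReading("basement-1", "water_flow", 2.0),
    SensorReading("basement-1", "water_flow", 14.5),
    SensorReading("kitchen-2", "smoke", 0.1),
]
for alert in proactive_alerts(readings):
    print(alert)
```

The trade-off mentioned above lives around this loop: historical batch analysis can run overnight, while a rule like this must sit on an always-on pipeline that ingests and evaluates readings continuously.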

What is your forecast for the role of AI in the insurance industry?

I believe the industry is moving toward a future where AI becomes the standard operating layer that increases total capacity across every vertical, from personal lines to complex agribusiness. We will see a shift where underwriting decisions remain human-led, but the manual effort surrounding those decisions will be virtually eliminated through AI-driven summarization and gap analysis. This will enable companies to grow their business significantly without exponentially increasing their headcount, as associates become more empowered and efficient. Eventually, the “predict-and-prevent” model will become the industry norm, making insurance a proactive service that is integrated into the very fabric of smart technology and real-time risk mitigation.
