The insurtech landscape is undergoing a profound transformation, moving away from the “growth at any cost” mentality that defined the last decade toward a more disciplined, value-oriented era. Simon Glairy, a distinguished expert in risk management and AI-driven assessment, joins us to discuss how this shift is redefining the relationship between technology and traditional insurance principles. In this conversation, we explore the evolving funding environment, the necessity of building defensible moats through proprietary data, and the strategic divide between companies with sustainable unit economics and those struggling to adapt. Glairy provides a deep dive into how the industry is navigating the rapid commoditization of AI models and the increasing friction caused by diverging international regulations, offering a roadmap for what it takes to succeed in today’s unforgiving market.
The shift toward rewarding revenue and clear paths to profitability marks a departure from previous growth-heavy strategies. How are stricter diligence standards changing how startups prepare for funding rounds, and what specific operational metrics now carry the most weight?
The era of “fake it until you make it” has been replaced by a much more disciplined and honest dialogue between founders and investors. Startups now enter funding rounds with a level of granular preparation that was frankly missing a few years ago, focusing on durability rather than top-line expansion alone. I am seeing founders spend months refining their path to profitability, knowing that investors no longer reward growth that comes at the expense of a sustainable business model. The metrics that carry the most weight today are those that prove a company can survive a multi-year exit timeline, such as high-quality recurring revenue and a clear reduction in burn rate. It is no longer enough to show a flashy user interface; you must demonstrate that your unit economics can withstand the pressure of a tightening market.
Traditional distribution relationships and regulatory hurdles make insurance particularly difficult to disrupt with software alone. How can new entrants bridge the gap between pure technology and actuarial complexity, and what role does legacy data play in proving a tool improves a combined ratio?
Insurance is an unforgiving business because it is built on decades of distribution relationships and a labyrinth of regulatory requirements that software alone cannot bypass. New entrants are finding that they must respect the actuarial complexity of the industry by integrating their tools into established workflows rather than trying to replace them entirely. The real winners are those who can prove their technology actually compresses the combined ratio, which is the ultimate yardstick for any insurance operation. Legacy data is the bridge here; it allows a company to show that its tool can predict claims more accurately or process them more efficiently than traditional methods. Without this historical context, a software solution is just a shiny toy that fails to address the underlying risk that defines the entire sector.
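The combined ratio Glairy treats as the ultimate yardstick can be made concrete with a quick sketch. This is a minimal illustration, not anything from the interview: the function name and all figures are hypothetical, chosen only to show how a tool that compresses losses and expenses moves a book from an underwriting loss to an underwriting profit.

```python
# Minimal sketch (illustrative figures only): the combined ratio is
# (incurred losses + expenses) / earned premiums. Below 1.0 (100%),
# the book produces an underwriting profit; above it, a loss.

def combined_ratio(incurred_losses: float, expenses: float,
                   earned_premiums: float) -> float:
    """Return the combined ratio as a decimal (e.g. 0.95 = 95%)."""
    return (incurred_losses + expenses) / earned_premiums

# Hypothetical before/after comparison for a tool that predicts and
# processes claims more efficiently; numbers are invented.
before = combined_ratio(incurred_losses=68_000_000,
                        expenses=34_000_000,
                        earned_premiums=100_000_000)  # 1.02: unprofitable
after = combined_ratio(incurred_losses=63_000_000,
                       expenses=32_000_000,
                       earned_premiums=100_000_000)   # 0.95: profitable

print(f"before: {before:.2%}, after: {after:.2%}")
```

This is why legacy loss data matters: without historical losses, expenses, and premiums to plug into the "before" side, there is no credible way to show the "after."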
Foundation models evolve so quickly that specific AI features can become commoditized in months. How can a carrier leverage proprietary loss data or structured workflows to ensure their value remains defensible?
The pace of change in artificial intelligence is so rapid that a capability that felt like a competitive advantage six months ago can be completely commoditized by a new model release today. To build a moat that actually lasts, carriers must look beyond the foundation models and focus on the unique, proprietary data they have collected over years of operations. For example, a carrier that has embedded structured loss data from thousands of specific claims into their underwriting engine creates a defensive layer that a general-purpose LLM cannot easily replicate. The goal is to create a compounding advantage where the product gets meaningfully better as more data runs through it, regardless of which underlying AI model is being used. This shift from “we built an AI tool” to “we have a proprietary data loop” is what separates a thin narrative from a truly defensible business.
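The "proprietary data loop" idea above can be sketched in a few lines. Everything here is an illustrative assumption, not Glairy's method: a toy estimator whose claim-frequency estimate sharpens as more of a carrier's own loss data flows through it, independent of whichever foundation model sits elsewhere in the stack.

```python
# Minimal sketch of a compounding data advantage (all names hypothetical):
# a claims-frequency estimate that improves as proprietary loss data
# accumulates, regardless of the underlying AI model in use.

class LossDataLoop:
    def __init__(self) -> None:
        self.policies = 0  # policy-periods observed
        self.claims = 0    # claims observed

    def record(self, had_claim: bool) -> None:
        """Feed one policy-period of the carrier's own loss data in."""
        self.policies += 1
        self.claims += int(had_claim)

    def claim_frequency(self) -> float:
        """Laplace-smoothed frequency; stays cautious while data is scarce."""
        return (self.claims + 1) / (self.policies + 2)

loop = LossDataLoop()
for outcome in [False, False, True, False, False, False, True, False]:
    loop.record(outcome)
print(f"estimated claim frequency: {loop.claim_frequency():.2f}")
```

The defensible part is not the arithmetic, which any competitor can copy; it is the stream of recorded outcomes, which only the carrier that wrote the policies possesses.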
The current M&A environment highlights a divide between startups with strong unit economics and those struggling with high burn rates. What specific strategic assets are buyers prioritizing today, and how does the decision to buy versus build impact integration success?
We are currently witnessing a sharp divide between the “haves and have-nots” in the insurtech space, where strategic buyers are becoming incredibly selective about their acquisitions. Today’s buyers are prioritizing assets that offer immediate strategic value, such as specialized underwriting talent or unique distribution channels that would take years to build from scratch. The decision to buy rather than build is often driven by the need for speed, especially when a legacy carrier feels they are falling behind on AI capabilities. However, integration success now depends on a more sober valuation environment and a disciplined approach to how these new technologies fit into existing long-term value creation. Companies that were burning through capital without a clear plan are finding themselves looking for a place to land, often at a significant discount compared to those with proven unit economics.
Divergence in AI governance and data privacy between the U.S. and Europe creates friction for cross-border operations. How should companies decide where to house their data and build their product architecture?
The regulatory gap between the U.S. and the EU is creating a complex compliance map for insurtechs that want to operate globally, forcing them to make hard choices about their product architecture very early on. Companies are having to decide which regulatory framework will shape their primary build, as the rules surrounding AI governance and data privacy are moving in meaningfully different directions. It is no longer feasible to assume that a product built for the American market will automatically comply with European standards, or vice versa. Practical risk management now involves housing data in local jurisdictions and building modular architectures that can be adjusted to meet specific regional compliance requirements without stalling the entire development pipeline. Those who think about these governance questions now are the ones who will avoid costly, friction-filled pivots later when they try to scale across borders.
What is your forecast for the insurtech sector over the next eighteen months?
Over the next eighteen months, I expect the industry to move decisively past the experimental “proof of concept” phase and into a period defined by tangible ROI and substantive deployments. The conversation will shift away from the hype of AI narratives and focus squarely on who is making the transition to production successfully in areas like claims decisioning and underwriting. We will likely see a more active and pragmatic M&A cycle as the market consolidates around the players who can demonstrate an actual economic edge through automation and better risk decisioning. Ultimately, the gap between narrative and reality will widen, and the teams that win will be those who can show that their technology leads to faster processing, better customer experiences, and most importantly, improved loss ratios. The opportunity remains significant, but the market will only reward those who can prove their value in the cold light of financial performance.
