With the launch of AI models like Nano Banana Pro, which can generate flawless photorealistic images and text, the line between real and synthetic media is dissolving at an alarming rate. For corporate leaders and risk managers, this isn’t a future problem; it’s a present and escalating crisis. To navigate this new landscape, we spoke with Simon Glairy, a leading expert in insurance and risk management specializing in the corporate impact of AI. Our conversation explored the tangible attack scenarios enabled by this new technology, the critical first hours of responding to a deepfake crisis, the overlooked danger of private-channel fraud, and the technical and procedural shifts companies must make to survive in an era where seeing is no longer believing.
You mentioned that AI models like Nano Banana Pro have overcome weaknesses such as text generation. Could you provide a specific example of how this leap in quality changes the threat for a corporate client, perhaps by walking us through a potential attack scenario?
Absolutely. The game has changed entirely because attackers can now create a complete, believable context. In the past, you could often spot a fake by looking for garbled text in the background of an image or on a document. That heuristic is gone. Imagine this scenario: a senior executive in a company’s finance department receives an email with a highly detailed infographic attached, seemingly from the marketing division, outlining urgent funding needs for a new, confidential project. The infographic is perfect—flawless typography, accurate logos, complex data visualized beautifully. Moments later, they get a Microsoft Teams call. It’s the CEO, the video is perfect, and he says, “Did you see the project funding request? We need to move on that wire transfer within the hour before the opportunity closes.” With just 10 seconds of a CEO’s voice from a public earnings call, an AI can now replicate it perfectly. The combination of a visually perfect document and a convincing audio or video follow-up completely dismantles the human capacity for suspicion. The old red flags are simply not there anymore.
Your Deepfake Response Endorsement covers crisis management but not reputational damage. Can you walk me through the first 24 hours of a response for a client hit by a public deepfake, detailing the roles of forensic analysis and PR guidance during that critical time?
The first 24 hours are a frantic, coordinated race against viral spread. The moment a client reports a malicious deepfake—say, a video of their CEO making an inflammatory statement—the clock starts. The first priority is containment. Our legal support team immediately begins issuing takedown requests to social media platforms. Simultaneously, the forensic analysis team gets to work. They’re not just looking at the video for visual artifacts; they are digging into the technical guts of the file. They analyze for model fingerprints, which are subtle statistical patterns that generative models leave behind, almost like a digital signature. They also scrutinize the metadata for mismatches in timestamps or location data. While this is happening, the PR guidance team is drafting communications. The initial public statement can’t just be a flat denial; it has to be carefully worded to instill confidence while the forensic proof is being gathered. Within hours, we aim to have a preliminary forensic report that allows the company to state not just that the video is fake, but why they know it’s fake. This shifts the narrative from a corporate denial to a factual debunking, which is crucial for managing partner and investor communications.
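To make the metadata side of that forensic work concrete, here is a minimal sketch of the kind of consistency check an analyst might script, assuming the open-source exiftool utility is installed. The file name, the tag list, and the one-day tolerance are illustrative assumptions, not part of any specific vendor's toolkit.

```python
# Sketch: flag metadata timestamps that disagree with the date the sender claims.
# Assumes the `exiftool` CLI is on PATH; tags and tolerance are illustrative.
import json
import subprocess
from datetime import datetime

def read_metadata(path: str) -> dict:
    """Dump all metadata tags for a media file as a dict via exiftool."""
    out = subprocess.run(
        ["exiftool", "-json", "-dateFormat", "%Y-%m-%dT%H:%M:%S", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)[0]

def timestamp_mismatches(meta: dict, claimed_date: str) -> list[str]:
    """List tags whose dates are more than a day away from the claimed date."""
    claimed = datetime.fromisoformat(claimed_date)
    issues = []
    for tag in ("CreateDate", "ModifyDate", "FileModifyDate"):
        raw = meta.get(tag)
        if not raw:
            continue
        try:
            value = datetime.fromisoformat(raw[:19])
        except ValueError:
            issues.append(f"{tag} is unparseable: {raw!r}")
            continue
        if abs((value - claimed).days) > 1:
            issues.append(f"{tag} = {value.date()} vs claimed {claimed.date()}")
    return issues

if __name__ == "__main__":
    meta = read_metadata("ceo_statement.mp4")  # hypothetical file name
    for issue in timestamp_mismatches(meta, "2024-05-02T09:00:00"):
        print("MISMATCH:", issue)
```

A check like this only catches careless fabrications, of course; the model-fingerprint analysis Glairy describes requires specialized detection tooling rather than a short script.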
You expressed greater concern for private-channel deepfakes, like payment fraud, over public ones. What specific metrics or claim trends lead to this view, and how does a fake video call scam differ from a traditional business email compromise in terms of detection and financial impact?
My concern comes down to a matter of scale and defense. A public deepfake, while damaging, is a “one-to-many” attack. To combat it, you need a relatively small number of key institutions—major media outlets, social platforms—to adopt detection tools. But private-channel deepfakes, which are essentially the next evolution of business email compromise, are a “many-to-many” problem. We already see that BEC is one of the most common and costly cyber insurance claims. Now, imagine that instead of a spoofed email, an accounts payable clerk gets a video call from a person they believe is a trusted vendor requesting a change to their bank account details. The emotional manipulation is far more potent. An email can be forwarded, analyzed, and scrutinized for small errors. A live, persuasive voice or face on a call triggers an immediate, human response that bypasses procedural checks. The financial impact is also more direct and often irreversible. It’s not about reputational damage down the line; it’s about cash being wired to a criminal account right now. To stop that, you need every single person who processes a transaction to have the tools and training to verify, and that’s a much harder mountain to climb.
You described cryptographic watermarking as the most promising detection method. What are the practical, step-by-step challenges to implementing this system across a large corporation, and how does it “flip the problem” from proving a fake to proving authenticity in a real-world investigation?
Cryptographic watermarking is powerful, but the rollout is a significant undertaking. The first challenge is hardware and software deployment. You need to ensure that every device used for official communications—from an executive’s smartphone to the webcams in conference rooms—is capable of cryptographically signing any image or video at the exact moment of capture. This creates an unbreakable chain of provenance. The second, and perhaps bigger, challenge is cultural and procedural. You have to train your entire workforce to not only use these tools but to actively look for the “verified” signature on incoming media before taking action. It has to become as ingrained as looking for the lock icon in a web browser. This is how it “flips the problem.” Right now, during an incident, our forensic teams are scrambling to find evidence that proves a video is fake. With a watermarking system in place, the burden of proof shifts. If an alleged video of the CEO comes in without a valid cryptographic signature, it is considered illegitimate by default. The investigative question is no longer, “Can we find a flaw in this fake?” but simply, “Does this piece of media have the verified signature?” If the answer is no, the investigation is over, and the content is dismissed.
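The "flip" Glairy describes can be illustrated with a minimal sketch of sign-at-capture, verify-before-trust, using Ed25519 signatures from the Python `cryptography` package. A real content-provenance deployment embeds far more context (device identity, capture time, edit history); the helper functions and file contents below are illustrative assumptions.

```python
# Sketch: media is signed at the moment of capture; anything that arrives
# without a valid signature is treated as illegitimate by default.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def sign_at_capture(device_key: Ed25519PrivateKey, media_bytes: bytes) -> bytes:
    """Runs on the capture device: sign the media the moment it is recorded."""
    return device_key.sign(media_bytes)

def is_authentic(company_pub: Ed25519PublicKey, media_bytes: bytes, sig: bytes) -> bool:
    """Runs on the receiving side: unsigned or tampered media fails by default."""
    try:
        company_pub.verify(sig, media_bytes)
        return True
    except InvalidSignature:
        return False

if __name__ == "__main__":
    device_key = Ed25519PrivateKey.generate()   # provisioned per capture device
    public_key = device_key.public_key()        # distributed company-wide

    genuine = b"...video bytes captured by the CEO's webcam..."
    sig = sign_at_capture(device_key, genuine)
    print(is_authentic(public_key, genuine, sig))    # True: verified signature

    deepfake = b"...video bytes from a generative model..."
    print(is_authentic(public_key, deepfake, sig))   # False: dismissed by default
```

The design point is exactly the one Glairy makes: the receiving side never tries to prove the fake is fake; it only asks whether a valid signature is present.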
Given that human judgment is becoming unreliable, what are the first three steps a company should take to establish the “formal verification procedures” you mentioned? Could you share an anecdote or example of how this would apply to a seemingly routine internal financial request?
Human intuition is now a liability, so process has to become the primary defense. The first step is to mandate out-of-band verification for any sensitive request. If you receive a video call, an email, or even a voice note asking for a money transfer or credentials, you must verify it through a completely separate, pre-established channel, like a direct phone call to a known number or a message on a different secure platform. The second step is to create a clear “authenticity” standard. This could be adopting the cryptographic watermarking we discussed, where official media must carry a verifiable signature. The third step is active training through simulation. Don’t just send memos; run drills. Send a well-crafted deepfake request to your finance team and see what happens. I saw a case recently that perfectly illustrates this. An employee received a voice call from his “manager” who sounded distressed and said he needed an urgent payment made to a new vendor. The voice was a perfect match. But the company had a formal procedure: any new vendor payment requires a verification code sent to the manager’s registered mobile device. The employee asked the “manager” on the phone for the code. The caller, of course, made an excuse. The employee held firm to the procedure, terminated the call, and prevented a six-figure fraudulent transfer. It wasn’t his ear that saved the company; it was his adherence to the process.
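The procedure in that anecdote can be sketched in a few lines: a one-time code goes only to the approver's registered device over a separate channel, and the payment is released only if the person on the call can read it back. The notification channel and identifiers here are illustrative assumptions, not a description of any particular payments system.

```python
# Sketch: out-of-band verification of a sensitive payment request.
import hmac
import secrets

def issue_code(registered_device: str) -> str:
    """Generate a one-time code and push it to the pre-registered device
    over a separate channel (SMS, authenticator push, etc.)."""
    code = f"{secrets.randbelow(1_000_000):06d}"
    print(f"[out-of-band] sending code to {registered_device}")
    return code

def approve_payment(expected_code: str, code_from_caller: str) -> bool:
    """Release funds only if the caller can produce the code that was sent
    to the registered device; constant-time compare avoids leaking digits."""
    return hmac.compare_digest(expected_code, code_from_caller)

if __name__ == "__main__":
    expected = issue_code("manager-registered-mobile")  # hypothetical identifier
    spoken = input("Code read back by the caller: ")
    if approve_payment(expected, spoken):
        print("Verified out of band; payment may proceed.")
    else:
        print("Verification failed; terminate the call and escalate.")
```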
What is your forecast for the corporate security landscape in the next five years, as this “arms race” between generation and detection continues?
I believe we’re in a strange, turbulent transition period right now, but in five years, we will have settled into a new reality defined by zero-trust verification. Unverified media will be treated with the same deep suspicion as an unsigned contract or an insecure website. I forecast a significant divide in preparedness. Large enterprises will have invested heavily in technical solutions like cryptographic watermarking and institutional AI detection, making them harder targets. Consequently, threat actors will shift their focus downstream to small and medium-sized businesses that lack the resources for these advanced defenses. The role of the human in the security chain will also fundamentally change. We will no longer be asked to be amateur detectives, trying to spot the fake with our own eyes. Instead, our primary responsibility will be to rigorously follow formal verification procedures, making the process, not our perception, the last line of defense. The technology will create ever-more-perfect illusions, but our survival will depend on a disciplined, collective refusal to believe what we see without technical proof.
