By Brian Long, CEO and Co-founder, Adaptive Security

In March 2025, a finance director at a multinational firm in Singapore joined what appeared to be a routine Zoom call with her senior leadership team. The CFO was there. Other executives appeared on screen. Everyone looked right. Everyone sounded right.

She authorized a $499,000 transfer before anyone flagged the fraud. Every face on that call was AI-generated.

This attack has a template. In early 2024, the same approach was used to steal $25.6 million from Arup, one of the world’s largest engineering firms, in a single afternoon. The method has spread widely, and the tools behind it have grown cheaper and easier to use every month since.

The organizations that have stopped these attacks all found the same answer: train your people to pause and verify before they act.

The Tools to Run This Attack Cost Almost Nothing

Cloning someone’s voice takes three seconds of audio and a free download.

Three seconds from a voicemail, a podcast appearance, an earnings call, or a LinkedIn video is all a current AI model needs to generate a fully interactive voice replica in real time. The model runs offline, requires no technical background, and costs nothing.

Voice deepfake incidents rose 680% year-over-year in 2025. More than 100,000 attacks were recorded in the United States in a single year. The tools behind them are available in public repositories, include no moderation controls, and run on standard consumer hardware.

What makes these attacks so effective is the preparation behind them. Before placing a single call, attackers map the target organization’s org chart, identify who holds financial authority, and study the standard approval workflow for wire transfers.

By the time the phone rings, the script is already written.

Your Security Stack Was Built for a Different Attack

A deepfake attack targets people directly. It arrives as a conversation: a familiar face on a Zoom screen, a voice that matches, an urgent request that sounds like any other.

Phone calls, video meetings, and voice requests sit outside everything your security stack was built to inspect.

The most sophisticated security stack in the world will not stop this attack if the employee fielding the call has never been trained to recognize it.

Finance Teams Are the Primary Target. Most Have Never Trained for This.

The targets in these attacks are the Controller, the accounts payable specialist, and the HR coordinator handling payroll. Deepfake attackers also call IT help desks with urgent credential reset requests, delivered in a voice that sounds exactly like the CTO. These employees have authority to move money and change account data.

The attack surface goes further than most security leaders account for. AI personas are now appearing in hiring pipelines, built from stolen LinkedIn profiles and designed to pass video interviews. Once hired, they get access to internal systems, source code, and company data.

When I started speaking with CISOs about this threat eighteen months ago, about one in ten had seen a successful deepfake attack at their organization.

Today, that number is over half. Most of what I hear never makes the news. Companies have little incentive to disclose that a voice clone just cost them $500,000.

The Financial Scale of This Problem Is Growing Fast

Deepfake fraud losses exceeded $200 million in the first four months of 2025 alone. The full year of 2024 saw $359 million in total losses. Global deepfake fraud has now crossed $2.19 billion in documented losses, with the United States accounting for the largest share.

Among organizations that lost money to a deepfake attack, 61% reported losses above $100,000. Nearly 19% reported losses above $500,000.

These are only the losses that were reported. The actual total is far higher.

Running this attack at scale requires three things: a name, a three-second audio sample, and one employee without a verification protocol. That combination exists at almost every organization right now.

Building the Reflex Before the Call Comes

The companies that stop these attacks before money moves all do one thing: they train their employees to verify before they act, regardless of how familiar or urgent the request sounds.

Three controls cost nothing to put in place: a verbal passcode for any high-value financial request, a callback requirement on a pre-stored number before approving any wire transfer, and a standing policy that urgency in any financial request is a reason to slow down. Most organizations have none of these in place today.
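Those three controls can be expressed as a deny-by-default approval rule. The sketch below is illustrative only, not an Adaptive Security product feature; every name, threshold, and passcode in it is a hypothetical stand-in for values your own policy would define.

```python
# Illustrative sketch of the three no-cost controls: a verbal passcode,
# a callback on a pre-stored number, and treating urgency as a reason
# to slow down. All names and thresholds below are hypothetical.
from dataclasses import dataclass

@dataclass
class WireRequest:
    requester: str            # role making the request, e.g. "cfo"
    amount_usd: float
    passcode_given: str       # passcode spoken by the caller
    callback_confirmed: bool  # True only after calling back a number
                              # from the internal directory, never one
                              # the caller supplies
    marked_urgent: bool

STORED_PASSCODES = {"cfo": "blue-harbor-42"}  # placeholder value
HIGH_VALUE_THRESHOLD = 10_000                 # assumed policy threshold

def approve(req: WireRequest) -> tuple[bool, str]:
    """Return (approved, reason). Deny by default; approve only when
    every applicable control has passed."""
    if req.amount_usd >= HIGH_VALUE_THRESHOLD:
        if req.passcode_given != STORED_PASSCODES.get(req.requester):
            return False, "verbal passcode mismatch"
        if not req.callback_confirmed:
            return False, "no callback on pre-stored number"
    if req.marked_urgent and not req.callback_confirmed:
        # Urgency never skips verification; it triggers more of it.
        return False, "urgent request requires callback first"
    return True, "verified"
```

The design choice that matters is the default: the function can only return "verified" after every applicable check passes, so a convincing voice alone is never sufficient to move money.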

In July 2025, an attacker used an AI-generated voice to impersonate Secretary of State Marco Rubio, sending voice messages via Signal to foreign ministers, a sitting senator, and a governor. None of the recipients acted on the messages.

The requests had arrived through an unofficial consumer messaging app, and that inconsistency alone was enough to trigger scrutiny. The incident was reported to the State Department before anyone responded. The attack failed because the recipients paused before acting.

A once-a-year compliance module will not build that kind of instinct. Deepfake audio is designed to sound exactly right. An employee who has never experienced a voice clone attack has nothing to draw on when their CFO calls requesting an immediate transfer. The reflex has to be built before that call comes.

At Adaptive Security, we simulate AI-powered deepfake attacks across voice, SMS, email, and video. When an employee receives a call from a cloned version of their CFO requesting an urgent wire transfer, it is a test.

If they fail, the platform adjusts their risk score and delivers personalized training tied directly to that scenario. Security teams get a clear, real-time view of where they are most exposed and can act before an attacker does.

The gap between a synthetic voice and a human one is closing faster than most organizations are preparing for. The teams running simulations and building verification habits today are the ones that will catch the call before the transfer clears.

Three seconds of your CEO’s voice is already on the internet. Make sure your team knows what to do when it calls.

To learn how Adaptive Security helps organizations prevent AI-powered social engineering attacks, visit adaptivesecurity.com.