How accountants can combat the rising threat of deepfake fraud

Faked live video and audio of executives can fool employees and cost companies millions, but accountants can take steps to mitigate the risk.
Image: AI-generated woman's face (Rytis/Adobe Stock).

Imagine you receive an email invitation from your boss to attend a video call. The email is from the headquarters office regarding a “confidential transaction”. You accept the invitation and join the call, in which the company CFO tells you to transfer 200 million Hong Kong dollars ($25.6 million) across 15 transactions, using five Hong Kong banks. And you do it because the people in the video call look and sound exactly like you remember them.

Only later do you realise you fell victim to a deepfake: the people you recognised in the video call were digital clones of your colleagues that fraudsters created with the help of artificial intelligence (AI). And the destination of the money transfer was the fraudsters’ pockets.

This isn’t a hypothetical example of deepfake fraud, a potent, fast-spreading, and high-tech form of financial scam. It happened to a finance professional working for British multinational design and engineering firm Arup in Hong Kong. The company reported the incident to Hong Kong police in January 2024.

Shortly after news outlets across the world made the incident public, Deloitte polled more than 1,000 executives on their experiences with deepfake attacks. Nearly 26% said their organisation had experienced one or two in the previous 12 months. And more than half of the respondents (51.6%) expected the number and size of deepfake fraud attacks targeting their organisations to increase over the subsequent 12 months.

“We hear about more and more of these on a regular, everyday basis,” said Jonathan T. Marks, CPA/CFF/CITP, CGMA, a US-based partner in BDO’s Forensic Accounting & Regulatory Compliance practice. Finance and banking, he said, are key targets because of their access to money and data.

In November 2024, the increasing number of reported deepfake fraud attacks prompted the US Treasury Department’s Financial Crimes Enforcement Network (FinCEN) to alert financial institutions and help them identify the fraud schemes. (See the sidebar, “The Year Deepfakes Became Commonplace,” at the end of this article.)

Other authorities and experts are raising the alarm, too.

“If people are not paying attention to this, it might be too late when an incident happens, especially with the speed at which information can get moved, funds can get transferred,” said Satish Lalchand, a US-based principal in Deloitte Transactions and Business Analytics LLP who specialises in fraud detection and AI strategy.

But there are steps finance professionals can take to mitigate the deepfake-fraud risk. The Deloitte poll, for example, highlighted what respondents said they have done to prepare. (See the graphic, “Tools to Prepare the Workforce,” below.)

Tools to prepare the workforce

Around 1,500 executives answered the following question in Deloitte’s poll: What is the primary tool your organisation is using to prepare its workforce for deepfake-related fraud?

Source: Deloitte, Generative AI and the Fight for Trust.

Forms deepfakes can take

Can you spot a deepfake?

It’s sometimes possible to identify a deepfake by observing it closely. AI-generated images may have incongruities, such as a person with too many fingers, or a nonsensical layout for a building. Synthetic video or audio of a person may simply feel off in a way that’s hard to pinpoint.

“It’s lighting, it’s cadence, it’s these certain things” that can betray an imposter, Marks said.

Deepfakes are digital imitations of images, videos, documents, and voices that fraudsters create with generative AI. They gained widespread public attention through head-spinning viral videos, like an eerily accurate digital clone of Tom Cruise, complete with convincing facial expressions and body language.

And deepfakes are advancing just as fast as AI. The technology is becoming more widespread and easier to deploy. The advances can make the tells that something is a deepfake more subtle, and scammers can mask flaws by transmitting the video in lower resolution or by inserting digital noise and compression artifacts (distortions that can appear when video is compressed).

The increasing power and accessibility of deepfake technology also means that attackers can pursue a wider range of victims.

Previously, a convincing fake might have been feasible only if the subject appeared in a large volume of audio and video recordings, but experts say fakes now can be generated from far less source material. That could make it easier to target executives at smaller companies, even if little video or audio of them is publicly available.

Deloitte’s Center for Financial Services estimated that AI-generated fraud could reach around $40 billion in losses in the US by 2027, a 32% annual rate of increase from $12.3 billion in 2023.

“Gen AI tools have greatly reduced the resources required to produce high-quality synthetic content,” FinCEN reported in its November alert.

For example, in March 2024, OpenAI, the company that developed ChatGPT, reported that its Voice Engine technology could generate “natural-sounding speech that closely resembles the original speaker” from just a 15-second sample.

OpenAI acknowledged the potential abuse of the technology while also highlighting legal and technological guardrails it had put in place. But generative AI advances aren’t just coming from tech giants anymore. As the open-source DeepSeek models have shown, small outfits can create advanced generative AI models.

In other words, you won’t catch every deepfake simply by looking for glitches. But other techniques may still help.

Clay Kniepmann, CPA/CFF/ABV, a US-based forensic, valuation, and litigation principal at Anders CPAs + Advisors, advised keeping an eye out for odd behaviour, such as a potential impersonator’s lack of knowledge about the real person.

“It’s knowing who you work with,” Kniepmann said. “Know who that approver is, what their personality is like.”

More broadly, the most reliable defence may be to stay aware of the more universal signs of fraud: unusual and unexpected requests, false impressions of urgency, and stories that don’t stand up to scrutiny.

Steps finance can take to stop deepfake fraud

One of the voices sharing insights on deepfakes is Rob Greig, Arup’s global chief information officer, who spoke to the World Economic Forum (WEF) about lessons learned. Following the deepfake attack, Arup conducted a deep-dive assessment of whether the incident had put systems, clients, or data at risk, Greig said in a video interview. The company determined that none were compromised.

“People were persuaded, if you like, deceived into believing they were carrying out genuine transactions that resulted in money leaving the organisation,” he said in the WEF interview.

Deception is nothing new. The technology to create deepfakes is, Greig said, adding, “Deepfakes sounds very glamorous, but what does that actually really mean? It means someone successfully pretended to be somebody else, and they used technology to enable them to do that.”

Instead of counting on employees to spot fakes, organisations should consider investments in training and technology to deflect attacks and limit their damage. Kniepmann, Marks, and Lalchand suggested the following steps:

Enhance anti-fraud training and culture

Companies should already be training employees to identify fraud attempts made through more common methods, such as email scams.

Deepfakes can be added to this curriculum, making personnel aware of the possibility that a phone call or other contact could be fake. Companies also could start to test how employees respond to mock fraud attempts involving deepfakes, Kniepmann suggested, similar to how many organisations send mock phishing emails.

“It’s training on the possibility of those things,” Kniepmann said. “Conduct test attacks, and see who’s passing, who’s failing, who’s falling for this.”

In general, employees should be reminded to trust their instincts, maintain a sceptical stance, and seek backup in any suspicious situation, Marks said. “When you lower your level of scepticism, the white-collar criminal will take advantage of you.” Creating an anti-fraud culture, he said, is a never-ending endeavour.

“Scepticism is not something that you set and forget,” Marks added. “It’s constantly changing and acclimating based on the circumstances. You just can’t get complacent.”

Assess policies for large monetary and data transfers

Companies should examine their internal controls over who can authorise major financial and data transactions and how that happens. “I am all for any reason to revisit your internal controls,” Kniepmann said.

Controls should detail who has the authority to initiate and approve transactions. Companies also may consider setting policies about communications: Perhaps requests for money and data transfers should come only through more secure channels.

Marks encouraged the use of platforms with two-factor authentication, making it harder for AI fakes to worm their way into a conversation.
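To make such controls concrete, here is a minimal, hypothetical sketch of how a finance team might encode them: flagging transfer requests that arrive outside approved channels or exceed a threshold and routing them for out-of-band verification. The channel names, threshold, and function names are illustrative assumptions, not controls described by the experts quoted here.

from dataclasses import dataclass

# Hypothetical policy values for illustration only.
APPROVED_CHANNELS = {"treasury_portal", "erp_workflow"}
OUT_OF_BAND_THRESHOLD = 50_000  # amount above which a call-back is required

@dataclass
class TransferRequest:
    requester: str
    channel: str   # e.g. "email", "video_call", "treasury_portal"
    amount: float
    approver: str

def verification_steps(req: TransferRequest) -> list[str]:
    """Return the extra verification steps a request should trigger."""
    steps = []
    if req.channel not in APPROVED_CHANNELS:
        # A video call or email alone never moves money under this policy.
        steps.append("re-submit the request through an approved channel")
    if req.amount >= OUT_OF_BAND_THRESHOLD:
        # Call the approver back on a directory number, not one supplied in the request.
        steps.append("out-of-band call-back to the named approver")
        steps.append("second authoriser sign-off (dual approval)")
    return steps

# Example: a large request arriving by video call triggers all three steps.
print(verification_steps(TransferRequest("analyst", "video_call", 25_600_000, "cfo")))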

Encourage the use of human connection and ‘tradecraft’

One of the best defences against AI may be human connection. If employees know each other well, they’re more likely to spot a deepfake.

“Fraudsters often exploit publicly available information [to steal an identity], but they can’t replicate the nuance of personal shared experience,” Marks said.

Employees can even be trained to specifically check and challenge the authenticity of their conversational partners โ€” a practice Marks refers to as tradecraft. Tradecraft can include checking knowledge of business and personal topics.

Companies also can use secret codewords to ensure an imposter hasn’t entered the conversation.

“You’re adding a little friction, but you’re layering on security,” Lalchand said.
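The codeword idea can be taken a step further so the word itself is never spoken on a call that might be recorded. The sketch below is a hypothetical challenge-response check, not a method the experts describe: it lets two colleagues prove they both know a pre-agreed codeword without revealing it.

import hmac, hashlib, secrets

def make_challenge() -> str:
    # The person verifying picks a random challenge and reads it out.
    return secrets.token_hex(4)

def respond(codeword: str, challenge: str) -> str:
    # The counterpart combines the codeword with the challenge and reads back a short digest.
    digest = hmac.new(codeword.encode(), challenge.encode(), hashlib.sha256)
    return digest.hexdigest()[:8]

def verify(codeword: str, challenge: str, response: str) -> bool:
    return hmac.compare_digest(respond(codeword, challenge), response)

# Example with a hypothetical codeword agreed offline.
challenge = make_challenge()
answer = respond("blue-heron-42", challenge)
assert verify("blue-heron-42", challenge, answer)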

Embrace AI-detection technology

Tech companies are racing to deploy technologies that can detect AI fakery. Some tools integrate directly with video platforms like Zoom, while others can check that a connection is secure and appears to trace back to the appropriate geographical location, Lalchand said.

Businesses also are fighting identity fraud more broadly with new identity verification platforms that tap into multiple sources of data so they can’t be defeated by a single fake document.

“The real key here is to stay proactive, investing in detection tools, in validating authenticity,” Marks said.

A growing number of vendors offer tools to analyse pre-recorded video files, among them DuckDuckGoose AI, Deepware Scanner, and Google SynthID. Another tool, Reality Defender, can even spot fakes in real-time videoconferences.

Assess your vulnerabilities

While there are many strategies for fighting deepfakes, they should all begin with an assessment of a company’s fraud risk profile, the experts agreed.

“White-collar criminals are profiling you, and you should be profiling them,” Marks said. “For an organisation to really get started, you have to do a risk assessment.”

In the face of a constantly changing threat, every organisation should be reinforcing its defences against fraud of all sorts.

“You’re only as strong,” Lalchand warned, “as the weakest link in your organisation.”


The year deepfakes became commonplace

The number of detected AI-generated deepfakes increased fourfold from 2023 to 2024, accounting for 7% of all fraud attempts in 2024, according to an analysis by Sumsub, a UK company that makes identity-verification technology.

The country with the largest increase in deepfake attacks was South Korea, but the Sumsub analysis also found attacks significantly spiked in the Middle East, Africa, and Latin America and the Caribbean.

Deepfake examples reported worldwide in 2024 included:

  • A video impersonating the boss of a finance employee in China’s Shaanxi province, which tricked her into transferring 1.86 million yuan ($258,000) into an account fraudsters had established. According to the China Daily, a phone call to her boss to verify the transaction revealed the fraud, and police were able to arrange an emergency freeze of much of the transferred money.
  • Images that falsely suggested Hurricane Milton had flooded Walt Disney World in Orlando in the US. The images were shared by a Russian news agency and circulated widely on social media.
  • Fake images of earthquake survivors in Turkey and Syria that scammers circulated on social media to channel donations into their PayPal accounts and cryptoasset wallets, the BBC reported. The magnitude-7.8 earthquake in February 2023 killed more than 50,000 people, damaged at least 230,000 buildings, and left millions without heat or water.

Andrew Kenney is a US-based freelance writer. To comment on this article or to suggest an idea for another article, contact Oliver Rowe at Oliver.Rowe@aicpa-cima.com.


LEARNING RESOURCE

Fraud Risk Management Guide, 2nd Edition

An update to the original version released in 2016, the second edition addresses more recent anti-fraud developments, revises terminology, and adds important information related to technology developments, specifically data analytics. It is intended to give organisations of all sizes across all industries the information necessary to design a plan specific to the risks for that entity.



MEMBER RESOURCES

“Recruitment Fraud: What You Need to Know”, FM magazine, 24 January 2025

“AI and Fraud: What CPAs Should Know”, Journal of Accountancy, 1 May 2024

“How to Keep Businesses Safe on Social Media”, FM magazine, 28 September 2023

JofA Artificial Intelligence (AI) Coverage
