Featured Article

The flawless fraud of real-time deepfakes

How fraudsters are using real-time deepfake technology to commit their crimes and what must be done to fight back



Real-time deepfakes, in which fraudsters use video or audio generated with artificial intelligence to replicate someone’s voice, image and movements as the scheme is happening, are the latest tool criminals are using to perpetrate a host of frauds. In this article, the authors describe real-time deepfake schemes and what can be done to combat them.

In February, an employee of a multinational company thought he was logging into a virtual meeting with his organization’s chief financial officer (CFO) and several of his co-workers. At first, he’d been skeptical of the meeting. The initial message he received about it seemed like a phishing email since it mentioned a highly important transaction that needed to be carried out in secrecy. He set his fears aside once the meeting started; everyone else on the call appeared to be people he’d seen before. But nobody else on the conference call was an actual person; they were all elaborate real-time deepfake video recreations in which fraudsters used artificial intelligence (AI) to replicate the voices, images and movements of people as the scam happened. During the call, the fake CFO instructed the employee to transfer $25 million to multiple accounts at multiple Hong Kong banks belonging to the criminals. The employee was in Hong Kong, but the fabricated video-meeting participants were ostensibly in London. It wasn’t until later, when the employee checked with the corporation’s head office, that he learned he’d been the victim of a scam. (See “Finance worker pays out $25 million after video call with deepfake ‘chief financial officer’,” by Heather Chen and Kathleen Magramo, CNN, Feb. 4, 2024.)

In April, fraudsters used a real-time deepfake video of Tesla CEO Elon Musk to defraud a South Korean woman out of £40,000 (approximately $50,000). In this classic romance scheme with a high-tech twist, the victim believed Elon Musk had added her as a friend on Instagram. In a subsequent deepfake video, he told her he loved her and convinced her to deposit the money in a South Korean bank account with promises that she’d get rich. (See “South Korean woman loses £40k in Elon Musk romance scam involving deepfake video,” by Shweta Sharma, Independent, April 24, 2024 and “Drake’s fake Tupac, a $50,000 Elon Musk romance scam, and AI-generated racist tirades: Deepfakes are terrorizing society,” by Jasmine Li, Fortune, April 29, 2024.)

These are just a few examples of frauds committed with real-time deepfakes — the latest way that fraudsters are perpetrating schemes with AI technology. Deepfake is an umbrella term often used in news media to encompass all sorts of AI-generated schemes, including those carried out with prerecorded deepfakes and those occurring in real time. In this article, we focus on the current crop of deepfake schemes — real-time deepfakes, which are generated as the scheme occurs. (See “Real-time deepfakes are a dangerous new threat. How to protect yourself,” by Jon Healey, Los Angeles Times, May 11, 2023.)

Because of the ever-advancing technology of AI and machine learning (ML), fraudsters can have live interactions with their victims to impersonate business executives to authorize transactions; impersonate family members in need of help in “grandparent” scams; portray public figures conveying misinformation; and deceive people for their money in romance scams. Real-time deepfakes present a unique challenge for organizations and individuals alike. How do we fight back against something so convincing that we don’t even know we’re being defrauded? In this article, we’ll examine the dangers that real-time deepfakes pose to organizations and individuals, and what’s being done to address a fraud that’s literally too good to be true.
