Friend or fraud? What is a deepfake and how does it impact fraud?

December 19, 2023

Part 1

What is a deepfake anyway? 

In May 2019, a YouTuber called “ctrl-shift-face” uploaded a video interview of actor Bill Hader doing celebrity impressions. At first glance, it appears Hader is simply using his comedy chops to impersonate famous people. Hader even launches into his famous Arnold Schwarzenegger voice, mimicking the Austrian superstar’s distinctive drawl. While a bit dated, this example helps illustrate when deepfakes really breached mainstream pop culture. In the years since, deepfakes have only become more widespread.

Unlike other common Schwarzenegger impersonations, what’s unique about this video isn’t Hader’s impression but what happens as he performs it. His face eerily morphs into the former governor’s, as if he were wearing a digital mask. It looks as though Schwarzenegger himself were sitting in the studio talking to the audience.

The video is entertaining, but there’s an unsettling quality to how closely Hader resembles Schwarzenegger. Comments underneath the synthetic media clip say things like “After he started looking like Arnold, I forgot what he actually looks like,” “I thought I was hallucinating,” and “he even looks like him lol.”

The term “deepfake” often refers to a doctored video — like the Hader/Schwarzenegger impression — that uses Artificial Intelligence (AI) and facial recognition technology to mimic the facial expressions and characteristics of one person and superimpose them on another person's body. And video isn't the only deepfake medium out there: voice, image, and even biometric deepfakes also exist. Entertaining in this light, the ability to create convincing facsimiles of other people has serious implications for the real world and for identity lifecycles.

 

Deepfakes have become a popular digital tool for creating synthetic media

Consumers, businesses, and governments alike are debating the implications of deepfakes, especially after recent uses for nefarious purposes like fake news and deceptive media that could potentially affect national elections. Here’s our own SVP of Identity, Chris Briggs, with an interview about staying ahead of the deepfake curve.

As Briggs states, and as this blog series helps make clear, “It takes a multi-layered strategy to defend against a lot of the emerging AI attack vectors” like deepfakes. Companies that take the time to understand deepfakes can better “build countermeasures to combat new approaches before they become full-blown security holes.”

Learn more about deepfake solutions

In this deep dive on deepfakes, we discuss:

  • What is a deepfake?
  • Where did deepfake technology come from?
  • How do you make a deepfake?
  • How do you spot a deepfake?
  • Can deepfakes be used in new account opening and account takeover fraud?
  • How does a company fight deepfakes in 2024?

A deepfake is synthetic media that typically serves fraudsters. They create it using machine learning and artificial intelligence algorithms to alter videos, forge footage of people doing or saying malicious things, create convincing synthetic audio, and produce other fake content in which humans appear. Con artists can even generate deepfakes from existing images to create places, people, and things that are entirely synthetic.

People have used deepfake technology for a variety of purposes, ranging from fun to malicious. For example:

  • Biometrics like facial expressions are generated and superimposed onto another person’s body in fake videos
  • Human voices matching the timbre and pitch of celebrities, like Jay-Z singing Billy Joel, appear in deepfake audio recordings
  • Politicians appear to say things they’ve never said

As the technology gets better, fraudsters will likely continue to use malicious deepfakes for cybercrimes and corporate espionage.

 

Where did deepfake technology come from?

The first known deepfakes were AI-generated videos posted to Reddit in 2017 by a user whose handle, “deepfakes,” has since been credited with naming the practice and bringing it into public view.

Deepfake creators often drew on publicly available images (from Google image searches, social media websites, stock photo databases, and YouTube videos) and open-source frameworks like TensorFlow to train machine-learning algorithms that insert people’s faces onto pre-existing videos frame by frame.

Although there are glitches and obvious tells a viewer can notice, the videos are quite believable, and they are only getting more convincing as more users continue to experiment. At the time, the original creator even released an app called “FakeApp,” making it easier for less tech-savvy users to create fake content, from funny videos to those with more malicious aims. Today, there are likely hundreds of deepfake generators.

 

How do you make a deepfake?

Deepfake creation doesn’t have the lowest barrier to entry, but it isn’t especially difficult either, given the proliferation of DIY tools. Bad actors typically need some combination of a powerful computer, artificial intelligence and machine learning programs, and hundreds of thousands of images of the selected people.

 

Here’s the process for a deepfake video (a minimal code sketch follows these steps):

  1. First, a user runs thousands of facial pictures of two selected people through an encoder, an artificial intelligence algorithm that uses a deep machine-learning network.
  2. The encoder compares the images of the two faces, compressing them into shared common features.
  3. A second AI algorithm, called a decoder, recovers the faces from the compressed images. Multiple decoders may be used: one to find and analyze the first person’s face, the other to do the same for the second person’s face.
  4. To perform the face swap, a user feeds the encoded images of person A’s face into the decoder trained on person B. The decoder then reconstructs the face of person B with the expressions and orientation of face A, and vice versa. For more convincing fakes (or malicious deepfakes), this is done on thousands of frames.
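
To make the shared-encoder, two-decoder idea concrete, here is a minimal sketch in PyTorch. The tiny network sizes, the random stand-in data, and the training loop are simplifying assumptions for readability, not the recipe of any particular tool.

```python
import torch
import torch.nn as nn

LATENT = 256
IMG = 64 * 64 * 3  # flattened 64x64 RGB face crops

# Shared encoder: learns features common to both faces (pose, expression, lighting).
encoder = nn.Sequential(
    nn.Flatten(),
    nn.Linear(IMG, 1024), nn.ReLU(),
    nn.Linear(1024, LATENT), nn.ReLU(),
)

def make_decoder():
    # One decoder per identity: rebuilds that person's face from the shared code.
    return nn.Sequential(
        nn.Linear(LATENT, 1024), nn.ReLU(),
        nn.Linear(1024, IMG), nn.Sigmoid(),
    )

decoder_a, decoder_b = make_decoder(), make_decoder()
params = (list(encoder.parameters()) + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in data: in practice these would be thousands of aligned face crops.
faces_a = torch.rand(32, 3, 64, 64)
faces_b = torch.rand(32, 3, 64, 64)

for step in range(100):
    # Each decoder is trained only to reconstruct its own identity.
    recon_a = decoder_a(encoder(faces_a)).view(-1, 3, 64, 64)
    recon_b = decoder_b(encoder(faces_b)).view(-1, 3, 64, 64)
    loss = loss_fn(recon_a, faces_a) + loss_fn(recon_b, faces_b)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The swap (step 4): encode person A's frames, decode with person B's decoder.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a)).view(-1, 3, 64, 64)
```

Because the encoder only keeps features the two faces share, person B’s decoder repaints B’s identity over A’s pose and expression, which is exactly the swap described in step 4.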

Another method of creating deepfakes uses a generative adversarial network (GAN). The notable difference here is that a GAN creates an entirely new image or video that looks incredibly real but is entirely fake. Here’s how it works (a training-loop sketch follows the list):

  1. A GAN pits two artificial intelligence algorithms against each other. The first algorithm, known as the generator, is fed a noise signal and turns it into a fake image.
  2. This synthetic image is passed to a discriminator, another algorithm that is fed a stream of real images.
  3. The two components (generator and discriminator) are functionally adversarial, playing against each other like a “forger and a detective,” as described by Shen et al., students who used a GAN to create deepfakes in a study at UCSD.
  4. The process is repeated countless times, with the discriminator and generator both improving. After a while, the generator starts producing a realistic image, or deepfake. This could be a person, place, or thing.
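
Here is an equally minimal sketch of that forger-versus-detective loop, again in PyTorch. The small fully connected networks and random stand-in images are our own simplifications; real GANs use much larger convolutional models.

```python
import torch
import torch.nn as nn

NOISE, IMG = 64, 28 * 28  # tiny grayscale images keep the sketch readable

# Generator (the "forger"): turns a noise signal into a candidate fake image.
generator = nn.Sequential(
    nn.Linear(NOISE, 256), nn.ReLU(),
    nn.Linear(256, IMG), nn.Tanh(),
)

# Discriminator (the "detective"): scores how real an image looks (0 to 1).
discriminator = nn.Sequential(
    nn.Linear(IMG, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

# Stand-in for the stream of real images, scaled to match the Tanh output.
real_images = torch.rand(64, IMG) * 2 - 1

for step in range(200):
    # 1. Train the detective: real images should score 1, fakes should score 0.
    fakes = generator(torch.randn(64, NOISE)).detach()
    d_loss = (bce(discriminator(real_images), torch.ones(64, 1))
              + bce(discriminator(fakes), torch.zeros(64, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2. Train the forger: try to make the detective score fakes as real.
    fakes = generator(torch.randn(64, NOISE))
    g_loss = bce(discriminator(fakes), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

As the two losses push against each other, the generator’s outputs drift toward images the discriminator can no longer tell apart from the real stream.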

To take a deepfake to the next level, Tom Cruise’s fake face needs a fake Tom Cruise voice. Layering audio onto deepfakes typically happens in one of three ways (a small splicing sketch follows the list):

  1. Replay-based deepfakes, which use a microphone recording[1] or cut-and-paste techniques to cobble a new audio string together from existing voice snippets.
  2. Speech synthesis, which often uses a text-to-speech system to create real-sounding audio from a written script.[2]
  3. Imitation-based voice deepfakes, which transform an actual speech clip from one subject to make it sound as if another subject is saying it.[3]
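
As a small illustration of the cut-and-paste approach in point 1, here is a sketch using the open-source pydub audio library. The file name and millisecond timestamps are invented placeholders; in practice, snippet boundaries would come from transcript alignment.

```python
from pydub import AudioSegment  # pip install pydub (also requires ffmpeg)

# Placeholder file: an existing recording of the target speaker.
source = AudioSegment.from_file("target_speaker.wav")

# pydub slices audio by millisecond offsets; these timestamps are invented
# for illustration only.
word_a = source[12_000:12_600]
word_b = source[47_300:48_100]
word_c = source[3_500:3_900]

# Cobble a new audio string together from existing voice snippets.
spliced = word_a + word_b + word_c
spliced.export("spliced_output.wav", format="wav")
```

Detection tools look for exactly the artifacts this process leaves behind: abrupt spectral seams and inconsistent room noise at each splice point.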

 

Can deepfakes be used for fraud? How?

Instagram users share more than 1 billion images daily on the platform.[4] Google likely stores even more selfies among the petabytes it holds. As a result, most people have some kind of digital footprint, whether that’s a LinkedIn profile picture or family photos shared on Facebook. All these pictures are potential inputs for AI to begin creating convincing deepfakes for deceptive media.

Deepfakes have been used for celebrity-baiting, political dirty tricks, pornography, extortion, fraud, revising existing forms of entertainment and art, and more innocuous meme creation in social media circles. One example shows how con artists can leverage the technology to deceive even high-ranking executives.

The CEO of an energy company believed he was on the phone with his boss and following orders when he immediately transferred €220,000 (approx. $243,000) to the bank account of a Hungarian supplier. In reality, the voice on the phone was an audio deepfake the fraudsters had created.

As the technology improves and becomes commoditized, it could be used for identity theft and other cybercrimes, including fraudulent account opening and account takeover. Bad actors can use deepfakes for various types of fraud, including:

  • New account opening fraud: Using the deepfake-creation methods described above, a fraudster could scour a social network, collect hundreds of images to create a deepfake image or audio clip, and attach it to a synthetic identity: an amalgamation of stolen identity information. If it’s good enough, the fraudster could use the compelling deepfake and identity to open a new account at a bank, take out hundreds of thousands in loans, and bust out without repaying, leaving the bank with the losses. Tough break.
  • Account takeover fraud: In 2022 alone, nearly 2,000 data breaches impacted hundreds of millions of individuals.[5] Some of those breaches might have included biometric databases. With that data, perpetrators can create fakes that mimic biometric traits and trick systems that rely on face, voice, vein or gait recognition.
  • Phishing scams: Modern phishing attempts have incorporated fake video messages, which are often personalized and tailored to the target. Using deepfake technology, scammers can generate video clips of trusted figures, celebrities, or even family members, asking the recipient to undertake certain financial actions, making the deceit seem all the more authentic. Like the example of the unwitting CEO and the fake Hungarian supplier, fraudsters also use voice-imitation techniques to simulate calls from trusted entities. These fake audio calls can be convincing enough to persuade individuals to share sensitive information or transfer funds to unauthorized accounts.
  • Impersonation attacks: Again, the fake Hungarian supplier comes into the spotlight. Like that fraudster, others use deepfakes to mimic corporate executives or even high-ranking government officials. Successful fakes can trick employees into divulging sensitive information or money. In the case of government officials, this information passed to bad actors may even be considered espionage.
  • Synthetic identity theft: Criminals may even create entirely fake personas, generating new, fictitious identities complete with photos, voiceprints and even background stories by harvesting pieces of legitimate identity information and cobbling them together. These synthetic identities can then be used to open bank accounts, apply for credit cards, or even commit large-scale financial fraud, making it hard for authorities to trace activity back to a real individual.

Scam artists will use these various forms of deepfakes to commit fraudulent financial transactions. By manipulating audio or video to mimic an actual person (or fabricate a new identity), bad actors can forge verbal or visual approvals for everything from a wire transfer to a loan application. They’ll use deepfakes and other forms of fraud for financial gain, to damage reputations, or to sabotage a competing company or even a government.

In part two of this blog post, we’ll examine how organizations can spot deepfakes and set up better safeguards against them and other types of fraud.

 

Part 2

What are the solutions for deepfakes?

Detecting deepfakes is a hard problem. Poorly done or overly simplistic deepfakes can, of course, be spotted by the naked eye, and some detection tools can catch cruder flaws. But the AI models that generate deepfakes are getting better all the time, and soon we will have to rely on automated deepfake detectors to flag them for us.

To counter this threat, it’s important to make sure companies and providers use two- or multi-factor authentication. Multi-factor authentication approaches layer various forms of verification on top of one another to create more obstacles for fraudsters. For example, facial authentication software may include certified liveness detection that provides an additional safeguard against deepfakes. And because sophisticated deepfakes can spoof common movements like blinks and nods, authentication processes must evolve to guide users through a less predictable range of live actions.
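
As a sketch of what a “less predictable range of live actions” could look like, an authentication flow might randomize its liveness prompts so a pre-rendered deepfake cannot anticipate them. The action list and flow below are our own illustrative assumptions, not any specific product’s behavior.

```python
import secrets

# Pool of live actions; a pre-rendered deepfake cannot anticipate the sequence.
ACTIONS = ["turn head left", "turn head right", "blink twice",
           "smile", "raise your eyebrows", "look up"]

def liveness_challenge(num_actions=3):
    # Use secrets (not random) so the sequence is unpredictable to an attacker.
    pool = list(ACTIONS)
    challenge = []
    for _ in range(num_actions):
        action = secrets.choice(pool)
        pool.remove(action)
        challenge.append(action)
    return challenge

print(liveness_challenge())  # e.g. ['smile', 'look up', 'blink twice']
```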

 

Detecting deepfakes in financial scams

In an era where deepfakes are increasingly being used in financial scams, safeguarding against them is essential. Fortunately, as the technology advances, methods to detect these scams are evolving too. Organizations can employ the following strategies to detect and counter deepfakes, which will be especially important in the financial services realm.

 

Visual analysis

Deepfakes, while sophisticated, often display inconsistent facial features. AI struggles to replicate minute facial expressions, eye movements, or even the way hair and facial features interact. Algorithms that generate deepfakes can also produce unnatural lighting and shadows; visual analysis may uncover shadows inconsistent with the light source or reflections that do not align correctly.
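
Researchers have noted that early deepfakes reproduced blinking especially poorly, which suggests one concrete check: count blinks from per-frame eye-aspect-ratio (EAR) values. The sketch below is hypothetical; the thresholds and stand-in data are assumptions, and the per-frame values would come from a facial landmark detector such as dlib or MediaPipe.

```python
import numpy as np

def eye_aspect_ratio(eye):
    # eye: six (x, y) landmarks around one eye (corners at indices 0 and 3).
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_series, closed_threshold=0.2):
    # A blink is a dip below the threshold followed by a recovery above it.
    blinks, closed = 0, False
    for ear in ear_series:
        if ear < closed_threshold:
            closed = True
        elif closed:
            blinks += 1
            closed = False
    return blinks

# Stand-in data: one minute of per-frame EAR values at 30 fps with no dips,
# the kind of eerily blink-free sequence early deepfakes produced.
ear_series = [0.30, 0.31, 0.29, 0.30] * 450

blinks_per_minute = count_blinks(ear_series)
if not 10 <= blinks_per_minute <= 30:  # humans average roughly 15-20 per minute
    print(f"Suspicious: {blinks_per_minute} blinks/min is outside the human range")
```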

 

Verification

While deepfakes can replicate voices, they might also contain unnatural intonations, rhythms or subtle distortions that stand out upon close listening. Voice analysis software can help identify voice anomalies to root out deepfakes. Implementing authentication processes that layer codes or follow-up questions on top of voice commands can help ensure the request is genuine. Where files are concerned, automated document-verification systems can analyze documents for inconsistencies, such as altered fonts or layout discrepancies, that might indicate forgery.
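
To illustrate what automated voice-anomaly screening might look like, here is a hedged sketch using the librosa library to compare an incoming call’s spectral fingerprint against an enrolled voiceprint. The file names and the threshold are placeholders; production systems use far more sophisticated speaker models.

```python
import numpy as np
import librosa  # pip install librosa

def voice_features(path):
    # MFCCs give a compact spectral fingerprint of a voice.
    y, sr = librosa.load(path, sr=16_000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1), mfcc.std(axis=1)

# Placeholder files: an enrolled voiceprint and an incoming call recording.
enrolled_mean, enrolled_std = voice_features("enrolled_customer.wav")
call_mean, _ = voice_features("incoming_call.wav")

# Flag calls whose spectral profile drifts far from the enrolled voiceprint.
z_scores = np.abs(call_mean - enrolled_mean) / (enrolled_std + 1e-8)
if z_scores.mean() > 2.0:  # illustrative threshold, not a tuned value
    print("Voice anomaly detected: escalate to step-up verification")
```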

 

Multi-factor authentication

The name of the game is layered security. Adding facial, voice, or other biometric recognition adds another hoop for a scammer to jump through, even if they manage to impersonate a voice or face. Device recognition, which verifies that requests come from previously authenticated or recognized devices, is another option for multi-factor authentication.
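
For instance, a rotating one-time code defeats a deepfake that only clones a face or voice, because it requires possession of the enrolled device. Below is a minimal sketch using the pyotp library; the account name and issuer are placeholder values.

```python
import pyotp  # pip install pyotp

# Enrollment: generate a per-user secret and share it with an authenticator app.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print("Provisioning URI for the user's authenticator app:",
      totp.provisioning_uri(name="user@example.com", issuer_name="ExampleBank"))

# Login: the code below would normally be typed by the user from their device.
submitted_code = totp.now()

# Even a perfect deepfake of the user's face or voice cannot produce this code.
if totp.verify(submitted_code, valid_window=1):  # tolerate one 30s step of drift
    print("Second factor accepted")
else:
    print("Second factor rejected")
```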

 

Blockchain and digital signatures

Blockchain technology promises an immutable record of all transactions. By using digital signatures and blockchain ledgers, organizations can implement provenance tracking for financial transactions to ensure the authenticity and integrity of financial instructions. Any unauthorized or tampered transaction would lack the correct signature, flagging it for review.
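
Here is a hedged sketch of the signing half of that idea, using Ed25519 keys from Python’s cryptography library. The payment payload is a placeholder, and anchoring the public key to a blockchain ledger is left out for brevity.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Key generation happens once, at enrollment; the public key could be
# published to a ledger for provenance tracking.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# The payment instruction is signed by an enrolled system, not approved by
# a voice on a phone call. The payload below is a placeholder.
instruction = b"wire 220000 EUR to account X, reference 2024-0117"
signature = private_key.sign(instruction)

# Any tampered or unauthorized instruction fails this check, no matter how
# convincing the accompanying deepfake call was.
try:
    public_key.verify(signature, instruction)
    print("Instruction authentic: signature matches the enrolled key")
except InvalidSignature:
    print("Instruction rejected: flag for manual review")
```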

Whatever approach organizations take, layering various authentication factors on top of one another is paramount for preventing deepfake-enabled fraud. The other key to robust protection against deepfakes is continuous verification. Rather than verifying identity once at sign-up, organizations must integrate verification measures throughout the entire customer experience, even after the account has been set up. Some companies routinely invoke identity verification (for strong security, it is important to understand authentication vs. verification) whenever a dormant account suddenly becomes active for high-value transactions, or when passive analytics indicate elevated fraud risk.

One way to do this is to request a current selfie, then compare it to the biometric data stored from onboarding (where storage is allowed by regulations and permissioned by the customer). In very risky situations, you could also request a new snapshot of the originally submitted, government-issued physical ID, take a few seconds to verify the authenticity of the document, and compare the photo on the ID against the selfie.
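
As one illustration, that selfie comparison could be sketched with the open-source face_recognition library, as below. The file names are placeholders, and 0.6 is the library’s conventional match threshold rather than a recommendation.

```python
import face_recognition  # pip install face_recognition

# Placeholder files: the selfie captured at onboarding (stored with consent,
# where regulations allow) and the freshly requested selfie.
onboarding = face_recognition.load_image_file("onboarding_selfie.jpg")
current = face_recognition.load_image_file("current_selfie.jpg")

onboarding_enc = face_recognition.face_encodings(onboarding)
current_enc = face_recognition.face_encodings(current)

if not onboarding_enc or not current_enc:
    print("No face found in one of the images: escalate to manual review")
else:
    distance = face_recognition.face_distance([onboarding_enc[0]], current_enc[0])[0]
    print("Match" if distance < 0.6 else "Mismatch: step up verification")
```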

The good news is that governments, universities, and tech firms are all funding research into new deepfake detectors. A large consortium of tech companies has also kicked off the Deepfake Detection Challenge (DFDC) to find better ways to identify manipulated content and build better detection tools.

 

Machine learning and AI automate, strengthen anti-fraud efforts

When combining manual scrutiny with automated systems to detect and prevent fraud, AI- and machine learning-infused solutions further bolster anti-fraud efforts. Many authentication systems are trained in pattern recognition and anomaly detection, and they are better and more efficient than humans alone at scanning files and authentication attempts for nuances people struggle to recognize. Over time, these tools’ detection capabilities should improve as they learn from more data. It’s worth diving deeper into how AI and ML impact anti-fraud efforts.

Machine learning has become an indispensable tool in detecting deepfakes. Fraudsters have learned to deceive traditional detection methods, which often rely on human expertise. Compared to traditional detection methods, machine learning models can offer the following (see the anomaly-detection sketch after this list):

  1. Automated analysis: Models can quickly analyze vast amounts of video and audio data, identifying anomalies at speeds beyond human capabilities.
  2. Pattern recognition: Over time, machine learning models can recognize patterns characteristic of deepfake production algorithms, thus identifying manipulated content.
  3. Continuous learning: As new types of deepfakes emerge, machine learning models can be retrained and adapted, ensuring they remain effective over time.
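
As a toy example of the pattern-recognition and anomaly-detection points, the sketch below trains scikit-learn’s IsolationForest on features from genuine media only and flags anything unfamiliar. The eight-dimensional stand-in features are invented; a real system would extract descriptors such as blink rate, lip-sync offset, or spectral statistics.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Stand-in features: in practice, per-clip descriptors extracted from media.
rng = np.random.default_rng(0)
genuine_clips = rng.normal(loc=0.0, scale=1.0, size=(1000, 8))

# Train only on genuine media; unfamiliar patterns are treated as anomalies.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(genuine_clips)

suspect_clip = rng.normal(loc=4.0, scale=1.0, size=(1, 8))  # off-distribution
if detector.predict(suspect_clip)[0] == -1:  # -1 marks an outlier
    print("Anomaly: route this clip to deeper deepfake analysis")
```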

 

AI also helps firms stay ahead of evolving deepfake technologies

Unfortunately, criminals’ abilities grow as technology capabilities evolve. Investing in AI helps financial institutions develop more advanced detection tools and stay abreast of emerging threats. AI-enabled tools can simulate deepfake attacks to test detection systems, shore up vulnerabilities, and train team members to better recognize fraudulent actions.

By collaborating with technology companies offering AI-enabled tools, financial services and other firms can broaden their deepfake knowledge base and spread their anti-fraud blanket even further. Spreading anti-fraud defenses serves a second purpose: educating the public. Firms and technology providers that are well-versed in the potential risks of deepfakes can provide PSAs and other collateral to help inform prospective customers about their own risk.

Organizations can also train AI on datasets specific to financial fraud and identity theft to stay a step ahead of bad actors. This means feeding AI algorithms datasets tailored to financial fraud and identity theft scenarios, such as (see the augmentation sketch after this list):

  1. Real-world data collection: Financial institutions can use instances of past fraud attempts to train models on actual threats faced by the industry.
  2. Synthetic data generation: Creating datasets is a resource-intensive task. To bolster real-world datasets, algorithms can generate synthetic examples of potential fraud scenarios, ensuring a comprehensive training environment for models.
  3. Continuous updating: As fraud methods evolve, it's essential to continually update the training dataset to reflect new tactics and techniques employed by fraudsters. AI can perform this task much more efficiently than humans alone can.
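
As a small example of point 2, the sketch below uses SMOTE from the imbalanced-learn library to synthesize extra minority-class fraud examples in an invented, stand-in dataset; real pipelines would start from actual labeled transactions.

```python
import numpy as np
from imblearn.over_sampling import SMOTE  # pip install imbalanced-learn

# Stand-in data: 5,000 transactions with 10 features each, only ~1% fraudulent,
# mimicking the class imbalance typical of real fraud datasets.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 10))
y = (rng.random(5000) < 0.01).astype(int)

# SMOTE interpolates between real fraud cases to synthesize new minority-class
# examples, giving a downstream model far more fraud patterns to learn from.
X_balanced, y_balanced = SMOTE(random_state=0).fit_resample(X, y)
print(f"Fraud examples before: {y.sum()}, after augmentation: {y_balanced.sum()}")
```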

 

Legal responses to deepfake financial crimes require ethical considerations

Some jurisdictions have started drafting or amending legislation to address deepfake-related crimes, especially when they lead to financial fraud. Penalties for creating or disseminating malicious deepfakes can include imprisonment and hefty fines. Elsewhere, legal firms collaborate with tech companies for forensic analysis to verify digital content, and financial services organizations have enhanced their identity-verification protocols with processes like Know Your Customer (KYC).

Though these efforts are aimed at thwarting deepfake and other types of fraud, they carry with them privacy and other ethical concerns. As private and public sector organizations move forward with anti-fraud efforts, they’ll have to ensure they maintain strict data privacy and security protocols when they collect data, to avoid unauthorized use of that information, data breaches, anonymity infringement or consent issues.  

Regardless of how lawmakers and organizations approach anti-deepfake fraud, there is a need for clear regulations about what constitutes informed consent and correct data usage in the age of deepfakes.

 

How to proceed in the age of deepfakes

Firms in every industry can take measures to safeguard against deepfake and other types of fraud. Informing employees and customers about risk, implementing ongoing identity verification and constant transaction monitoring are common ways of buttressing security against novel forms of fraud. 

Holding awareness sessions, running training events with real-world examples, and updating employee and customer bases about emerging types of fraud are all ways firms can educate the people linked to their organization about fraud and how to identify it.

Strong authentication methods, such as multi-factor authentication (MFA) and biometric verification (whether behavioral, voice, or any other form of biometrics), add layers of security to every interaction with the firm’s apps or services.

Financial services firms must also regularly monitor financial transactions, something they likely do anyway. But monitoring for fraud, such as deepfake fraud, may require additional processes, such as automated alerts, more frequent statement reviews, internal audits and even more regular contact with clients regarding potentially questionable or anomalous transactions.
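
A toy sketch of one such automated alert: flag a transaction that sits far outside an account’s spending history using a simple z-score. The history and threshold here are invented, and real monitoring systems use much richer behavioral features.

```python
import numpy as np

def is_anomalous(history, new_amount, z_threshold=3.0):
    # Flag a transaction far outside the account's usual spending pattern.
    mean, std = np.mean(history), np.std(history)
    if std == 0:
        return new_amount != mean
    return abs(new_amount - mean) / std > z_threshold

history = [42.5, 18.0, 77.3, 23.9, 55.0, 31.2, 60.4]  # placeholder history
wire = 220_000.0  # the kind of sudden transfer seen in the CEO voice scam

if is_anomalous(history, wire):
    print("Automated alert: hold transaction pending out-of-band confirmation")
```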

All of these efforts should augment existing cybersecurity defenses. Organizations that leverage anti-phishing software, firewalls and intrusion detection systems, VPNs, and regular software updates stand a much better chance should fraudsters come knocking.

 

Multifaceted attack vector requires multifaceted protective approach

Deepfakes have been a growing concern in many fields, especially as the technology to create them becomes more advanced and accessible, and they have already been used in financial fraud and identity theft in various ways. The safeguards above will help firms feel confident they are doing their utmost to protect against fraud. And because deepfakes are a multifaceted threat to financial security and trust in the digital age, they require a multifaceted defense. Vigilance, ongoing research into detection methods, and broad-based awareness campaigns are essential to counteract this emerging challenge.

 

Click here for more information on deepfakes, other emerging security threats, and solutions that can help you 


In-text citations:

[1] https://dl.acm.org/doi/10.1145/3351258

[2] https://arxiv.org/abs/2106.15561

[3] https://link.springer.com/chapter/10.1007/978-3-030-61702-8_1

[4] https://photutorial.com/photos-statistics/

[5] https://www.statista.com/statistics/273550/data-breaches-recorded-in-the-united-states-by-number-of-breaches-and-records-exposed/
