Deep Fakes: The Next Big Threat

A number of mobile apps give anyone with a smartphone and a few spare minutes the ability to create and distribute a deep fake video. All it takes is a picture of, say, yourself to swap with an actor in a movie or television show. The apps do the hard part by recognizing the actor's facial structure, so when your image is added to the movie or show, the recreation is fairly seamless.

Chances are no one will actually mistake you for Brad Pitt or Reese Witherspoon, but what these apps—downloadable from the Apple App Store or Google Play—show is how simple it is for the average person to make a fake image look legitimate. And while these apps are meant for entertainment, deep fakes are becoming a new category of cybercrime that is not just a problem for networks and data, but could also have a life-or-death impact.

The potential for deep fakes in cybercrime is dire enough that the FBI released a warning in March 2021, stating “Foreign actors are currently using synthetic content in their influence campaigns, and the FBI anticipates it will be increasingly used by foreign and criminal cyber actors for spearphishing and social engineering in an evolution of cyber operational tradecraft.”

During a webinar offered by Cato Networks, Raymond Lee, CEO of FakeNet.AI, and Etay Maor, senior director of security strategy at Cato Networks, showed photos and played audio recordings that were of both real people and fakes, proving how difficult it is to tell fact from fiction.

Showing Up on the Dark Web

With increasing frequency, deep fakes are showing up on the dark web—a clear sign that threat actors see the technology as a promising new income stream. There is a burgeoning marketplace for products that create deep fakes, and within dark web chatrooms there are conversation threads dedicated to outlining the best methods for creating deep fakes for use in cybercrime. Nation-state actors and political extremists are also showing growing interest in using the technology to influence public discourse and spread propaganda.

Chatter surrounding deep fake methodology also has moved beyond the dark web to alternative social media sites where disinformation and misinformation are regularly spread, according to research from Recorded Future.

“In the future, we believe that this otherwise relatively benign community can serve as a basis for individuals to venture into illicit criminal activity using learned deep fake skills,” the report stated.

The Next Generation of Ransomware

We can expect to see ransomware paired with deep fakes for more personalized attacks, too. Dubbed RansomFake by Paul Andrei Bricman, the co-founder of REAL (Registrul Educațional Alternativ), this attack uses a deep fake video that appears to show someone engaging in incriminating or compromising activities and then threatens the victim with widespread distribution of the video if a ransom is not paid.

The victim may know the video is a fake, but those who see it may not think so. Or, deep fakes could be used in more traditional ransomware attacks, where a victim clicks on a link to the incriminating video only to have malicious software downloaded onto the computer or the company network.

“Thankfully, this level of extortion hasn’t been seen in the wild (yet). Nonetheless, the potential for this campaign to destroy a target’s reputation is exceedingly high,” a Malwarebytes blog post reported.

Detecting and Mitigating Deep Fake Threats

Deep fakes can be awfully convincing, but right now there are some ‘tells’ that can be used to distinguish a fake from the real thing, according to Lee and Maor. The eyes are the most difficult facial feature to duplicate, so if the eyes look strange in a photo, chances are it is a deep fake. Unnatural movements are another sign a video is fake.

Also, users should verify the source of a suspected deep fake and make sure any photo or video is from a reputable source. If the account or the media source looks suspicious, there is a greater likelihood the media is fake. And don’t rely on one source for confirmation—check other media to see if others are reporting the same or if social media applications are warning of a fake. Should you find a deep fake that could be related to cybercrime, contact the FBI to report it.
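Some of the source-verification advice above can be partly automated. As an illustrative sketch only (this approach is not described in the article), a simple perceptual "average hash" can flag whether a suspect copy of an image diverges from a known-good original; the pixel grids and the manipulated region below are hypothetical placeholders:

```python
# Illustrative sketch: a toy average-hash (aHash) comparison.
# Real deep fake detection is far more sophisticated; this only
# demonstrates the idea of checking a suspect image against a
# trusted copy from a reputable source.

def average_hash(pixels):
    """Compute a simple perceptual hash from a grayscale pixel grid.

    pixels: list of equal-length rows of 0-255 grayscale values.
    Returns a bit list: 1 where a pixel is brighter than the mean.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming_distance(hash_a, hash_b):
    """Count differing bits between two hashes of equal length."""
    return sum(a != b for a, b in zip(hash_a, hash_b))

# Hypothetical 4x4 grayscale thumbnails of a trusted and a suspect image.
trusted = [
    [200, 200,  50,  50],
    [200, 200,  50,  50],
    [ 50,  50, 200, 200],
    [ 50,  50, 200, 200],
]
suspect = [row[:] for row in trusted]
suspect[0][0] = 10  # a manipulated region darkens part of the image
suspect[1][0] = 10

distance = hamming_distance(average_hash(trusted), average_hash(suspect))
print(distance)  # prints 2; larger distances suggest the images differ
```

In practice, production tools use far more robust techniques (neural-network classifiers, frequency-domain analysis), but the principle is the same: never trust a single copy of a photo or video in isolation.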

We are in the early stages of criminal use of deep fakes, but they could become the next big threat vector. The time to prepare is now.


Sue Poremba

Sue Poremba is a freelance writer based in central Pennsylvania. She's been writing about cybersecurity and technology trends since 2008.
