Threat Intelligence Blog

Posted May 22, 2020

First emerging in 2017, deepfakes (fabricated images, videos, and/or audio recordings) have garnered substantial attention for how realistic they appear as the technology used to create them has advanced. Initial forays into creating deepfakes focused on swapping celebrity faces onto adult entertainment performers’ bodies. These early attempts were fairly easy to identify; even a casual observer could spot imperfections such as face discoloration, inconsistent lighting, and poorly synced sound and video. Fast forward three short years, and the technology has become much more sophisticated, making deepfakes difficult to detect, even for algorithms that analyze images for inconsistencies in pixels, coloration, and/or distortion. At least one researcher believes that, if the technology develops further, it could outpace algorithms’ ability to reliably identify deepfakes.
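To make the idea of pixel-level analysis concrete, the sketch below (Python, NumPy only) follows one line of detection research: synthetic images often carry abnormal energy in high spatial frequencies, so comparing an image’s azimuthally averaged power spectrum against the steep low-frequency falloff typical of natural photographs can flag candidates. The threshold and the synthetic test images are illustrative assumptions, not a production detector.

```python
# Illustrative sketch: flag images whose high-frequency energy deviates
# from the falloff typical of natural photographs. GAN upsampling layers
# often leave excess energy in high spatial frequencies.
# The 0.25 threshold below is an arbitrary assumption for demonstration.
import numpy as np

def radial_power_spectrum(gray: np.ndarray, nbins: int = 64) -> np.ndarray:
    """Azimuthally averaged power spectrum of a 2-D grayscale image."""
    f = np.fft.fftshift(np.fft.fft2(gray))
    power = np.abs(f) ** 2
    h, w = gray.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h / 2, x - w / 2)
    bins = np.linspace(0, r.max(), nbins + 1)
    which = np.digitize(r.ravel(), bins) - 1
    spectrum = np.bincount(which, weights=power.ravel(), minlength=nbins)
    counts = np.bincount(which, minlength=nbins)
    return spectrum[:nbins] / np.maximum(counts[:nbins], 1)

def high_freq_ratio(gray: np.ndarray) -> float:
    """Fraction of spectral energy in the top half of spatial frequencies."""
    spec = radial_power_spectrum(gray)
    return float(spec[len(spec) // 2:].sum() / spec.sum())

def looks_synthetic(gray: np.ndarray, threshold: float = 0.25) -> bool:
    # Natural photos concentrate energy at low frequencies; an unusually
    # flat spectrum is one (weak) signal of synthetic content.
    return high_freq_ratio(gray) > threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    noisy = rng.standard_normal((256, 256))   # flat spectrum: "suspicious"
    smooth = np.cumsum(np.cumsum(rng.standard_normal((256, 256)), 0), 1)
    print(looks_synthetic(noisy), looks_synthetic(smooth))
```

A single spectral statistic like this is easily fooled on its own; real detectors combine many such signals, which is partly why the detection race described above is so difficult.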

Early deepfakes were built on generative adversarial networks (GANs), in which two machine learning (ML) models compete against one another: one model creates the fakes while the other tries to detect them. This process improved the quality of the output and produced more believable deepfakes, but not without problems: GANs had a hard time preserving image alignment consistently from one video frame to the next. According to the world’s largest professional organization dedicated to engineering and applied sciences, current deepfakes are created using a constellation of artificial intelligence (AI) and non-AI algorithms. What is clear is that deepfakes are becoming more polished for audience consumption; perhaps worse, the capacity to produce them is outpacing the ability to detect them.
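The adversarial training loop itself is simple to express. The sketch below (PyTorch, on a toy 1-D distribution rather than faces) shows the two competing models described above: a generator learning to produce samples that fool a discriminator, and a discriminator learning to tell real from fake. It illustrates the mechanism only; the architecture, data, and hyperparameters are assumptions for demonstration, and real deepfake pipelines are far larger and, as noted, often combine GANs with other AI and non-AI components.

```python
# Minimal GAN sketch (PyTorch): two models compete, as described above.
# Toy task: the generator learns to mimic a 1-D Gaussian, not faces --
# but the adversarial mechanism is the same one early deepfakes used.
import torch
import torch.nn as nn

latent_dim = 8
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 2.0      # "real" data: N(2.0, 0.5)
    fake = G(torch.randn(64, latent_dim))

    # Discriminator step: label real samples 1, generated samples 0.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # Generator step: try to make the discriminator call fakes real.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()

print(f"generated mean ~ {G(torch.randn(1000, latent_dim)).mean():.2f} (target 2.0)")
```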

In today’s interconnected environment, people consume news from a variety of sources beyond mainstream media, increasing the chances that they will encounter deepfakes. The versatility of deepfakes facilitates their use in a variety of activities, including but not limited to:

  • Pornography. In 2018, approximately 95 percent of deepfakes were used in connection with pornography.
  • Disinformation. According to one U.S. think tank, deepfakes could facilitate the propagation of disinformation operations.
  • Political Attacks. Political figures are increasingly at risk of being targeted by deepfakes, as evidenced by a May 2019 deepfake video that featured the Speaker of the House of Representatives muddling her words in an attempt to make her look mentally ill. Although certain platforms now have anti-deception policies in place that allow for the takedown of such videos, the line between what stays and what gets removed is blurry. This was evident toward the end of April 2020, when the President retweeted a video of former Vice President Joe Biden that had been manipulated to show him twitching his eyebrows and lolling his tongue. Because the poster of origin had labeled the video a “deep fake,” it arguably did not violate such anti-deception policies.
  • Crime. In March 2019, criminals used a deepfake audio recording that impersonated a chief executive’s voice to deceive the CEO of a UK-based energy firm into executing a fraudulent wire transfer.

Deepfakes Threaten Businesses

Much of the concern expressed about deepfakes focuses on their potential to compromise the integrity of the political process. However, their potential to spread misinformation and disinformation poses a real threat to businesses, their brands, and their bottom lines. In a 2019 disinformation/brand impact study, 78 percent of consumers said they believed misinformation negatively impacts brand reputation. Considering that 74 percent of respondents in the same study said they were more willing to do business with a brand they respected, anything that puts consumer-seller trust at risk can be detrimental to organizations. Possible repercussions of deepfakes against companies include, but are not limited to:

Reputation Damage. Deepfakes can be used to tarnish the images and reputations not only of individuals but also of the organizations they represent. Controversial, off-putting, or inaccurate social media posts by an executive can affect a company’s share price: in 2018, when Elon Musk made a puzzling tweet regarding a buyout of Tesla shares, share prices were negatively affected.

Financial Theft. A U.S. cyber security company detected three instances in which a company’s “CEO” called a senior financial officer to request an immediate transfer of funds. In each case, scammers had mimicked the CEOs’ voices with an AI program trained on speech from YouTube videos, TED talks, and other publicly available sources.

Extortion. Deepfake videos are being used to extort people for profit. Extortionists have created deepfakes to humiliate and shame specific targets, placing the videos on porn sites. The scandal of having a prominent corporate officer caught in such an extortion attempt could have negative effects on the company.

Market Manipulation. While no specific incidents of market manipulation have been observed to date, well-placed and well-timed deepfakes could be leveraged to influence the rise and fall of a company’s stock price. Because market manipulation requires the public dissemination of misleading information to succeed, such technology has the potential to cause substantial problems for companies.

What Can Businesses Do?

Detection software and digital forensic techniques are being developed that use machine learning to analyze an individual’s style of speech and movement (a “soft biometric signature”); these are initially intended to protect world leaders from deepfakes. One researcher-developed algorithm claimed nearly a 97 percent success rate in identifying deepfakes, according to the team’s published research paper.
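As an illustration of how a soft-biometric approach can work, the sketch below (Python, scikit-learn) fits a one-class model to feature vectors summarizing a person’s characteristic facial movement and speech mannerisms, then flags clips whose features fall outside that learned profile. The feature extraction step is stubbed out with random data here, and the feature count and model parameters are assumptions; in the published research the features come from facial behavior tracking, not from this code.

```python
# Illustrative "soft biometric signature" sketch (scikit-learn).
# Fit a one-class model on feature vectors describing how one person
# characteristically moves and speaks; clips whose features fall outside
# that profile are flagged as possible impersonations.
# extract_features() is a placeholder -- a real system would derive these
# from facial action units, head pose, and speech timing, not random data.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(42)
N_FEATURES = 20  # e.g., correlations between facial movements (assumed)

def extract_features(n_clips: int, shift: float = 0.0) -> np.ndarray:
    """Stand-in for per-clip behavioral features; `shift` simulates an
    impersonator whose mannerisms differ from the genuine subject."""
    return rng.standard_normal((n_clips, N_FEATURES)) + shift

# Fit the subject's profile on many authentic clips.
genuine_clips = extract_features(500)
profile = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(genuine_clips)

# Score new material: +1 = consistent with the profile, -1 = anomalous.
new_real = extract_features(10)
suspected_fake = extract_features(10, shift=2.5)
print("authentic clips:", profile.predict(new_real))
print("suspect clips  :", profile.predict(suspected_fake))
```

A one-class formulation fits this problem well because defenders typically have abundant authentic footage of the person being protected but few or no examples of the deepfakes that will eventually target them.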

Unless hostile actors stop leveraging deepfake technology, which is unlikely, businesses need to develop plans to mitigate the threat to their brands. While not a comprehensive list, the following suggestions and techniques may aid an organization in protecting itself from deepfakes:

  • Develop and test a deepfake response plan. Understanding how an organization needs to respond to deepfakes should be considered part of an organization’s cyber resiliency.
  • Know the process to request “takedown” of deepfake videos. Develop a points-of-contact list of appropriate stakeholders (law enforcement, Internet Service Provider, etc.) to eliminate unnecessary lag time.
  • Conduct employee training. Develop a training plan to help employees recognize deepfake videos and give them a process for reporting them. This is important given that, according to Forbes magazine, a Facebook Transparency Report found that the average person fails to identify 40 percent of deepfake videos.

For more information, request to see LookingGlass Cyber Solutions, Inc.’s STRATISS Report titled “Deepfakes: An Emerging Tool for Threat Actors” that was published January 6, 2020.
