The IT Law Wiki

Definition[]

A deep fake (a portmanteau of "deep learning" and "fake") (also spelled deepfake) is

a technique for human image synthesis based on artificial intelligence. It is used to combine and superimpose existing images and videos onto source images or videos using a machine learning technique known as generative adversarial network.
[a] digital picture or video that has been maliciously edited using an algorithm in a way that makes the video appear authentic.[1]
a video, photo, or audio recording that seems real but has been manipulated using artificial intelligence (AI). The underlying technology can replace faces, manipulate facial expressions, synthesize faces, and synthesize speech. These tools are used most often to depict people saying or doing something they never said or did.[2]

Overview[]

Because of these capabilities, deep fakes have been used to create fake celebrity pornographic videos or revenge porn. Deep fakes can also be used to create fake news and malicious hoaxes. "Deepfakes are usually pornographic and disproportionately victimize women. However, deepfakes can also be used to influence elections or incite civil unrest."[3]

Deep fakes "could present a variety of national security challenges in the years to come. As these technologies continue to mature, they could hold significant implications for congressional oversight, U.S. defense authorizations and appropriations, and the regulation of social media platforms."[4]

How are deep fakes created?[]

Though definitions vary, deep fakes are most commonly described as forgeries created using techniques in machine learning (ML) — a subfield of AI — especially generative adversarial networks (GANs). In the GAN process, two ML systems called neural networks are trained in competition with each other. The first network, or the generator, is tasked with creating counterfeit data — such as photos, audio recordings, or video footage — that replicate the properties of the original data set. The second network, or the discriminator, is tasked with identifying the counterfeit data. Based on the results of each iteration, the generator network adjusts to create increasingly realistic data. The networks continue to compete — often for thousands or millions of iterations — until the generator improves its performance such that the discriminator can no longer distinguish between real and counterfeit data.
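To make the generator-versus-discriminator competition concrete, the sketch below is a minimal illustration, assuming PyTorch and a toy one-dimensional data set rather than images; it is not the pipeline of any actual deep fake tool, and names such as real_batch and the layer sizes are illustrative only.

  # Minimal GAN sketch on toy 1-D data (illustrative only; assumes PyTorch).
  import torch
  import torch.nn as nn

  torch.manual_seed(0)

  def real_batch(n=64):
      # "Real" data: samples from a Gaussian the generator must learn to imitate.
      return torch.randn(n, 1) * 1.5 + 4.0

  generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
  discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

  g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
  d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
  bce = nn.BCELoss()

  for step in range(5000):  # the two networks compete over many iterations
      # Discriminator step: learn to separate real samples from counterfeits.
      real = real_batch()
      fake = generator(torch.randn(64, 8)).detach()
      d_loss = (bce(discriminator(real), torch.ones(64, 1))
                + bce(discriminator(fake), torch.zeros(64, 1)))
      d_opt.zero_grad(); d_loss.backward(); d_opt.step()

      # Generator step: adjust so counterfeits are scored as "real".
      fake = generator(torch.randn(64, 8))
      g_loss = bce(discriminator(fake), torch.ones(64, 1))
      g_opt.zero_grad(); g_loss.backward(); g_opt.step()

  # If training succeeds, generated samples drift toward the real mean (~4.0).
  print(generator(torch.randn(1000, 8)).mean().item())

At scale, the same competition runs over the pixels of faces and the samples of voices rather than points on a line, which is what makes GAN-based forgeries progressively harder to distinguish from genuine footage.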

Though media manipulation is not a new phenomenon, the use of AI to generate deep fakes is causing concern because the results are increasingly realistic, rapidly created, and cheaply made with freely available software and the ability to rent processing power through cloud computing. Thus, even unskilled operators could download the requisite software tools and, using publicly available data, create increasingly convincing counterfeit content.

How could deep fakes be used?[]

Deep fake technology has been popularized for entertainment purposes — for example, social media users inserting the actor Nicolas Cage into movies in which he did not originally appear, and a museum generating an interactive exhibit with artist Salvador Dalí. Deep fake technologies have also been used for beneficial purposes. For example, medical researchers have reported using GANs to synthesize fake medical images to train disease detection algorithms for rare diseases and to minimize patient privacy concerns.

Deep fakes could also be used for nefarious purposes. State adversaries or politically motivated individuals could release falsified videos of elected officials or other public figures making incendiary comments or behaving inappropriately. Doing so could, in turn, erode public trust, negatively affect public discourse, or even sway an election.

Indeed, the U.S. intelligence community concluded that Russia engaged in extensive influence operations during the 2016 presidential election to "undermine public faith in the U.S. democratic process, denigrate Secretary Clinton, and harm her electability and potential presidency." In the future, convincing audio or video forgeries could potentially strengthen similar efforts.

Deep fakes could also be used to embarrass or blackmail elected officials or individuals with access to classified information. Already there is evidence that foreign intelligence operatives have used deep fake photos to create fake social media accounts from which they have attempted to recruit Western sources. Some analysts have suggested that deep fakes could similarly be used to generate inflammatory content — such as convincing video of U.S. military personnel engaged in war crimes — intended to radicalize populations, recruit terrorists, or incite violence.

In addition, deep fakes could produce an effect that professors Danielle Keats Citron and Robert Chesney have termed the "Liar's Dividend"; it involves the notion that individuals could successfully deny the authenticity of genuine content — particularly if it depicts inappropriate or criminal behavior — by claiming that the content is a deep fake. Citron and Chesney suggest that the Liar's Dividend could become more powerful as deep fake technology proliferates and public knowledge of the technology grows.

How can deep fakes be detected?[]

Today, deep fakes can often be detected without specialized detection tools. However, the sophistication of the technology is rapidly progressing to a point at which unaided human detection will be very difficult or impossible.

The Defense Advanced Research Projects Agency (DARPA) currently has two programs devoted to the detection of deep fakes: Media Forensics (MediFor) and Semantic Forensics (SemaFor). MediFor is developing algorithms to automatically assess the integrity of photos and videos and to provide analysts with information about how counterfeit content was generated. The program has reportedly explored techniques for identifying the audio-visual inconsistencies present in deep fakes, including inconsistencies in pixels (digital integrity), inconsistencies with the laws of physics (physical integrity), and inconsistencies with other information sources (semantic integrity).

Similarly, SemaFor seeks to develop algorithms that will automatically detect, attribute, and characterize (i.e., identify as either benign or malicious) various types of deep fakes. This program will catalog semantic inconsistencies, such as unusual facial features or backgrounds, and prioritize suspected deep fakes for human review. Both SemaFor and MediFor are intended to improve defenses against adversary information operations.
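The DARPA programs' actual algorithms are not described here, but the following toy sketch (an illustration under stated assumptions, not MediFor or SemaFor code) shows the flavor of a "digital integrity" check: spliced or synthesized regions often carry a different noise fingerprint than the rest of a frame, so a sharp divergence in high-frequency noise statistics can flag a region for human review. The function names noise_energy and suspicious_region and the threshold are hypothetical.

  # Toy "digital integrity" check: compare noise statistics of a region vs. the frame.
  import numpy as np

  def noise_energy(patch):
      # Mean absolute Laplacian response: a crude proxy for sensor/compression noise.
      lap = (4 * patch[1:-1, 1:-1]
             - patch[:-2, 1:-1] - patch[2:, 1:-1]
             - patch[1:-1, :-2] - patch[1:-1, 2:])
      return np.abs(lap).mean()

  def suspicious_region(gray_frame, box, ratio_threshold=2.0):
      # Flag `box` = (top, bottom, left, right) if its noise level diverges from the frame.
      t, b, l, r = box
      region = gray_frame[t:b, l:r].astype(float)
      background = gray_frame.astype(float)
      ratio = noise_energy(region) / max(noise_energy(background), 1e-6)
      return ratio > ratio_threshold or ratio < 1.0 / ratio_threshold

  # Example: a synthetic frame containing an artificially smoothed (noise-free) region,
  # standing in for a pasted or generated face.
  frame = (np.random.rand(256, 256) * 255).astype(np.uint8)
  frame[64:128, 64:128] = 128
  print(suspicious_region(frame, (64, 128, 64, 128)))  # True: noise statistics diverge

Real detection systems combine many such signals — pixel-level artifacts, physical implausibilities, and semantic mismatches with other sources — rather than relying on any single heuristic like this one.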

Policy considerations[]

Some analysts have noted that algorithm-based detection tools could lead to a cat-and-mouse game, in which the deep fake generators are rapidly updated to address flaws identified by detection tools. For this reason, they argue that social media platforms — in addition to deploying deep fake detection tools — may need to provide a means of labeling and/or authenticating content. This could include a requirement that users identify the time and location at which the content originated or that they label edited content as such.
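As a rough illustration of what such labeling and authentication could look like (a hypothetical scheme, not any platform's actual system), the sketch below binds provenance metadata — origin time, location, and an edited-or-not flag — to a file's contents with a keyed hash, so stripped labels or later edits become detectable to anyone holding the verification key. The key handling and field names are assumptions made for the example.

  # Hypothetical content-labeling sketch using only the Python standard library.
  import hashlib, hmac, json

  SIGNING_KEY = b"platform-held secret key"  # assumption: key managed by the platform

  def label_content(media_bytes, captured_at, location, edited):
      # Build provenance metadata and bind it to the exact bytes of the media file.
      metadata = {
          "sha256": hashlib.sha256(media_bytes).hexdigest(),
          "captured_at": captured_at,
          "location": location,
          "edited": edited,
      }
      payload = json.dumps(metadata, sort_keys=True).encode()
      metadata["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
      return metadata

  def verify_label(media_bytes, metadata):
      # Recompute the HMAC and the content hash; both must match the label.
      claimed = {k: v for k, v in metadata.items() if k != "signature"}
      payload = json.dumps(claimed, sort_keys=True).encode()
      expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
      return (hmac.compare_digest(expected, metadata["signature"])
              and claimed["sha256"] == hashlib.sha256(media_bytes).hexdigest())

  video = b"...raw video bytes..."
  label = label_content(video, "2020-10-20T12:00:00Z", "38.88N,77.00W", edited=False)
  print(verify_label(video, label))          # True: content matches its label
  print(verify_label(video + b"x", label))   # False: content was altered after labeling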

Other analysts have expressed concern that regulation of deep fake technology could impose an undue burden on social media platforms or lead to unconstitutional restrictions on free speech and artistic expression. These analysts have suggested that existing law is sufficient for managing the malicious use of deep fakes. Some experts have asserted that technical tools alone will be an insufficient response, and that the focus should instead be on educating the public about deep fakes and minimizing incentives for creators of malicious deep fakes.

References[]

  1. Cyberspace Solarium Commission - Final Report, at 133.
  2. GAO, Deconstructing Deepfakes - How do they work and what are the risks? (Oct. 20, 2020) (full-text).
  3. Id.
  4. Deep Fakes and National Security, at 1.

Source[]


This page uses Creative Commons Licensed content from Wikipedia (view authors).