Deepfakes’ Deepening Impact on the Law

Matthew F. Ferraro

The increased use of rudimentary deception techniques in visual imagery and the growth of hyperreal synthetic media raise several significant and developing legal issues. Whether used positively for entertainment and accessibility or misused for abuse and deception, image alteration and manipulation pose evolving questions for attorneys, legislators, and businesses. Here is an overview of some of these considerations:

Cheapfakes: Crudely edited, mislabeled, or decontextualized images, audio, and video, commonly called “cheapfakes,” are the most common form of manipulated media online today. Experience shows that it does not take cutting-edge synthetic media generation or alteration to propel false narratives. The surge of cheapfakes will only deepen the distrust many feel toward all media, leading to a growing “liar’s dividend”: the benefit malefactors can draw from being able to dismiss authentic media as fake, because the public is primed to doubt the veracity of all inconvenient evidence. These challenges will only grow with the democratization and greater believability of advanced synthetic media.

Deepfakes: The advent of artificial intelligence (AI)-generated synthetic media, or “deepfakes,” creates new and controversial challenges that legislators, attorneys, and businesses must now grapple with, such as:

Ownership. Who owns a deepfake? That is often an open question. The source data fed into an AI generator to create synthetic images may belong to one or more people, affording those rightsholders copyright claims in the generated media. Determining when the use of underlying source data to create a deepfake is “fair use,” or when the output is sufficiently “transformative” that it is no longer covered by the source’s copyright, will vary case by case. In the meantime, businesses that want to use deepfakes for commercial purposes will need to consider the provenance of source data, secure appropriate licenses, and address the other intellectual-property implications of their use.

Deepfake-Specific Laws. Legislators around the country have moved with notable speed to regulate deepfakes. So far, seven states have passed laws barring deepfakes of some kind. Congress has passed, and the President has signed, four laws related to deepfakes. Roughly thirty bills on deepfakes are under consideration in about a dozen states and in Congress.

The laws adopted so far fall into four basic categories: (1) banning most nonconsensual deepfake pornography, (2) barring many deepfakes of political candidates distributed before an election, (3) regulating the digital likenesses of certain individuals after death, and (4) directing reports and research on deepfake technology and potential countermeasures.

Traditional Laws. Run-of-the-mill laws that protect a person’s reputation and image can also be applied to the new world of manipulated media. For example, many states recognize the tort of “false light,” under which a person can sue when something highly offensive is falsely implied to be true about them. That standard could govern claims by a victim whose face was manipulated by AI to create explicit or offensive content.

Courtroom Evidence. In the recent trial of a January 6 rioter, the defendant’s attorney asked an FBI agent on the stand whether an incriminating video of the defendant had been checked to see if it was a “deepfake.” The witness said the video showed no signs of manipulation or falsity. That such questions are being raised in a high-profile case points to broader issues for litigants about how media evidence will be treated in the courtroom going forward. In the past, such evidence would usually have been accepted as trustworthy without question.

As Stanford’s Riana Pfefferkorn has observed, trial lawyers will increasingly need to consider how and when to introduce or seek to exclude video evidence; judges will have to rule on those motions; experts may be called to verify (or challenge) the authenticity of media; and jurors will have to decide for themselves whether evidence can be accepted as true or may be the product of manipulation. Critically, lawyers will also need to reflect on their ethical duties to submit only authentic evidence to courts, and to avoid stoking jurors’ skepticism for tactical advantage by claiming without good reason that evidence is manipulated.

The proliferation of deepfakes will occasion what author Nina Schick calls a “social media revolution.” That upheaval has already begun to reshape the legal world, with much more to follow. Attorneys, businesses, technologists, and society at large will need to work together to address these issues in this new era.

Matthew F. Ferraro (@MatthewFFerraro), a former U.S. intelligence officer, is a visiting fellow at the National Security Institute at George Mason University and a counsel at Wilmer Cutler Pickering Hale and Dorr LLP.