By Quentin Ullrich
This Note argues that the false association cause of action under Section 43(a)(1)(A) of the Lanham Act is well suited to addressing the problems posed by deepfakes, and it outlines for practitioners the mechanics of such a cause of action. A “deepfake,” a portmanteau of “deep learning” and “fake,” is a digitally manipulated, often highly realistic video that replaces the likeness of one person with that of another. Because they deceive their viewers, deepfakes pose a threat to privacy, democracy, and individual reputations. Existing scholarship has focused on defamation, privacy tort, copyright, regulatory, and criminal approaches to the problems raised by deepfakes. These legal approaches may at times succeed in penalizing the creators of pernicious deepfakes, but they are not grounded in a theory of consumer confusion, which this Note argues is the principal mischief posed by deepfakes. Further, since deepfakes are often uploaded anonymously and the only effective remedy lies against website owners, certain of these approaches are frustrated by the Communications Decency Act’s immunization of website owners from liability for torts with a “publication” element. Hence, this Note proposes that the law of false association, which is principally concerned with consumer confusion, is best suited to addressing deepfakes. Importantly, a Lanham Act cause of action would allow victims of deepfakes to sue website owners under a theory of contributory infringement, because the Communications Decency Act does not immunize website owners from intellectual property claims.