The Internet of Things and Potential Remedies in Privacy Tort Law

By Alexander H. Tran

The Internet of Things (IoT) is an intriguing digital phenomenon in technology that creates many legal challenges as the world becomes more interconnected through the Internet. By creating a connected system, the IoT links a network of physical objects, like consumer devices, and enables these devices to communicate and exchange data. In the very near future, almost every consumer device, from cars to coffee mugs, may connect through the Internet. The IoT has incredible potential to better society by providing immense amounts of rich sensory data for analytics and other uses. Nevertheless, there are also many latent dangers that could manifest as the IoT proliferates, including privacy violations and security risks.

The legal scholarship surrounding privacy issues with respect to the IoT is currently underdeveloped. This Note adds to the discussion of privacy law by analyzing the legal repercussions of the IoT and its relationship to privacy tort law. It summarizes the foundations of privacy law and current regulations that apply to the IoT and concludes that current laws and regulations provide limited remedies for consumers harmed by the IoT. It then provides a potential solution by suggesting that two privacy torts, the public disclosure of private facts tort and the intrusion upon seclusion tort, can provide partial civil remedies for those consumers. Each of the two privacy torts has evolved in different ways since its creation, and this Note explores the advantages and disadvantages of both. Finally, this Note advocates for the expanded use and revitalization of these privacy torts through judicial application in IoT cases as a potential strategy for regulating the IoT.

The Privacy Case for Body Cameras: The Need for a Privacy-Centric Approach to Body Camera Policymaking

By Ethan Thomas

Body-mounted cameras are being used by law enforcement with increasing frequency throughout the United States, with calls from government leaders and advocacy groups to further increase their integration with routine police practices. As the technology becomes more common in availability and use, however, concerns grow as to how more-frequent and more-personal video recording affects privacy interests, as well as how policies can both protect privacy and fulfill the promise of increased official oversight.

This Note advocates for a privacy-centric approach to body camera policymaking, positing that such a framework will best serve the public’s multifaceted privacy interests without compromising the ability of body cameras to monitor law-enforcement misconduct. Part I surveys the existing technology and commonplace views of privacy and accountability. Part II examines the unique privacy risks imposed by the technology as well as the countervailing potential for privacy enhancement, demonstrating the value of an approach oriented around privacy interests. Part III assesses how the failure to adopt this approach has resulted in storage policies for body camera footage that inhibit the technology’s ability to best serve the public and suggests that a privacy-centric perspective can lead to better policymaking. Finally, Part IV examines the flaws of prevailing views with respect to policies for accessing footage and discusses how a revised privacy-centric perspective could lead to better policies.

Does Brady Have Byte? Adapting Constitutional Disclosure for the Digital Age

By Hilary Oran

Under Brady v. Maryland and its progeny, prosecutors have a constitutional obligation to disclose any material evidence that may be favorable to the defendant. Despite a prosecutor’s best efforts to comply, there are inherent difficulties associated with identifying such documents. For instance, discerning what is “material” requires anticipating, before trial, how all the evidence will come together during trial. Further, finding this evidence may resemble the proverbial search for a “needle in a haystack” when the amount of evidence becomes copious. This search becomes even more daunting in an age of voluminous electronic discovery that spans from digital files to social media to e-mails, potentially amounting to over a million pages of documents.

This category of discovery was foreign to the judicial system at the time of Brady’s 1963 decision. However, despite the transformation of discovery since then, prosecutors’ constitutional disclosure obligations remain unchanged. Accordingly, there is currently no uniform approach to assessing potential Brady violations premised on high-volume electronic discovery. This Note will explore the current practices for adapting Brady for the digital age. Ultimately, this Note advocates for a new standard that requires prosecutors to adhere to recognized, minimum requirements when divulging a case file, but provides for circumstances in which a defendant’s limited resources require the prosecution to surpass this benchmark in order to fulfill its constitutional obligation.

Access Denied: Data Breach Litigation, Article III Standing, and a Proposed Statutory Solution

By Patrick Lorio

As businesses and individuals increasingly rely on electronic technology to facilitate transactions, hackers have taken advantage of the weaknesses of data security systems intended to protect sensitive information. As a result, hackers have gained access to individuals’ personal and financial information. American law, however, has been slow to catch up to the threat posed by data security breaches. Although breaches have become commonplace in the past decade, victims of data breaches are often denied their day in court. Instead, many federal courts find that plaintiffs who sue companies for failing to adequately protect their private information lack Article III standing, the constitutional doctrine that requires plaintiffs to show an “injury-in-fact” in order to sue in federal court. While some jurisdictions hold that hackers having access to individuals’ information is sufficient to confer Article III standing, other jurisdictions dismiss plaintiffs’ cases unless the plaintiffs can demonstrate unreimbursed financial loss directly attributable to the data breach, a very high bar to reach.

The purpose of this Note is threefold. First, I analyze the existing split within the U.S. Courts of Appeals with regard to the correct theory of Article III standing to apply in data breach cases. The circuit split primarily involves disputes over the correct interpretation of Clapper v. Amnesty International, a 2013 U.S. Supreme Court case dealing with the “imminency” requirement of Article III standing’s injury-in-fact component. Second, I predict what the recent holding in Spokeo v. Robins (2016) portends for data breach victims. Spokeo heightened the scrutiny that federal courts must place on the “concreteness” of injury in addition to the inquiry into “imminency.” Finally, I propose that the strict Article III standing requirements articulated by the Supreme Court in both Clapper and Spokeo necessitate action by Congress. I argue that Congress should pass a comprehensive data breach statute that would confer standing upon victims of data breach. I conclude by showing how a recent Third Circuit decision demonstrates the viability of a statutory solution to the problem encountered by data breach victims.

Tiebreaker: An Antitrust Analysis of Esports

By Max Miroff

Electronic sports (esports) offers a novel case study in how antitrust analysis should approach multi-sided markets that rely on the ability of numerous entities to access intellectual property (IP). A game publisher’s IP in its game allows for permissible monopolization, but also creates opportunities for anticompetitive IP misuse. Tournament organizers, teams, players, broadcasters, spectators, and advertisers all need access to publishers’ IP to participate in esports markets. As publishers vertically integrate into the downstream market for esports content in their games, they rationally seek to minimize competitive pressure from other entities in the market. A publisher can do this by using its IP monopoly in its game to dominate the downstream esports market in its game by, for example, refusing to license broadcast rights to independent tournament organizers. This Note argues that in order to promote consumer welfare through market competition, antitrust law should restrict game publishers from using IP rights in their games to monopolize the downstream esports market for those games. Because multi-sided markets that rely on access to IP and blur the lines between producer, intermediary, and consumer are likely to grow, the stakes for effective antitrust analysis in these markets will only continue to climb.

Part I introduces the esports industry and overviews how antitrust law can be used to shape more competitive markets for the benefit of esports consumers. Part II provides an economic analysis of esports in order to define antitrust-relevant esports markets in which enforcement could be appropriate. Part III outlines the structure of a tying claim against publishers that use their IP monopoly over their games to acquire or maintain a monopoly over esports content produced with their games. Part IV contends that a publisher’s IP rights should not insulate it from liability for downstream anticompetitive behavior. Part V argues that antitrust enforcement would be superior both to the creation of an independent esports governance body, because such enforcement would facilitate market solutions rather than top-down rulemaking, and to the creation of a fair use exemption for esports, because such an exemption would be comparatively overbroad.

Characterizing the Harms of Compromised Genetic Information for Article III Standing in Data Breach Litigation

By Terry Wong

As direct-to-consumer genetic testing has proliferated, individuals face a heightened risk of having their genetic information exposed in data breaches. In response to these breaches, individuals who turn to the federal courts as an avenue for recovery must overcome the legal barriers that have often frustrated victims in traditional data breach contexts. In particular, these plaintiffs have struggled due to the circuit split among the U.S. courts of appeals over whether certain harms are sufficient to confer Article III standing in data breach cases. While federal courts continue to debate the sufficiency of traditional data breach harms, compromises of genetic information raise exceptional considerations and harms that should favor the conferral of Article III standing.

This Note analyzes the implications of data breaches involving compromised genetic information that justify an expansive approach to the conferral of Article III standing. Part II of this Note surveys the growing prevalence of data breaches and discusses the common legal obstacles that victims face in seeking recovery against breached entities. Part III outlines the relevant Article III standing requirements and reviews the circuit split among the U.S. courts of appeals by focusing on the primary hurdle for data breach victims — establishing injury in fact. Part IV raises and analyzes the exceptional features and implications of data breaches involving genetic information. In doing so, this Part characterizes the potential harms resulting from genetic information compromise and discusses how they should impact the Article III standing analysis to satisfy the injury-in-fact requirement.

Algorithmic Harms to Workers in the Platform Economy: The Case of Uber

By Zane Muller

Technological change has given rise to the much-discussed “gig” or “platform economy,” but labor law has yet to catch up. Platform firms, most prominently Uber, use machine learning algorithms processing torrents of data to power smartphone apps that promise efficiency, flexibility, and autonomy to users who both deliver and consume services. These tools give firms unprecedented information and power over their services, yet they are little-examined in legal scholarship, and case law has yet to meaningfully address them. The potential for exploitation of workers is immense; however, the remedies available to workers who are harmed by algorithm design choices are as yet undeveloped.

This Note analyzes a set of economic harms to workers uniquely enabled by algorithmic work platforms and explores common law torts as a remedy, using Uber and its driver-partners as a case study. Part II places the emerging “platform economy” in the context of existing labor law. Part III analyzes the design and function of machine learning algorithms, highlighting the Uber application. This Part of the Note also examines divergent incentives between Uber and its users alongside available algorithm design choices, identifying potential economic harms to workers that would be extremely difficult for workers to detect. Part IV surveys existing proposals to protect platform workers and offers common law causes of action sounding in tort and contract as recourse for workers harmed by exploitative algorithm design.

Remedying Public-Sector Algorithmic Harms: The Case for Local and State Regulation via Independent Agency

By Noah Bunnell

Algorithms increasingly play a central role in the provision of public benefits, offering government entities previously unimaginable ways of optimizing public services, but they also pose risks of error, bias, and opacity in government decision-making. At present, many publicly-deployed algorithms are created by private companies and sold to government agencies. Given robust protections for trade secrets in the courts and feeble state open records laws, such algorithms, even those with fundamental flaws or biases, may escape regulatory scrutiny. If state and local governments are to avail themselves of the benefits of algorithmic governance without triggering its potential harms, they will need to act quickly to design regulatory systems that are flexible enough to respond to continual innovation yet durable enough to withstand regulatory capture. This Note proposes a novel regulatory solution in the form of a new, independent agency at the state or local level — an Algorithmic Transparency Commission — devoted to the regulation of publicly-deployed algorithms. By establishing such an agency, tailored to the needs of each jurisdiction, state and local governments can continue to enhance their efficiency and safeguard companies’ proprietary information, while also fostering a greater degree of algorithmic transparency, accountability, and fairness.

Commercial Free Speech Constraints on Data Privacy Statutes After Sorrell v. IMS Health

By Bastian Shah

Collection and use of big data drive the modern information economy. While big data can produce valuable innovations, it also comes with perils for consumers. In particular, consumers have little ability to protect their privacy online and are unnerved by the hyper-targeted advertising to which they are subjected. In response to these concerns, American states have begun enacting general data privacy laws similar to those passed in Europe. At the same time, the United States Supreme Court has grown wary of laws attempting to restrict companies from distributing and using data for advertising purposes. For instance, in Sorrell v. IMS Health, the Court found that a Vermont statute aimed at preventing targeted advertising by pharmaceutical manufacturers violated the commercial free speech doctrine. Since Sorrell, the constitutionality of data privacy statutes has been ambiguous.

This Note argues that data privacy laws that empower consumers to meaningfully protect their privacy by opting out of unwanted data collection do not violate the commercial free speech doctrine. Part II defines data privacy and summarizes the objectives current data privacy laws seek to achieve. Part III analyzes commercial speech jurisprudence before and after Sorrell and discusses the effect of Sorrell on commercial free speech jurisprudence and data privacy law. Part IV argues that government interest in empowering consumers by giving them meaningful choices in their online privacy is important enough to survive scrutiny under the post-Sorrell commercial free speech paradigm.

Price Gouging, the Amazon Marketplace, and the Dormant Commerce Clause

By Julia Levitan

This Note argues that states can regulate price gouging on the Amazon Marketplace without offending the dormant commerce clause. Part I provides an overview of state price gouging statutes and enforcement efforts. Part II examines the reported price gouging, including on the Amazon Marketplace, in connection with the COVID-19 pandemic. Part III explains the dormant commerce clause jurisprudence, with a particular emphasis on the doctrine’s application to state laws governing internet activities. Part IV considers the dormant commerce clause implications of regulating price gouging on the Amazon Marketplace and concludes that state price gouging laws can be enforced against both Amazon and its third-party sellers without violating the dormant commerce clause. Part IV also places the two enforcement targets in an optimal deterrence framework and identifies Amazon as the ideal regulatory target to effectuate robust enforcement of price gouging prohibitions.

Is This Video Real? The Principal Mischief of Deepfakes and How the Lanham Act Can Address It

By Quentin Ullrich

This Note argues that the false association cause of action under Section 43(a)(1)(A) of the Lanham Act is well-suited for addressing problems posed by deepfakes, and outlines for practitioners the mechanics of such a cause of action. A “deepfake,” which is a portmanteau of “deep learning” and “fake,” is a digitally manipulated, often highly realistic video that substitutes the likeness of one person with that of another. Due to the way they deceive their viewers, deepfakes pose a threat to privacy, democracy, and individual reputations. Existing scholarship has focused on defamation, privacy tort, copyright, regulatory, and criminal approaches to the problems raised by deepfakes. These legal approaches may at times be successful at penalizing the creators of pernicious deepfakes, but they are not based on a theory of consumer confusion, which this Note argues is the principal mischief posed by deepfakes. Further, since deepfakes are often uploaded anonymously and the only effective remedy is against website owners, certain of these approaches are frustrated by the Communications Decency Act’s immunization of website owners from liability for torts with a “publication” element. Hence, this Note proposes that the law of false association, which is principally concerned with consumer confusion, is best suited for addressing deepfakes. Importantly, a Lanham Act cause of action would allow victims of deepfakes to sue website owners under a theory of contributory infringement, because the Communications Decency Act does not immunize website owners from intellectual property claims.

Putting the Blindfolds on Driverless Panopticons

By Alastair Pearson

Autonomous vehicle (AV) deployment will radically reshape the relationship between Americans and their cars. A society which has long prized private car ownership will see riders transition to dramatically cheaper robotaxi services. Cities will regulate AVs in real time, using a sophisticated new regulatory technology called Mobility Data Specification (MDS). The widespread use of AVs owned by impersonal operators and regulated by municipal governments will bring to the fore privacy questions which were more easily ignored when cities were using MDS to regulate more niche modes of transportation like e-scooters. Mass adoption of AVs will elevate the stakes of Fourth Amendment concerns about the collection and analysis of anonymous geolocation data.

This Note aims to answer the important question of whether commercially deployed AVs can constitutionally be subjected to regulatory programs that mirror MDS as currently applied to the regulation of e-scooters. Robust scholarship is emerging about the scope of the concept of inescapability, first introduced in Carpenter v. United States, the Supreme Court’s most meaningful effort to erect guardrails around location data. Scholars are also exploring how the third-party doctrine undermines Fourth Amendment values, and the breadth of modern administrative search doctrine. This Note builds on these critiques and proposals to argue that the Fourth Amendment will impose limits on cities seeking to track real-time location data from AVs. AVs are likely to become inescapable, and the data collected from the public will be uniquely sensitive. If cities want the power to demand real-time data from AVs, they will need to rigorously justify their collection of such data and take concrete steps to anonymize it.
