With notoriously the highest drug prices in the world, the United States has failed to use every tool at its disposal to combat the unending upward trend. One such important tool is U.S. antitrust law, which targets companies that improperly charge monopoly and supracompetitive prices long past their original patent’s expiration. Some companies have found a way to game the regulatory approval system by suing would-be generic competitors and then, under the guise of settlement, paying them to delay their market entry, allowing a brand drug manufacturer to maintain its monopoly prices and continue raking in large profits. In FTC v. Actavis, the Supreme Court held that these reverse payment agreements — also known as pay-for-delay — can violate antitrust law even in light of existing patents. This Note argues that in In re Humira, an ongoing case examining reverse payments between biologic drug companies, the district court was right to engage in an Actavis analysis but did so improperly. In re Humira provides a prime opportunity to strengthen and clarify U.S. jurisprudence on reverse payments and market allocations, reducing ambiguity in an evolving pharmaceutical sphere: biologics and biosimilars. This Note further argues that to harmonize the antitrust treatment of pharmaceuticals — small molecule and biologic alike — both clear judicial standards and legislation are needed.
This Note proceeds in four parts. Part II discusses various forms of antitrust abuse that arise in the pharmaceutical sphere and that often accompany reverse payment agreements. It then presents the relevant legal and regulatory background of small and large molecule drugs. Part III considers the consequences of lax antitrust scrutiny of pharmaceuticals and finishes with an in-depth examination of the In re Humira litigation. Lastly, Part IV proposes a twofold solution, legal and legislative, to the problems posed by Actavis’s lack of legal clarity. Ultimately, the purpose of this Note is to demonstrate that the way a drug is manufactured, approved, or allowed to compete does not alter the application of antitrust law seeking to rid the market of collusive agreements between rivals.
Historically, the Internal Revenue Code has permitted itemizing taxpayers to deduct state and local tax (SALT) payments on their federal tax returns. While the SALT deduction has been adjusted and refined over the years, it has been a mainstay of the federal tax code. As of December 2017, taxpayers were entitled to deduct the full amount of state and local property tax payments, as well as their choice of either state and local income taxes or sales taxes. The Tax Cuts and Jobs Act of 2017 (TCJA) dramatically altered this provision by setting a $10,000 limit on the amount a taxpayer may deduct from her federal taxable income to account for all state and local tax payments. This $10,000 cap on SALT deductibility is scheduled to expire on December 31, 2025, by which time Congress will likely have readdressed the issue.
This Note proposes that the next iteration of the SALT deduction scheme should allow for full deductibility of state taxes, while retaining a cap on the deductibility of local taxes. This distinction between the treatment of state and local taxes would reflect the relative advantages of public administration at the state level. State-level funding and provision of public services strikes the optimal balance between the competing goals of local administration and redistributive spending. Instituting full state tax deductibility would incentivize a shift in the funding and provision of redistributive federal programs to the state level, and would further the goals of state autonomy and policy innovation. Moreover, reducing or eliminating local tax deductibility would increase the internal policy consistency of the Internal Revenue Code, mitigate the regressive nature of the SALT deduction, and help reduce residential income segregation.
Algorithms increasingly play a central role in the provision of public benefits, offering government entities previously unimaginable ways of optimizing public services, but they also pose risks of error, bias, and opacity in government decision-making. At present, many publicly deployed algorithms are created by private companies and sold to government agencies. Given robust protections for trade secrets in the courts and feeble state open records laws, such algorithms, even those with fundamental flaws or biases, may escape regulatory scrutiny. If state and local governments are to avail themselves of the benefits of algorithmic governance without triggering its potential harms, they will need to act quickly to design regulatory systems that are flexible enough to respond to continual innovation yet durable enough to withstand regulatory capture. This Note proposes a novel regulatory solution in the form of a new, independent agency at the state or local level — an Algorithmic Transparency Commission — devoted to the regulation of publicly deployed algorithms. By establishing such an agency, tailored to the needs of each jurisdiction, state and local governments can continue to enhance their efficiency and safeguard companies’ proprietary information, while also fostering a greater degree of algorithmic transparency, accountability, and fairness.
Erie Railroad Co. v. Tompkins requires federal courts to apply state substantive law in diversity suits. In determining the content of the relevant state law, federal judges tend to rely on decisions of the highest court of the relevant state. Yet decisions subsequent to Erie require federal judges to do more than mechanically apply prior state law decisions; rather, these judges must predict how the highest court of the state would rule on the legal issue today, thus reducing the possibility of divergent outcomes based on forum. This rule results in the occasional federal court prediction that, if faced with a given legal issue, a state’s highest court would deviate from its own prior decisions.
The purpose of this Note is to collect and analyze those cases in which federal judges have predicted deviations from established state law, pairing each such prediction with the subsequent state high court decision that either verified or rejected it. This Note then categorizes and tallies the various analytical methods federal judges used in making their decisions, with a table of the cases and their methods collected in Appendix I. First, this Note reviews the mid-century Supreme Court decisions that led to the modern predictive method and demonstrates how each federal circuit court applies that method. Next, it discusses problems with the predictive method identified in the scholarship, illustrated with examples from the collected cases. Finally, it analyzes the cases in which federal courts predicted deviations from established state law and suggests that, to improve the verification rate of their predictions of change, federal courts should predict such a divergence only when capable of making certain kinds of arguments.