
Ohio State Technology Law Journal

Most Recent Print Issue
Fall 2024
Volume 21, Issue 1
Article
Giulia G. Cusenza
Litigating Governmental Use of AI

In the last decade, courts across the country have ruled on cases related to the use of AI by governmental bodies. But while legal disputes have served as a trailblazer for relevant policy documents and have been used by scholars to support specific arguments, this litigation has not been the subject of a systematic analysis. This paper fills that gap and provides a quantitative and qualitative study of how courts deal with litigation over the use of AI by governmental bodies. The analysis leads to an overarching conclusion: judicial decisions rely almost exclusively on procedural grounds, specifically those concerning due process infringements, suggesting that substantive issues are typically addressed through procedural solutions. In turn, these procedural issues consist of six violations: lack of adequate notice and explanation, lack of contestability, lack of human oversight, lack of notice-and-comment procedures, lack of assessment procedures, and denial of the right to access information. By revealing this tendency and identifying the six procedural violations, the analysis ultimately provides a taxonomy of the minimum requirements with which any governmental body should comply to shield its use of algorithmic systems from judicial review.

Article
Noa Mor
Reduction: AI, Power, and Opacity In Content Moderation's Backstage

In recent years, social media platforms have been quietly transforming the content moderation framework by increasingly relying on the AI-driven reduction of content visibility rather than on its outright removal. Initially applied to clickbait, misinformation, and sensitive content, reduction is now used by these platforms to limit users’ exposure to information across all content categories. Among other types of content, this moderation strategy is now applied to the vast realm of content that borders on the platforms’ removal policies (but does not violate them) or is likely to violate them (but whose violation is not confirmed). It thereby elevates the entire normative threshold for permissible content and erodes the scope of information available to users. Alongside its widespread application, reduction’s impact stems from its efficacy in limiting views and from its flexible and multifaceted nature. Unlike removal, reduction employs various methods, including downranking content, adjusting the recommendation system, excluding content from dominant areas of the platform, combining reduction with other sanctions, integrating designated choice architecture, and outsourcing reduction options to users.

Despite its far-reaching implications for the informational landscape, digital platforms implement reduction using patchwork, short, and opaque guidelines of doubtful legitimacy. The platforms also fail to adequately provide data concerning reduction through their Transparency Reports, to inform sanctioned users, to offer explanations, or to allow appeals. The unaccountable and sweeping application of reduction also undermines the rule of law, procedural fairness, freedom of expression, and other human rights. Its undisturbed development relies, to a great extent, on diverting our attention toward a more celebrated direction: removal and the policies governing it, which offer a more detailed, carefully updated, and publicly scrutinized measure for guiding behavior. This Article aims to cast light on the evolution and application of reduction, its dramatic impact, and the way it is concealed in the backstage of content moderation. It also examines the legitimacy of the motivations behind reduction and the legal and AI-related challenges it poses. Finally, the Article offers a way forward, outlining how we can tackle reduction’s challenges while harnessing its sophisticated nature to benefit our future digital sphere.

Article
Hannah E. Jankunis
Skinny Without Intent To Induce

For decades, the American pharmaceutical industry relied on the sturdy foundation of the Hatch-Waxman Act, legislation that prioritizes innovation, affordability, and consumer well-being. However, in 2021, the Court of Appeals for the Federal Circuit forged a new path forward, threatening the traditional understanding of “intent” in what appeared to be a straightforward induced infringement case, GlaxoSmithKline v. Teva Pharm. USA, Inc. The court indicated that it was mechanically implementing established law, but a mere hairline fracture in application reverberated across the pharmaceutical field and attracted criticism for destroying the balance defined in Hatch-Waxman.

This Note explores the court’s analysis of the “intent” prong of induced infringement and concludes that a flawed application of the law unjustly penalized Teva Pharmaceuticals and revised the established understanding of “intent” as an element of induced infringement. The legal guessing game that now permeates the American pharmaceutical industry calls on the Supreme Court to provide clarity and direction, in hopes of once again better serving the American people in need of affordable, life-changing drugs.

Article
Mackenzie K. Kneiss
Carbon Sequestration Technology and the Climate Crisis: Could Corporations "Takeback" Their Emissions?

Global warming is one of the biggest challenges facing modern society. However, energy policy and environmental technology have been stagnant, particularly in the United States. Carbon sequestration technology is a prime example of this stagnation. Originally hailed as a saving grace, the technology has not advanced at the rates scientists and industry alike hoped for. Carbon sequestration refers to any method by which carbon is extracted from the atmosphere and placed back into the biosphere, traditionally the realm of plants. But carbon sequestration technology, typically referred to as carbon capture technology, promises to give humans the ability to sequester carbon on a massive scale. The technology, however, has been unable to perform at that scale. Legislation has been passed that aims to incentivize further development of the technology, most notably the Inflation Reduction Act, but meaningful progress has remained stubbornly out of reach.

This Note considers the advantages and disadvantages of carbon sequestration technology. It then offers an analysis of a carbon takeback obligation in the context of the United States regulatory and legislative environment. A carbon takeback obligation would require producers of fossil fuels to have extended responsibility for the waste their products create. Specifically, producers would be required to sequester an increasing percentage of the emissions created by their products. Such a scheme has been received favorably by politicians and researchers in other countries but has yet to gain traction in the United States. Nonetheless, with growing public support for comprehensive climate change measures, the current political landscape may present an opportune moment for the government to pursue more ambitious policies.

Article
Aliah Richter
A Source of Deference or Interference?: An Investigation of the Impact of Medical Artificial Intelligence on the Standard of Care

Artificial intelligence’s (AI’s) capacity to optimize patient health outcomes has attracted significant attention within the health care industry. However, the known risks of employing AI in clinical practice prompt questions about who should be held liable when patient care goes awry. Traditional doctrines of tort liability may be inadequate to resolve these questions, and modifications to the standard of care have been suggested. To assess physicians’ liability risk with the advent of AI, and to understand how the interests in optimizing clinical care, redressing patient harms, and incentivizing technological innovation can best be balanced, this Note analyzes the legal and policy implications of different standard-of-care modifications. To start, this Note discusses the importance of understanding the standard of care and how it may change with the adoption of medical AI. This Note then evaluates three ways AI may modify the standard of care and concludes that the standard should preserve providers’ choice to rely, or not rely, on AI and to scrutinize its recommendations, as well as require review of AI-based systems before their application in clinical practice. Recognizing that physicians may struggle to properly review AI tools, this Note ends by offering recommendations to aid this critical requirement.