74 Am. U. L. Rev. 1497 (2025).
Abstract
In stark contrast to the recently revoked federal policy meant to incentivize the adoption of safe, secure, and trustworthy artificial intelligence systems, the July 2025 Winning the Race: America’s AI Action Plan succinctly summarizes its approach as “Build, Baby, Build!” Public discourse likewise vacillates between treating artificial intelligence (AI) as a savior and as an existential threat to society. To make effective decisions about AI governance, policymakers and regulators must recognize AI systems’ inherent weaknesses. Whether citizens have recourse against harmful AI applications will also turn on how courts balance the competing interests of technological advancement against complicated and interrelated social policy goals. This Article pursues a reasonable balance, arguing that generative AI systems applied to critical areas of life should not be insulated from review by self-designed opacity and civil procedure impediments. It proposes the judicial adoption of a rebuttable presumption of AI malfunction, a presumption that certain forms of AI are biased and opaque, thereby placing the duty to explain AI decision making on those most able to provide that information. At the same time, the presumption balances policy concerns about preserving the benefits of AI by giving developers and vendors the right to rebut it. The proposed rebuttable presumption of AI malfunction builds upon Supreme Court precedent using presumptions to structure obligations of proof: to implement everyday understandings more efficiently according to common sense and probability, to advance justice and fairness, to promote judicial efficiency, and to fairly adjudicate the unknown.
This Article defines a rebuttable presumption of AI malfunction that addresses AI bias and flaws, applicable when (1) an advanced AI system (2) is opaque and (3) is used to make consequential decisions affecting individuals. Finally, the Article situates the proposal within the context of allegations of disparate impact discrimination, shows how the AI malfunction presumption leads to an AI disparate impact rebuttable presumption, and analyzes a current AI employment discrimination case alleging disparate impact under the proposed presumptions.
* Professor Emerita of Business Law, Virginia Tech.
** Assistant Professor, University of Nebraska Omaha. J.D., Creighton University School of Law, 2011; MBA, University of Nebraska at Omaha, 2007.
The Authors wish to thank the participants at the 2024 Data and Ethics Law Colloquium and at the 2024 Academy of Legal Studies in Business for their helpful and insightful comments.