CLE Workshop: "Empowering the Public in Algorithm Governance"
Ngozi Okidegbe
Associate Professor of Law, Boston University School of Law
The Murphy Institute's Center on Law and the Economy hosts workshops each semester featuring both Tulane and guest faculty in law, economics, and political science who present their latest research in regulation, civil rights, the criminal legal system, and other key issues in law and the economy. Hosted by Adam Feibelman, Director of the Center on Law and the Economy and Sumter D. Marks Professor of Law at Tulane Law School, CLE workshops are open to faculty, students, and the Tulane community.
Ngozi Okidegbe is a Moorman-Simon Interdisciplinary Career Development Associate Professor of Law and Assistant Professor of Computing & Data Sciences. Her scholarship focuses on law and technology, evidence, criminal procedure, and racial justice. Her work examines how the use of predictive technologies in the criminal justice system impacts racially marginalized communities.
Professor Okidegbe is a Faculty Associate at the Berkman Klein Center for Internet & Society at Harvard University and an Affiliated Fellow at the Information Society Project at Yale Law School. She is also on the program committee of the Privacy Law Scholars’ Conference and serves on the advisory board for the Electronic Privacy Information Center.
Prior to joining Boston University, Professor Okidegbe was an Assistant Professor of Law at Cardozo School of Law, where she first joined as the inaugural Harold A. Stevens Visiting Assistant Professor in 2019. Before joining Cardozo, Professor Okidegbe served as a law clerk for Justice Mbuyiseli Madlanga of the Constitutional Court of South Africa and for the Justices of the Court of Appeal for Ontario. She also practiced at CaleyWray, a labor law boutique in Toronto. Professor Okidegbe holds a Master of Laws from Columbia Law School, where she graduated as a James Kent Scholar.
Professor Okidegbe’s articles have been published or are forthcoming in the Critical Analysis of Law, Connecticut Law Review, UCLA Law Review, Cornell Law Review, and Michigan Law Review.
ABSTRACT
The use of artificially intelligent algorithms in public sector decision-making is under immense scrutiny. Policymakers, activists, and scholars increasingly question why agencies and courts should have unfettered discretion to deploy privately developed algorithms that affect citizens’ rights, liberties, and opportunities without adequate democratic accountability. In recent years, media attention has brought to light the injustices faced by those subjected to inaccurate, biased, or procedurally unfair algorithmic predictions, igniting a nationwide movement to govern algorithmic decision-making in the public sphere. Many jurisdictions have begun heeding public pressure by passing regulations governing how the state procures, constructs, implements, and oversees algorithms in public sector decision-making – algorithmic governance. But as consensus grows to limit the massive power that courts and agencies have over algorithmic decision-making, one contentious piece of this puzzle remains: What should be the place of the public in governing algorithms? Current and proposed legal frameworks and institutional practices embrace an indirect democratic accountability approach that envisions the public’s place only as voters, litigants, or stakeholders.
Lawmakers have accepted the indirect accountability approach and have sought to empower the public through electoral politics and consultative processes (such as notice and comment) to bring state use of algorithms in line with public values and expectations. Though there are reasons for enthusiasm about this reform effort, missing from the conversation is a broader structural critique. The Article argues that the acceptance of the indirect accountability approach rests on two mistaken premises: first, that governing algorithms is a purely technical and technocratic affair, and therefore that direct control over algorithms should reside with the state and the private developers to which it delegates power; second, that the indirect accountability approach is sufficient for all members of the public, including the racially and otherwise marginalized communities most vulnerable to algorithmically facilitated harms yet least able to rely on courts, legislatures, or electoral politics to redress them. Left uninterrogated, the approach and the premises supporting it threaten to naturalize a place that deprives the public, especially those from the most marginalized communities, of the political and epistemic power needed both to challenge current harmful algorithmic use and to expand the field of authority over governing algorithms beyond legislatures, courts, and agencies.
This Article’s central claim is that the indirect accountability approach is neither inevitable nor neutral, nor is it without consequence. It surfaces a different approach to accountability: giving the public direct decision-making power over key aspects of algorithmic governance, shared with the state and private industry. By moving toward a direct accountability approach, the Article offers a path to formally recognize and empower the public as coauthors in the ongoing project of creating a more inclusive, responsive, and democratic iteration of algorithmic governance.