2026 Murphy-Tulane Research Symposium "AI: From Code to Consequences"


 


 

Artificial intelligence, and the broader digital revolution of the past 75 years, have transformed how people work, learn, and connect, while also introducing significant new challenges. The AI: From Code to Consequences symposium addressed the central questions raised by this transformation: What is the current state of these technologies, and how can their adoption be guided toward a more beneficial future?

 

Panel of speakers on stage at 2026 Murphy-Tulane Research Symposium
Left to right: Panel moderator Yi-Jen (Ian) Ho, Frank Pasquale, Gregg Weiss, Gal Oestreicher-Singer, Prasanna (Sonny) Tambe, and Kevin Hong.

 

Held April 17–18, 2026, the two‑day multidisciplinary symposium convened more than a dozen experts at the New Orleans Culinary and Hospitality Institute (NOCHI) and Tulane University’s Uptown campus. The event was hosted by The Murphy Institute in partnership with the Tulane School of Science and Engineering, the Tulane A.B. Freeman School of Business, the Tulane School of Architecture and Built Environment, Tulane Law School, the Tulane School of Professional Advancement, and the Tulane Connolly Alexander Institute for Data Science.

Through keynote addresses and panel discussions, participants engaged with the latest research on current and emerging AI technologies, examined AI’s impact on the labor market, and explored frameworks for the responsible and ethical design and deployment of AI systems.

 

Opening Keynote: "The Good, the Bad, and the Ugl-AI"

The symposium opened with a keynote address by Michael Littman, Associate Provost for Artificial Intelligence and University Professor of Computer Science at Brown University, moderated by Walter Isaacson, Leonard A. Lauder Professor of American History and Values at Tulane.

Littman began by describing AI as not just a technological problem but a societal one, explaining, "We released something incredibly powerful before we had clarity about how we wanted to live with it."

 

Michael Littman gives the Opening Keynote at the 2026 Murphy-Tulane Research Symposium.
Michael Littman, Associate Provost for Artificial Intelligence and University Professor of Computer Science, Brown University.

 

Drawing on examples ranging from AlphaFold’s breakthroughs in protein structure prediction to the widespread adoption of large language models, Littman highlighted the tension between benefit and risk.

 

“AI can be astonishingly helpful and deeply misleading at the same time. That tension is what makes this moment so challenging—and so important.”

 

Isaacson situated the AI moment within a longer historical arc of technological disruption, emphasizing its distinct impact on cognition and creativity.

 

“Every major technological revolution inspires fear, but this one is different because it reshapes how we think, not just how we work.”

 

Panel: "What Is AI? A Multidisciplinary Perspective"

The symposium's first panel brought together scholars from law, public administration, information systems, and philosophy to interrogate a deceptively simple question: what do we mean when we say "artificial intelligence"? Speakers emphasized that AI is not a monolithic entity but a constellation of tools, practices, and institutional choices shaped by human decision-making.

Ari Waldman, professor of law at UC Irvine, framed AI less as intelligence than as prediction: systems that marshal data about humans to make consequential judgments about human lives.

From the public sector, Sean McSpaden, former Oregon Deputy State CIO, described AI as a governance challenge as much as a technical one.

 

“Ultimately, these tools can assist public servants, but accountability for the final decision can never be delegated to a system.”

 

Panel discussion with five people on stage in front of a screen with code.
Left to right: Nick Mattei, Ari Waldman, Sean McSpaden, Ahmed Abbasi, and Susan Schneider.

 

Ahmed Abbasi, professor at the University of Notre Dame and co-director of the Human-Centered Analytics Lab, traced the evolution of AI from predictive to generative systems. He pointed to the shift in capability as AI now handles tasks once considered the exclusive domain of humans.

Philosopher Susan Schneider cautioned against hype‑driven narratives about intelligence and consciousness.

 

“We should stop obsessing over artificial general intelligence and instead focus on the very real epistemic power these systems already have over how we think and decide.”

 

Across disciplines, panelists agreed that AI's defining feature is not autonomy, but its embedding in social, legal, and institutional contexts.

 

Panel: "AI and the Labor Market—Disruption, Adaptation, and Opportunity"

A featured panel on day one explored how AI is already reshaping labor markets, with impacts that are uneven across sectors, skill levels, and communities.

Frank Pasquale, professor of law and policy scholar, cautioned against simplistic predictions.

 

“We need to be comfortable telling rival narratives. The future of work under AI isn’t deterministic—it’s radically indeterminate.”

 

He also emphasized the vulnerability and value of human-centered professions, pointing to forms of labor that depend on trust, meaning, and authenticity. 

Gregg Weiss, Palm Beach County Commissioner and former mayor, grounded the discussion in concrete policy experience, recalling the loss of 1,400 professional services jobs in a single county. He emphasized that adaptation cannot rest solely on individuals but must be supported by public policy.

From an educational perspective, Gal Oestreicher-Singer, vice provost and professor of business, reframed the question from job loss to skill transformation.

 

“The biggest change isn’t task replacement; it’s a redefinition of skills. Organizations aren’t lowering expectations; they’re raising them.”

 

Prasanna (Sonny) Tambe, professor at the Wharton School, highlighted the uneven burden of reskilling, noting that not all workers have the same ability to adapt at speed.

 

Panel: "Responsible and Ethical AI"

One of the symposium’s most wide‑ranging discussions focused on the ethical, legal, and governance challenges posed by AI systems already deployed in high‑stakes domains including healthcare, finance, and criminal justice.

Moderator John Leventis underscored the scale of the issue, stating that AI is already deciding who gets a loan, who gets parole, and who gets hired.

Throughout the discussion, panelists emphasized fairness, accountability, and transparency as unresolved and urgent challenges. Francesca Rossi, IBM Fellow and the company's global leader for AI ethics, emphasized realism over perfection. "The goal is not to eliminate bias," she explained, "but to be transparent about which biases exist and how they are being mitigated."

Anjana Susarla speaks into a microphone at a panel.

Anjana Susarla, professor at Michigan State University, warned that algorithmic systems often amplify existing inequalities.

 

“Digital platforms don’t just reflect social divides—they exacerbate them.”

 

Legal scholar Mark Geistfeld highlighted that accountability mechanisms already exist to establish responsibility through tort law, which can hold companies liable for harm caused by biased systems.

John Dickerson of Mozilla argued that openness is essential to meaningful oversight.

 

“Fairness and safety can’t be a black box. If we want accountability, the code and assumptions have to be visible.”

 

Despite differing views on regulation and open‑source models, panelists broadly agreed that ethical AI is not a barrier to innovation, but a prerequisite for durable and trustworthy systems.

 

Closing Keynote: "Agentic AI and the Pace of Change"

The first day concluded with a keynote conversation featuring Jon Krohn—host of the SuperDataScience podcast and co‑founder and CEO of Y Carrot—in discussion with John Renne, Henry Shane Professor in Real Estate and Urban Planning at Tulane.

Krohn emphasized that today’s agentic AI systems mark a qualitative shift, as models increasingly act autonomously to pursue defined goals. While acknowledging disruption, Krohn struck a cautiously optimistic note about work and opportunity.

 

“Your job isn’t necessarily doomed, but parts of it will change. The people who thrive will be the ones who learn to work with AI, not around it.”

 

Renne connected these ideas back to the mission of higher education, stating that “if AI can now do in minutes what once took months of grant‑funded research, the role of the university has to evolve.”

 

Left to right: Gary "Hoov" Hoover, Jon Krohn and John Renne
Left to right: The Murphy Institute Executive Director Gary "Hoov" Hoover, Jon Krohn, and John Renne.

 

On day two, programming continued on Tulane’s Uptown campus, where researchers presented work on emerging AI methodologies, governance frameworks, and interdisciplinary applications. 

 

Looking Ahead

The Murphy-Tulane Research Symposium remains a free and open-to-the-public forum, dedicated to exploring how research, policy, and practice intersect in shaping political economy. As the AI: From Code to Consequences symposium made clear, guiding AI toward a constructive future will require not only technical innovation, but sustained ethical reflection, institutional leadership, and public engagement.
