Governing AI for a Safe(r) Future

CLAIR supports research in Law & AI Safety. We are building a field of legal scholars working to understand how law can reduce catastrophic and existential risk from advanced artificial intelligence.

CLAIR's mission is to help build Law and AI Safety as a scholarly field. We believe that law has a distinct role to play in ensuring that powerful frontier AI systems are developed safely and responsibly. Many open questions remain about how, and we hope to support legal thinkers both inside and outside the academy in answering them.

Leadership

Co-Directors

Yonathan Arbel

Co-Director

University of Alabama School of Law

Peter Salib

Co-Director

University of Houston Law Center; Law & Policy Advisor, Center for AI Safety

Scholarship

Selected Research

Recent scholarly work on Law and AI Safety

Yonathan Arbel

Systemic Regulation of Artificial Intelligence, Arizona State Law Journal (forthcoming). Read →

Peter Salib

AI Rights for Human Safety, Virginia Law Review (with Simon Goldstein) (forthcoming). Read →
AI Outputs Are Not Protected Speech, 101 Washington University Law Review (forthcoming). Read →
AI Will Not Want to Self-Improve, Lawfare Digital Social Contract Whitepapers (2024). Read →
View all research →

What We Do

Recent Activity

Feb 20–23, 2026

CLAIR Writers' Retreat

A retreat format designed for actual writing progress: structured roundtables, long writing blocks, and space for collaboration, plus outdoor activities and shared meals to build a durable research network.

Retreat agenda and format →
Sept 22, 2025

CLAIR at Harvard Law School

CLAIR co-directors gave a student-facing talk hosted by the Harvard Law AI Student Association and the AI Safety Student Team, focusing on legal research and real governance problems with time for Q&A and concrete entry points for students.

Talks and student programming →
April 25–26, 2025

Inaugural Roundtable on AI Safety Law

A two-day scholarly roundtable at the University of Alabama School of Law convening legal academics and researchers working on governance for catastrophic and existential AI risk. The program spanned foundations, liability, alignment, litigation, rights, and international governance.

Read the program and themes →

Photos and short participant write-ups will be added after each event.

For Students

CLAIR hosts events for students in law and related fields interested in Law & AI Safety, including LunchGPT — a program to democratize legal inquiry into AI risk. Learn more →