CLAIR supports research in Law & AI safety. We are building a field of legal scholars working to understand how law can reduce catastrophic and existential risk from advanced artificial intelligence.
CLAIR's mission is to help build Law and AI Safety as a scholarly field. We believe that law has a distinct role to play in ensuring that powerful frontier AI systems are developed in a safe, responsible manner. Many open questions remain about how it can do so. We hope to support legal thinkers, both inside and outside the academy, in answering them.
Leadership
Co-Director
Yonathan Arbel
University of Alabama School of Law
Co-Director
Peter Salib
University of Houston Law Center; Law & Policy Advisor, Center for AI Safety
Scholarship
Recent scholarly work on Law and AI Safety
What We Do
A retreat format designed for real writing progress: structured roundtables, long writing blocks, and space for collaboration, plus outdoor activities and shared meals to build a durable research network.
Retreat agenda and format →
CLAIR at Harvard Law School
CLAIR co-directors gave a student-facing talk hosted by the Harvard Law AI Student Association and the AI Safety Student Team, focusing on legal research and real governance problems with time for Q&A and concrete entry points for students.
Talks and student programming →
Inaugural Roundtable on AI Safety Law
A two-day scholarly roundtable at the University of Alabama School of Law convening legal academics and researchers working on governance for catastrophic and existential AI risk. The program spanned foundations, liability, alignment, litigation, rights, and international governance.
Read the program and themes →
Photos and short participant write-ups will be added after each event.
CLAIR hosts events for students in law and related fields who are interested in Law & AI Safety, including LunchGPT, a program to democratize legal inquiry into AI risk. Learn more →