A project of the Conference of Chief Justices and the Conference of State Court Administrators, the RRT will map the current landscape of court orders, rules, guidance, and other initiatives from the state court community and the federal courts regarding AI and generative AI. The team will also consider whether current court rules are adequate to address generative AI use and assess the need to develop model rules or guidelines on practice or procedure for state courts to consider with respect to disclosure, transparency, accuracy, authenticity, confidentiality, and certification of generative AI use in court documents and proceedings.
RRT Publications
- Artificial Intelligence Guidance for Use of AI and Generative AI in Courts, August 2024
This document is intended to help courts get started on their GenAI journey. State court leaders are encouraged, if they have not already done so, to establish an internal work group to examine the impact of AI and GenAI on their courts and to develop a plan for moving forward.
- Preparing Your Court for AI: Eight Steps for Success, August 2024
This infographic outlines the eight steps that courts can take to successfully implement AI and contains links to helpful resources.
- Interim Guidance: Deepfakes, June 2024
Advances in AI tools make it easier and cheaper to enhance digital evidence and create deepfakes (convincing false pictures, videos, audio, and other digital information), creating evidentiary challenges in court proceedings. This guidance highlights how deepfakes can affect court proceedings and examines whether current evidentiary rules are sufficient to address them.
- Interim Guidance: Judicial and Legal Ethics Issues, May 2024
As courts continue to adopt and experiment with AI tools, they need to anticipate the ethical issues that arise from the use of these technologies. This guidance identifies key ways in which principles in the Model Code of Judicial Conduct (MCJC) and the Model Rules of Professional Conduct (MRPC) for lawyers are implicated when AI is used in the courts.
- Interim Guidance: Developing an Internal Use Policy, April 2024
For courts, developing an AI policy is crucial to ensuring the responsible and ethical use of AI. This guidance stresses the importance of establishing a governance working group, conducting self-assessments, and developing policies.
- Interim Guidance: Platform Considerations, March 2024
As courts continue to adopt and experiment with AI tools, leaders must understand how these technologies utilize information and data. This guidance recommends evaluating the use of information and data by understanding new AI terms and conditions, considering data governance issues, and adopting a team-based approach.
- Interim Guidance: Getting Started, March 2024
To understand and ultimately benefit from generative AI technologies, courts should consider experimenting with AI tools in ways that minimize risk and maximize learning. This guidance suggests that courts experiment with low-risk tasks using public data, employ a "human-in-the-loop" approach, ensure permission and understand the terms of use, and provide training to judges and court staff.
- Interim Guidance: Talking Points, February 2024
Courts must proactively address the efficient, effective, and ethical use of AI to promote the administration of justice. This guidance highlights challenges and concerns, including erosion of public trust, the need for ethical guidelines, and education on deepfakes.
AI Rapid Response Team Members
- Chief Judge Anna Blackburne-Rigsby (DC), Co-Chair
- Chief Justice Michael P. Boggs (GA)
- Chief Justice Matthew Fader (MD)
- Justin Forkner (IN), Co-Chair
- Stacey Marz (AK)
- Sara Omundson (ID)
- Justice Beth Walker (WV)
- Judge Joseph Zayas (NY)
- NCSC staff