Nov 22 (Reuters) – A federal appeals court in New Orleans is proposing to require lawyers to certify either that they did not rely on artificial intelligence programs to draft their briefs or that a human reviewed the accuracy of any AI-generated text in their court filings.
The 5th U.S. Circuit Court of Appeals in a notice late Tuesday unveiled what appears to be the first proposed rule by any of the nation’s 13 federal appeals courts aimed at regulating the use of generative AI tools like OpenAI’s ChatGPT by lawyers appearing before it.
The proposed rule would apply both to lawyers and to litigants appearing before the court without counsel, and would require them to certify that, to the extent an AI program was used to generate a filing, its citations and legal analysis were reviewed for accuracy.
Lawyers who misrepresent their compliance with the rule could have their filings stricken and face sanctions, according to the proposed rule. The 5th Circuit is accepting public comment on the proposal through Jan. 4.
Lyle Cayce, the 5th Circuit’s clerk of court, said in an email that the court recognized attorneys and pro se litigants “would likely utilize AI in the future, and seeks public comments on the proposed rule addressing such use.”
The proposed rule came as judges nationwide grapple with the rapid rise of generative artificial intelligence programs like ChatGPT and weigh what safeguards the evolving technology may require in their courtrooms.
The pitfalls of lawyers using AI burst into the headlines in June, when two New York lawyers were sanctioned for submitting a legal brief that included six fictitious case citations generated by ChatGPT.
The 5th Circuit’s proposal followed the adoption of similar local rules and policies by some courts in its jurisdiction.
U.S. District Judge Brantley Starr of the Northern District of Texas in June became one of the first federal judges nationally to require lawyers to certify they did not use AI to draft their filings without a human checking their accuracy.
The U.S. District Court for the Eastern District of Texas in October announced a rule, effective Dec. 1, requiring lawyers who use AI programs to “review and verify any computer-generated content.”
In notes accompanying the rule change, the court said that “often the product of those tools may be factually or legally inaccurate,” and that AI technology “is never a replacement for abstract thought and problem solving” by lawyers.