When attorneys use artificial intelligence to write legal documents, they can end up introducing false information as evidence in court, sometimes deliberately and sometimes accidentally.
Six members of the Maryland justice system agreed in April that the negative consequences of using ChatGPT or another AI text generator for legal research and court presentations far outweigh the positives.
“When people show up in court in a variety of different situations with what they contend is evidence, it’s difficult for us to discern … whether these things are accurate or truthful,” E. Gregory Wells, chief judge of the Appellate Court of Maryland, said. “And so we rely on everybody not to be deceptive, but to tell us the truth. And I think that’s really a part of what we’re concerned about.”
The Annual Forum on the Judiciary, hosted by AACC’s Legal Studies Institute, featured four judges: Wells, Elizabeth S. Morris, Julie Rubin and John P. Morrissey. The panel also included Sandra F. Howell, a magistrate in the Circuit Court for Anne Arundel County, and Justice Jonathan Biran of the Supreme Court of Maryland.
Mary Bachkosky, a legal studies professor who facilitated the panel, said AI is increasingly common in many professions, including the legal field, which “is a particularly interesting topic, because evidence needs to be brought to trial. And it’s concerning that evidence could be [AI-generated].”
The panel discussed not only the legality of using AI to generate court documents, but also the ethics of doing so.
Morris said judges and attorneys are responsible for verifying that the evidence presented in court is truthful.
“When you present something to the court … judges rely on that information,” Morris said.