
Lawyer cited six fake cases made up by ChatGPT; judge calls it “unprecedented”...

A lawyer is in trouble after admitting he used ChatGPT to help write court filings that cited six nonexistent cases invented by the artificial intelligence tool.

Lawyer Steven Schwartz of the firm Levidow, Levidow & Oberman "greatly regrets having utilized generative artificial intelligence to supplement the legal research performed herein and will never do so in the future without absolute verification of its authenticity," he wrote in a May 24 affidavit regarding the bogus citations previously submitted in US District Court for the Southern District of New York.

Schwartz wrote that "the use of generative artificial intelligence has evolved within law firms" and that he "consulted the artificial intelligence website ChatGPT in order to supplement the legal research performed."

The "citations and opinions in question were provided by ChatGPT which also provided its legal source and assured the reliability of its content," he wrote. Schwartz admitted that he "relied on the legal opinions provided to him by a source that has revealed itself to be unreliable," and stated that it is his fault for not confirming the sources provided by ChatGPT.

Schwartz didn't previously consider the possibility that an artificial intelligence tool like ChatGPT could provide false information, even though AI chatbot mistakes have been extensively reported by non-artificial intelligence such as the human journalists employed by reputable news organizations. The lawyer's affidavit said he had "never utilized ChatGPT as a source for conducting legal research prior to this occurrence and therefore was unaware of the possibility that its content could be false."

Federal Judge Kevin Castel is considering punishments for Schwartz and his associates. In an order on Friday, Castel scheduled a June 8 hearing at which Schwartz, fellow attorney Peter LoDuca, and the law firm must show cause for why they should not be sanctioned.

"The Court is presented with an unprecedented circumstance," Castel wrote in a previous order on May 4. "A submission filed by plaintiff's counsel in opposition to a motion to dismiss is replete with citations to non-existent cases... Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations."

The filings included not only names of made-up cases but also a series of exhibits with "excerpts" from the bogus decisions. For example, the fake Varghese v. China Southern Airlines opinion cited several precedents that don't exist.

"The bogus 'Varghese' decision contains internal citations and quotes, which, in turn, are nonexistent," Castel wrote. Five other "decisions submitted by plaintiff's counsel contain similar deficiencies and appear to be fake as well," Castel wrote.

The other five bogus cases were called Shaboon v. Egyptair, Petersen v. Iran Air, Martinez v. Delta Airlines, Estate of Durden v. KLM Royal Dutch Airlines, and Miller v. United Airlines.

Schwartz provided an excerpt from ChatGPT queries in which he asked the AI tool whether Varghese is a real case. ChatGPT answered that it "is a real case" and "can be found on legal research databases such as Westlaw and LexisNexis." When asked if the other cases provided by ChatGPT are fake, it answered, "No, the other cases I provided are real and can be found in reputable legal databases such as LexisNexis and Westlaw."
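For anyone wondering what actual verification looks like, it isn't asking the chatbot to vouch for itself. Below is a minimal sketch, in Python, of checking a case name against CourtListener, the Free Law Project's free opinion database. The endpoint path, the parameters, and the response fields here are my assumptions based on CourtListener's public REST search API; double-check them against the current API docs before relying on this.

```python
# Minimal sketch: does a cited case name appear anywhere in CourtListener?
# ASSUMPTIONS: the search endpoint path, the "type=o" (opinions) parameter,
# and the "count" response field follow CourtListener's documented REST API;
# verify against https://www.courtlistener.com/help/api/ before use.
import requests

SEARCH_URL = "https://www.courtlistener.com/api/rest/v4/search/"

def case_seems_real(case_name: str) -> bool:
    """Return True if searching for the case name yields at least one opinion."""
    resp = requests.get(
        SEARCH_URL,
        params={"q": f'"{case_name}"', "type": "o"},  # search opinions only
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("count", 0) > 0

for name in ("Varghese v. China Southern Airlines", "Shaboon v. Egyptair"):
    print(name, "->", "found" if case_seems_real(name) else "NO MATCH")
```

A hit here wouldn't prove a citation is good on its own, but zero matches across six separate cases would have been a deafening alarm, and it's essentially the same check Avianca's lawyers later ran by hand in Westlaw and PACER.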

The case that wasn't made up by an AI chatbot is Roberto Mata v. Avianca. Mata is seeking damages for injuries suffered during an Avianca flight from El Salvador to New York in August 2019, when a metal snack and drink cart struck his knee.

Mata's lawsuit against the airline was originally filed in a New York state court, but Avianca had it moved to federal court. Schwartz had been handling the case from the beginning but wasn't admitted to practice in the Southern District of New York. The legal team decided that Schwartz would keep doing the legal work while LoDuca, who was admitted to the federal court, handled the filings.

On March 1, LoDuca cited the fake cases in a brief opposing Avianca's motion to dismiss. The brief, apparently written by Schwartz despite bearing LoDuca's name, used the bogus citations to argue that the case should be heard in a New York state court, where a three-year statute of limitations would apply.

"In Shaboon v. Egyptair, the Illinois Appellate Court held that state courts have concurrent jurisdiction over claims arising out of an international airline accident under the Montreal Convention, and that the plaintiff was not required to bring their claim in federal court," the brief said, citing one of the bogus cases.

Confusion ensued as no one else could find the cited cases. Castel issued an order on April 11 saying that LoDuca "shall file an affidavit annexing copies of the following cases cited in his submission to the court," and that "failure to comply will result in dismissal of this action."

The plaintiff's lawyer continued to insist that the cases were real. LoDuca filed an affidavit on April 25 in which he swore to the authenticity of the fake cases, including one—Shaboon v. Egyptair—that he claimed was "an unpublished opinion." One day after the filing, Avianca's legal team wrote that "the authenticity of many of these cases is questionable." Since the cases don't exist, the defendant's lawyers were unable to find them using tools such as Westlaw, PACER, and Lexis Courtlink.

In Friday's order, Castel said that Schwartz may be sanctioned for "the citation of non-existent cases to the Court," "the submission to the Court of copies of non-existent judicial opinions," and "the use of a false and fraudulent notarization." Schwartz may also be referred to an attorney grievance committee for additional punishment.

Castel wrote that LoDuca may be sanctioned "for the use of a false and fraudulent notarization in his affidavit filed on April 25, 2023." The law firm could be sanctioned for "the citation of non-existent cases to the Court," "the submission to the Court of copies of non-existent judicial opinions annexed to the Affidavit filed on April 25, 2023," and "the use of a false and fraudulent notarization in the affidavit filed on April 25, 2023."

Schwartz's affidavit said that LoDuca "had no role" in performing the faulty research. Schwartz said he had no intent to deceive the court or the defendant and that he and LoDuca have never "been cited for any legal misconduct of any kind nor ever been sanctioned by this Court or any Court in over thirty years of practice."

LoDuca wrote in his own affidavit that he "did not personally conduct any of the legal research" and did not have any "personal knowledge of how the [research] was conducted." LoDuca said he has worked with Schwartz for over 25 years and "had no reason to doubt the authenticity of the case law" that Schwartz provided. LoDuca asked the judge to avoid issuing sanctions against him because "there was no bad faith nor intent to deceive either the Court or the defendant."

 
People need to stop listening to these Silicon Valley people when they tell tech reporters stuff like "our AI is so good it's going to destroy humanity"
 
The lawyer. I can't believe they wouldn't have at least fact-checked the generated text. For a friggin' court case. I can't imagine being so negligent.

They're both lawyers involved in this massive screwup. Schwartz shouldn't have relied on AI and LoDuca shouldn't have relied on Schwartz.
 
Maybe you could make use of AI-generated stuff... but you really would have to do a good job as an editor. I've seen no evidence you can just default to any of these AI solutions for really important work. They're just tools that, used correctly, can boost your productivity. It's like relying on an early self-driving car model to get everything right. We're not that far along.
 