
Michael Cohen used fake cases created by AI in bid to end his probation

cigaretteman

HB King
May 29, 2001
78,136
59,967
113
Michael D. Cohen, a former fixer and lawyer for former President Donald Trump, said in a new court filing that he unknowingly gave his attorney bogus case citations after using artificial intelligence to create them as part of a legal bid to end his probation on tax evasion and campaign finance violation charges.



According to the filing, which was unsealed Friday, Cohen said he used Google Bard, an AI chatbot, to generate case citations that his lawyer could use to assist in making the case to shorten his supervised release. He pleaded guilty to the crimes in 2018 and had served time in prison.

Cohen said he gave those citations to one of his attorneys, David M. Schwartz, who then used them in a motion filed with a U.S. federal judge on Cohen’s behalf, the filing said.

Cohen’s admission comes after U.S. District Judge Jesse Furman of the Southern District of New York said in a Dec. 12 order that he could not find any of the three cases cited by Schwartz and asked for a “thorough explanation” of how these cases came to be included and “what role, if any” Cohen may have played in the motion before it was filed.


In the filing, Cohen wrote that he had not kept up with “emerging trends (and related risks) in legal technology and did not realize that Google Bard was a generative text service that, like Chat-GPT, could show citations and descriptions that looked real but actually were not.” To him, he said, Google Bard seemed to be a “supercharged search engine.”

Cohen added that at no point did Schwartz or his paralegal “raise any concerns about the citations” he’d suggested. “It did not occur to me then — and remains surprising to me now — that Mr. Schwartz would drop the cases into his submission wholesale without even confirming they had existed,” Cohen wrote.

Schwartz did not immediately return a request for comment.

The episode comes as Cohen is expected to play a prominent role in a Manhattan criminal case against Trump. It is also an indication of how common AI is becoming in legal casework, as a new generation of AI language tools makes its way into the legal industry.


According to Cohen’s filing, the mistake was caught by E. Danya Perry, a former federal prosecutor who is now representing Cohen in his effort to cut short his probation. Cohen said Schwartz made an “honest mistake,” and Perry has provided real case citations that make the case for why Cohen’s probation should be terminated.

This is at least the second instance this year in which a Manhattan federal judge has confronted lawyers over fake AI-generated citations. In June, two lawyers were fined $5,000 in an unrelated case in which they used ChatGPT to create bogus case citations.
 
There is a reason Trump hired him, and it wasn't because he was a super good lawyer. He was willing to go into dark places for Donald, and do the dirty things.
 

ChatGPT invented a sexual harassment scandal and named a real law prof as the accused

The AI chatbot can misrepresent key facts with great flourish, even citing a fake Washington Post article as evidence

By Pranshu Verma and Will Oremus
April 5, 2023 at 2:07 p.m. EDT

One night last week, the law professor Jonathan Turley got a troubling email. As part of a research study, a fellow lawyer in California had asked the AI chatbot ChatGPT to generate a list of legal scholars who had sexually harassed someone. Turley’s name was on the list.

The chatbot, created by OpenAI, said Turley had made sexually suggestive comments and attempted to touch a student while on a class trip to Alaska, citing a March 2018 article in The Washington Post as the source of the information. The problem: No such article existed. There had never been a class trip to Alaska. And Turley said he’d never been accused of harassing a student.

A regular commentator in the media, Turley had sometimes asked for corrections in news stories. But this time, there was no journalist or editor to call — and no way to correct the record.

“It was quite chilling,” he said in an interview with The Post. “An allegation of this kind is incredibly harmful.”

Turley’s experience is a case study in the pitfalls of the latest wave of language bots, which have captured mainstream attention with their ability to write computer code, craft poems and hold eerily humanlike conversations. But this creativity can also be an engine for erroneous claims; the models can misrepresent key facts with great flourish, even fabricating primary sources to back up their claims.



Today’s AI chatbots work by drawing on vast pools of online content, often scraped from sources such as Wikipedia and Reddit, to stitch together plausible-sounding responses to almost any question. They’re trained to identify patterns of words and ideas to stay on topic as they generate sentences, paragraphs and even whole essays that may resemble material published online.
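To make that "pattern-stitching" idea concrete, here is a deliberately tiny, hypothetical Python sketch of a bigram model. It is nothing like the neural networks behind ChatGPT or Google Bard, and the corpus, function names, and sample output are all made up for illustration; it only shows how text assembled purely from word patterns can read fluently while having no connection to facts.

```python
import random
from collections import defaultdict

# Toy "training" text -- a stand-in for the vast pools of online content
# the article describes. Everything here is illustrative only.
corpus = (
    "the court cited the case and the judge reviewed the case "
    "the lawyer filed the motion and the judge denied the motion"
).split()

# Learn which word tends to follow which (a bigram table).
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=12):
    """Stitch together text one word at a time from learned patterns."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
# e.g. "the judge reviewed the case and the lawyer filed the motion ..."
# Fluent-sounding, but nothing in the process checks whether any of it is
# true -- the same basic property that lets far larger models produce
# confident, fabricated citations.
```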
 