Michael D. Cohen, the onetime fixer for former President Donald J. Trump, has revealed that he unwittingly provided his lawyer with inaccurate legal citations generated by the artificial intelligence program Google Bard. The false citations were included in a motion submitted to federal judge Jesse M. Furman. Mr. Cohen, who pleaded guilty to campaign finance violations in 2018, had asked for an early end to the court supervision imposed after his release from prison.
Mr. Cohen admitted in a sworn declaration that he had not kept up with emerging legal technology and did not realize that Google Bard was a generative text service. His lawyer, David M. Schwartz, included the bogus citations in the motion without verifying their accuracy. Mr. Cohen said he had thought of Bard as a “supercharged search engine” and had previously used it to find accurate legal information online.
The mishap has raised concerns about how emerging legal technologies such as Bard are altering the practice of law. As in other recent episodes, Mr. Cohen appears to have relied on inaccurate information generated by an AI program. Like his lawyer, he was unaware that the cases cited were not real.
The situation came to light when Judge Furman was unable to locate any of the three cited cases. That discovery prompted further scrutiny of Mr. Cohen’s conduct and of the repercussions it may have in cases where he is expected to be called as a witness. Lawyers for Mr. Trump seized on the revelation as a further indictment of Mr. Cohen’s character, but his current lawyer has defended him, saying that Mr. Cohen relied on his attorney and that the mistake was an honest one.
The incident is part of a wider trend in the legal profession, in which lawyers mistakenly rely on chatbots to conduct legal research and incorporate the resulting information into court filings. However helpful these tools may be, lawyers must exercise caution and diligence before relying on AI-generated information.