Michael Cohen, the former lawyer of Donald Trump, admitted to using generative AI, specifically Google Bard, to produce fabricated case citations in a legal filing. The admission came as part of Cohen's effort to end his probation stemming from tax evasion and campaign finance violations. Cohen, who is no longer licensed to practice law, said he mistook Google Bard for a supercharged search engine and was unaware of its generative text capabilities. The AI-generated citations appeared in a motion filed by Cohen's attorney, David Schwartz, prompting a judge's order questioning the nonexistent cases.
In the order, the judge stated that he could not locate any of the three cases Schwartz had cited and demanded an explanation. In response, Cohen said he had not kept up with emerging trends and risks in legal technology and had believed Google Bard to be a reliable search engine. Cohen also sought to distance himself from the error, saying that Schwartz never raised concerns about the citations and that he was surprised his legal team had included the nonexistent cases without verifying them. The incident underscores the growing role of generative language models such as Google Bard and ChatGPT in legal proceedings, raising concerns about both ethics and accuracy.
The use of AI in legal work is a global trend: in a recent case in Germany, a regional court suspected law firms of leveraging AI to recruit plaintiffs for mass proceedings. Nor is Cohen's incident isolated; lawyers have previously been fined for submitting briefs containing fictitious case citations generated by ChatGPT. As the legal industry grapples with integrating AI, greater awareness and vigilance are needed to ensure that AI-generated content in legal documents is accurate and ethically used.