July 23, 2025


Legal Ethics in the AI Era: Partner Fired After Citing Nonexistent AI-Generated Cases

In an ironic twist that reads like a cautionary tale, Danielle Malaty, a former partner at Goldberg Segalla, was terminated for her role in using an artificial intelligence tool to generate fictitious legal citations. Malaty, who had previously authored an article on the ethical considerations of AI in law, found herself at the center of a contentious legal debacle involving a $24 million verdict against the Chicago Housing Authority (CHA) over lead paint exposure.

The controversy began when Malaty included a citation to a non-existent Illinois Supreme Court case, *Mack v. Anderson*, in a legal filing. The case, purportedly perfect for CHA's argument, had been fabricated by the AI tool ChatGPT. The misuse came to light during a court hearing, where Malaty admitted she had not verified the AI-generated citation, an error compounded when three other Goldberg Segalla attorneys, including the final reviewer, Larry Mason, also failed to catch it.

Goldberg Segalla, which had a strict policy against using AI for legal drafting, conducted a thorough investigation following the incident. The firm concluded that Malaty had violated the policy twice over: first by using the tool at all, and then by failing to confirm the authenticity of what it produced. That lapse led to her dismissal from the partnership.

The implications of this case extend beyond the immediate legal team involved. It highlights a growing ethical quandary in the legal profession concerning the use of AI tools. While these technologies can enhance efficiency, they also pose significant risks if not used responsibly. The incident serves as a stark reminder of the importance of rigorous verification processes, especially as AI becomes increasingly embedded in legal practices.

Malaty's prior work on AI ethics, which focused on job displacement and bias rather than the potential for fabricated legal citations, is notable now precisely for omitting the risk that ended her partnership. Meanwhile, CHA continues to challenge the $24 million verdict, seeking either a favorable judgment or a new trial, leaving the legal community to ponder the proper role of AI in law and the safeguards needed to prevent such errors in the future.

This case not only calls into question the reliability of AI in critical legal processes but also underscores the need for a nuanced approach to technology adoption in law practices. As the legal industry grapples with these advancements, the balance between innovation and ethical responsibility remains a pivotal concern.