July 29, 2025

In a startling revelation that sounds more like legal fiction than reality, the case of Jordan v. Chicago Housing Authority has exposed a deep-seated problem with the use of AI in legal research. The plaintiffs have filed a motion for sanctions after discovering that the defense, represented by Goldberg Segalla, repeatedly cited non-existent cases, including the now-infamous Mack v. Anderson. That initial discovery led to a broader investigation that revealed a troubling pattern of misinformation.
The issue came to light when Goldberg Segalla submitted a post-trial motion that cited non-existent, AI-generated cases. This prompted the plaintiffs to conduct a thorough review of the firm's other filings. What they found was a litany of false citations and fabricated case law, suggesting a systemic reliance on unverified AI output in the firm's pleadings.
The gravity of the situation is underscored by the case itself, which involves a $24.1 million verdict in favor of two children who suffered irreversible brain damage from lead poisoning. Instead of accepting the verdict, the defense sought to overturn it, employing AI tools that generated fictitious legal precedents.
The plaintiffs' legal team has now provided a table of these errors, which includes numerous faulty case citations and invented case quotations, with the fictitious Mack v. Anderson case appearing multiple times. Astonishingly, these errors were not limited to critical filings; even a mundane motion for an extension contained citations to non-existent authorities.
This pattern of misconduct raises serious questions about the oversight and verification processes at Goldberg Segalla. While the firm had previously severed ties with partner Danielle Malaty, who was directly linked to the initial fake citation, the continuing errors suggest a broader problem with the firm's handling of AI tools.
Legal experts and the judiciary are now grappling with the implications of AI in legal research, where reliance on unverified AI-generated information can lead to significant judicial errors and ethical breaches. This case may serve as a critical juncture for reevaluating how AI technologies are integrated into legal practice, ensuring that their outputs are used responsibly and verified rigorously.
As the legal community watches closely, the unfolding scenario in Jordan v. Chicago Housing Authority may lead to stricter guidelines and standards designed to prevent such failures in the future. The case not only highlights the potential pitfalls of AI in legal settings but also serves as a cautionary tale about the importance of human oversight when adopting new technologies.