November 24, 2025

In a startling revelation, agents of U.S. Immigration and Customs Enforcement (ICE) have reportedly been using the AI chatbot ChatGPT to draft use-of-force reports, a category of law enforcement documentation that is meant to be meticulously accurate. The practice was detailed in a 233-page opinion issued by Judge Sara Ellis, which scrutinizes the agency's operational tactics under the current administration.
Judge Ellis's findings paint a grim picture of a law enforcement body cutting corners in ways that could distort the truth in official reports. Among the instances cited are agents who reportedly instructed ChatGPT to generate narratives from minimal input, sometimes a single sentence or a handful of images. This has raised serious concerns about the integrity and reliability of reports that are integral to judicial proceedings and public accountability.
Using AI to draft documents where exactness is paramount is inherently risky: large language models are known to fill gaps in sparse input with plausible-sounding but invented detail. According to the documents reviewed, agents at times gave the chatbot only scant information, meaning fabricated details could end up in official records. The practice not only undermines the credibility of the agents involved but also raises serious questions about constitutional rights and the justice system's dependence on accurate evidence.
Moreover, Judge Ellis's opinion documents discrepancies between agents' reports and body-camera footage. In one noted instance, agents claimed protesters had wielded bicycles and nail-studded shields as weapons, a claim the footage contradicted. The opinion also points to racial profiling: agents identified individuals as suspects based on clothing colors that were not associated with any criminal group.
This misuse of AI technology by ICE agents reflects a broader issue within the agency's culture and the administration's approach to law enforcement and immigration control. Relying on AI for such sensitive tasks points to a deeper malaise: a disregard for diligent, ethical governance and a preference for expedient but unreliable technological shortcuts.
As the debate over AI's role in society continues, this incident underscores the dangers of deploying AI systems in contexts to which they are ill-suited, especially without oversight and ethical guidelines. The misuse of AI in law enforcement, as demonstrated by ICE, exemplifies a hazardous overreliance on technology, with serious implications for civil liberties and the credibility of legal institutions.
While ICE has not publicly responded to these findings, their implications are far-reaching and should prompt a reevaluation of how AI is used in governmental processes. The episode stands as a cautionary tale about deploying AI in critical public-sector operations without stringent checks and balances.