October 8, 2025

Law Professor Chris Rudge of Sydney Law School has uncovered numerous errors and fabricated references in a report prepared by Deloitte, which the Australian government commissioned for 440,000 Australian dollars ($290,000). The report examined the implementation of automated penalties within Australia's welfare system and had been accepted by the Department of Employment and Workplace Relations.
Professor Rudge’s scrutiny began when he encountered a suspicious citation linked to his colleague, Lisa Burton Crawford. The reference to a non-existent book immediately raised red flags. “I instantaneously knew it was either hallucinated by AI or the world’s best kept secret because I’d never heard of the book and it sounded preposterous,” Rudge explained.
Further examination turned up around twenty errors, including entirely fabricated caselaw and fictitious quotes attributed to judges. According to Rudge, such inaccuracies went beyond academic sloppiness into legal misrepresentation. “They’ve totally misquoted a court case then made up a quotation from a judge,” Rudge stated, emphasizing the gravity of presenting falsified legal interpretations to government officials.
Following these allegations, Deloitte issued a revised version of the report, admitting to the use of Azure OpenAI and confirming inaccuracies in some footnotes and references. The firm said the core recommendations and substance of the report remained intact despite the corrections. Deloitte also agreed to refund part of the payment, describing the matter as "resolved directly with the client."
However, Deloitte's response has not quelled all concerns. Australian Senator Barbara Pocock has called for a full refund, criticizing the consultancy’s handling of AI tools. “Deloitte misused AI and used it very inappropriately: misquoted a judge, used references that are non-existent," Pocock remarked, likening them to the kinds of errors for which a university freshman would be severely reprimanded.
This incident has sparked a broader discussion on the ethical use of AI in professional services and the checks and balances necessary to ensure accuracy in publicly commissioned reports. As the debate unfolds, the integrity of AI-generated content in critical governmental documentation remains under scrutiny.