May 6, 2026

If you're a lawyer in Pennsylvania checking whether you need to disclose your use of AI tools in court documents, be cautious about relying on Google's AI summaries for the answer. Despite its confident tone, Google's AI has been delivering incorrect information about state ethics rules.
Christine Lemmer-Webber wittily describes this phenomenon as "Mansplaining as a Service": a blend of confidence and inaccuracy. For instance, a search for "Pennsylvania AI disclosure lawyers" might surface a Google AI-generated summary stating that, as of August 2024, Pennsylvania requires explicit AI disclosure in all court submissions. That statement isn't just misleading; it's flat-out wrong.
The Legal AI Governance tracker, a crucial resource maintained by Brian Alenduff, makes clear that Pennsylvania has no such statewide rule. A few individual courts have issued their own orders, and the state Supreme Court's guidelines apply only to court personnel. The Pennsylvania and Philadelphia Bar Associations did release a joint opinion in 2024 emphasizing AI competence under existing rules, but it is advisory, not binding.
The misinformation appears to originate in a blog post on Paxton.ai that incorrectly claimed Pennsylvania mandates AI disclosures. That post seems to have served as a primary source for Google's AI, which then propagated the error. Another site, Twinladder.ai, subsequently repeated the claim, likely reinforcing Google's error in a feedback loop.
This scenario exemplifies a broader problem with AI in legal contexts. Journalist Thomas Germain demonstrated the same dynamic when he satirically claimed to be a hot-dog-eating champion and AI systems parroted the claim: left unmonitored, AI perpetuates and amplifies inaccuracies.
The problem extends beyond mere technical errors. AI's capacity to absorb and regurgitate false information carries serious consequences in fields like law, where precision and accuracy are paramount. As AI is increasingly used to streamline legal work, there is a real risk that lawyers will over-rely on these tools without sufficient scrutiny, leading to misguided legal decisions.
So while AI can provide a useful starting point, legal professionals must approach its output with skepticism and verify the information independently. The allure of fast, authoritative-sounding AI summaries is no substitute for thorough, human-led research. As the ongoing discussions around AI and legal ethics make clear, technology should aid the legal process, not undermine it by accelerating the spread of misinformation.