April 20, 2026


Legal Eagles Beware: The Hidden Risks of Using ChatGPT in Law Practice

In today's fast-paced legal environment, the allure of quick answers via generative AI tools like ChatGPT is undeniable. However, the convenience of these tools comes with substantial risks that many in the legal profession are overlooking.

The primary concern lies in the potential for inadvertent disclosure of confidential client information. Lawyers, under pressure and facing tight deadlines, may input sensitive data into ChatGPT without adequate safeguards, assuming that privacy settings like "do not train" or temporary chats will protect client confidentiality. Unfortunately, these measures fall short of the stringent standards required to safeguard such information.

The use of public AI tools poses significant ethical challenges. These tools generally offer no contractual commitment to keep the data entered into them confidential. And even when a provider pledges not to train on user data, most retain user interactions for a period; OpenAI, for example, retains ChatGPT conversations, including deleted and temporary chats, for up to 30 days, during which they could potentially be accessed or monitored. Because the provider controls the servers and infrastructure, the data is never truly in the lawyer's control once it has been entered into the system.

From an ethical standpoint, the American Bar Association's Model Rule 1.6(c) requires lawyers to make reasonable efforts to prevent the inadvertent disclosure of, and unauthorized access to, information relating to the representation of a client. Using AI tools without specific protective measures does not typically meet this requirement. There is also the practical concern of preserving the attorney-client and work product privileges, which could be jeopardized if confidential materials are not adequately protected from potential discovery.

The risks are compounded by a decline in awareness and vigilance over time. Early warnings about the dangers of placing sensitive material in the hands of AI have been overshadowed by other concerns, such as the accuracy and reliability of AI responses. This shift in focus can breed a dangerous complacency in the handling of client information.

To mitigate these risks, lawyers should follow stringent guidelines when using AI tools: avoid including any client-specific information, frame questions as hypothetical scenarios that do not allow third parties to infer the client's identity, and always weigh the potential discovery-related risks involved. A simple yet effective rule of thumb is the New York Times test: if the information entered into an AI tool appeared on the front page of tomorrow's paper, would client confidentiality be compromised?

In conclusion, while AI tools like ChatGPT can offer real gains in efficiency and access to information, lawyers must tread carefully. They cannot rely on a tool's privacy settings alone to fulfill their ethical responsibilities to clients. Instead, they should seek more robust protections and remain vigilant to ensure that reliance on technology does not lead to ethical lapses or breaches of client confidentiality. As always, when in doubt about the security of client information, err on the side of caution. Let's be careful out there.