Ethical Rules for Using Generative AI in Your Practice | Model Rule 1.6: Confidentiality

Steve Herman is special counsel in Fishman Haygood’s Litigation Section. He has served on the American Association for Justice (AAJ) Board of Governors since 2014 and currently serves as Chair of the AAJ’s AI Task Force. He also serves on the standing Louisiana State Bar Association (LSBA) Rules of Professional Conduct Committee and has given numerous presentations on the use of AI in the legal profession. In this biweekly series, he identifies ethical rules for generative AI usage in law practice. Read his previous analysis of ABA Model Rules 5.1 & 5.3: Supervision of Associates and Non-Lawyer Assistance here.


At the risk of stating the obvious, we are still in the early days of what we believe to be an “AI Revolution” in the way that goods and services, including legal services, are and will be provided. This means that we do not, at this point, have much in the way of formal guidance.*

With that preface, in this series we will examine some of the Professional Rules[i] and other legal requirements that could potentially be implicated by a law firm’s use (or non-use) of ChatGPT or other Generative AI (GAI). Last time, we discussed the importance of establishing, periodically reviewing, and enforcing internal policies and protocols regarding the use—and/or limitations and restrictions on the use—of ChatGPT and other AI products by lawyers and other employees at the firm. One reason for this precaution is the issue of confidentiality, which brings us to our fourth rule.

Model Rule 1.6: Confidentiality

Perhaps the most serious concerns that have been raised regarding the use of ChatGPT and other AI systems surround the security of privileged and other legally protected information. Under Model Rule 1.6, an attorney is not only generally prevented from disclosing “information relating to the representation of a client,” but is also charged with an affirmative duty to “make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client.”[ii]

Using ChatGPT to analyze a client’s legal documents that contain privileged or other confidential information can pose a risk that such information could be misused or exposed.[iii] Generative AI programs that are ‘self-learning’ continue to develop responses as they receive additional inputs, adding those inputs to their existing parameters. The use of these kinds of programs creates a risk that client information may be stored within the program and revealed in response to future inquiries by third parties.[iv]

In March of 2023, for example, a data leak at ChatGPT allowed users to view the chat history titles of other users.[v] Even absent such data breaches, chat histories can be accessed and reviewed by employees of OpenAI and other generative AI companies and may also be provided to third-party vendors and affiliates.[vi]

In addition to attorney-client privileged information and/or work product, one also must be cognizant of other legal protections and requirements that might apply to client information, including:

  • HIPAA (Health Insurance Portability and Accountability Act of 1996)[vii]
  • The European Union’s General Data Protection Regulation (GDPR)[viii]
  • The California Consumer Privacy Act (CCPA)[ix] (and/or other State Privacy Laws)
  • Trade Secret Protection[x] (which may be compromised by “disclosure” to the AI service)
  • Contractual Non-Disclosure Agreements and Obligations

The Florida Ethics Opinion regarding the use of Generative AI advises that existing ethics opinions regarding prior technological advances (such as cloud computing, electronic storage disposal, remote paralegal services, and metadata) have “addressed the duties of confidentiality and competence and are particularly instructive” and generally conclude that a lawyer should:

  • Ensure that the provider has an obligation to preserve the confidentiality and security of information, that the obligation is enforceable, and that the provider will notify the lawyer in the event of a breach or service of process requiring the production of client information;
  • Investigate the provider’s reputation, security measures, and policies, including any limitations on the provider’s liability; and
  • Determine whether the provider retains information submitted by the lawyer before and after the discontinuation of services or asserts proprietary rights to the information.[xi]

The California Practical Guidance for the Use of Generative Artificial Intelligence reinforces this responsibility and further suggests that a lawyer who intends to use confidential information in a generative AI solution should anonymize client information as well as “ensure that the provider does not share information with third parties or utilize the information for its own use in any manner, including to train or improve its product.”[xii] These measures should include consulting with an IT professional as well as reviewing the program’s Terms of Use.
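For illustration only, the anonymization step the California guidance contemplates can be sketched in a few lines of Python. The patterns and names below are hypothetical examples, and no pattern list of this kind substitutes for review by the responsible lawyer and the firm’s IT professional; it is meant only to make concrete what “anonymize client information” might look like before any text is submitted to a generative AI tool:

```python
import re

# Illustrative only: scrub obvious identifiers from text before it is
# submitted to any generative AI tool. These patterns are hypothetical
# examples, not a complete or reliable anonymization scheme.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),                    # U.S. Social Security numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),            # email addresses
    (re.compile(r"\b\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"), "[PHONE]"),  # U.S. phone numbers
]

def anonymize(text: str, client_names: list[str]) -> str:
    """Replace known client names, then common identifier patterns."""
    for name in client_names:
        text = re.sub(re.escape(name), "[CLIENT]", text, flags=re.IGNORECASE)
    for pattern, label in REDACTIONS:
        text = pattern.sub(label, text)
    return text

sample = "Jane Roe (SSN 123-45-6789, jroe@example.com) retained our firm."
print(anonymize(sample, ["Jane Roe"]))
# -> [CLIENT] (SSN [SSN], [EMAIL]) retained our firm.
```

Note that simple pattern matching will miss identifiers it was never told about (addresses, case numbers, the names of family members), which is one reason the guidance pairs anonymization with consultation and review of the provider’s terms rather than treating it as sufficient on its own.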

In the Terms of Use dated March 14, 2023, OpenAI advised that:

If you use the Services to process personal data, you must provide legally adequate privacy notices and obtain necessary consents for the processing of such data, and you represent to us that you are processing such data in accordance with applicable law. If you will be using the OpenAI API for the processing of “personal data” as defined in the GDPR or “Personal Information” as defined in CCPA, please fill out this form to request to execute our Data Processing Addendum.[xiii]

The updated Terms of Use, promulgated in November of 2023 and effective as of January 31, 2024, simply state that:

You are responsible for Content, including ensuring that it does not violate any applicable law or these Terms. You represent and warrant that you have all rights, licenses, and permissions needed to provide Input to our Services.[xiv]

Anthropic’s Acceptable Use Policy for Claude similarly prohibits users from “violating any natural person’s rights, including privacy law” as well as “inappropriately using confidential or personal information.”[xv]

Natalie A. Pierce and Stephanie L. Goutos of the law firm Gunderson Dettmer note that challenges to the responsible use of GAI systems are actively being addressed by legal entities, from academic institutions to law firms, through methods such as “employee training, AI governance policies, and the formation of specialized AI task forces.” The authors emphasize the importance of recognizing existing countermeasures that aim to help mitigate risks associated with confidentiality concerns, while the framework for a lawyer’s responsible AI use continues to develop. For example, OpenAI’s April 2023 policy change allows users to disable chat history in ChatGPT. The company’s August 2023 update introduced an “enterprise-focused model that offers enhanced security protocols, sophisticated data analysis, and bespoke customization capabilities.” As artificial intelligence technology continues to evolve, Pierce and Goutos predict that a “majority of law firms and organizations will adopt custom experiences powered directly into their own applications, as well as prohibit the input of any confidential information into public GAI tools, which will substantially alleviate breach of confidentiality concerns.”[xvi]

A lawyer’s affirmative duty to reasonably communicate with his or her client is also implicated in this context. Model Rule 1.4 requires an attorney to “reasonably consult with the client about the means by which the client’s objectives are to be accomplished” and to explain relevant matters “to the extent reasonably necessary to permit the client to make informed decisions regarding the representation.”[xvii] To the extent use of ChatGPT and other AI services in connection with the representation of a client is contemplated, it is therefore important to discuss the potential risks and benefits with the client, so that an informed decision can be made.[xviii]


Next time, Herman reviews concerns over potential copyright (and patent) issues.

*On July 29, 2024, the ABA issued formal guidance for the use of GAI. Like much of the previous guidance and commentary, the ABA focused on (i) Competence, (ii) Confidentiality, (iii) Communication with Clients regarding the Use of AI, (iv) Candor Toward the Tribunal, (v) Supervisory Responsibilities, and (vi) the Reasonableness of Fees. Read more here.


[i] ABA Model Rules of Professional Conduct

[ii] ABA Model Rule of Professional Conduct 1.6(a) and (c)

[iii] Mostafa Soliman, Navigating the Ethical and Technical Challenges of ChatGPT 2023 (available at: https://nysba.org/navigating-the-ethical-and-technical-challenges-of-chatgpt/)

[iv] Florida Advisory Opinion 24-1

[v] Andrew Tarantola, OpenAI Says a Bug Leaked Sensitive ChatGPT User Data 2023 (available at: https://www.engadget.com/openai-says-a-bug-leaked-sensitive-chatgpt-user-data-165439848.html)

[vi] Open AI Privacy Policy

[vii] 42 U.S.C. §§ 1320d, et seq., and 45 C.F.R. §§ 164.500, et seq.

[viii] Available at: https://gdpr.eu/tag/gdpr/

[ix] Cal. Civ. Code, §§ 1798.100, et seq.

[x] See, e.g., 18 U.S.C. § 1839(3).

[xi] Florida Advisory Opinion 24-1

[xii] California Practical Guidance for the Use of Generative Artificial Intelligence

[xiii] OpenAI Terms of Use, No.5(c) (updated March 14, 2023) (available at: https://openai.com/policies/terms-of-use, as of Oct. 27, 2023)

[xiv] OpenAI Terms of Use, (updated Nov. 14, 2023) (eff. Jan. 31, 2024) (available at: https://openai.com/policies/terms-of-use, as of March 30, 2024).

[xv] Anthropic Acceptable Use Policy (available at: https://www.anthropic.com/legal/archive/4903a61b-037c-4293-9996-88eb1908f0b2, as of March 30, 2024).

[xvi] Pierce and Goutos, supra, at pp.15-16.

[xvii] Rule 1.4: Communications

[xviii] California Practical Guidance for the Use of Generative Artificial Intelligence; Preliminary Guidelines on the Use of Artificial Intelligence by New Jersey Lawyers; Florida Advisory Opinion 24-1