Examining Ethical Rules for Using Generative AI in Your Practice | Model Rule 3.3: Candor Toward the Tribunal
August 9, 2024
Steve Herman is special counsel in Fishman Haygood’s Litigation Section. He has served on the American Association for Justice (AAJ) Board of Governors since 2014 and currently serves as Chair of the AAJ’s AI Task Force. He also serves on the standing Louisiana State Bar Association (LSBA) Rules of Professional Conduct Committee and has given numerous presentations on the use of AI in the legal profession. In this biweekly series, Steve reviews ethical considerations for using generative AI in your law practice. Click here to read his prior analysis of ABA Model Rule 1.1: Competence.
At the risk of stating the obvious, we are still in the early days of what we believe to be an “AI Revolution” in the way that goods and services, including legal services, are and will be provided. This means that we do not, at this point, have much in the way of formal guidance.* The best we can do is identify potential issues that could be seized upon by our clients, Disciplinary Counsel, and/or the Courts as arguable violations of ethical and professional standards and rules.
With that preface, in this series we will examine some of the Professional Rules[i] and other legal requirements that could potentially be implicated by a law firm’s use (or non-use) of ChatGPT or other Generative AI (GAI). In our first post, we discussed Model Rule 1.1 and a lawyer’s duty to stay informed about the capabilities and limitations of GAI as a legal tool. Read on to learn more about navigating AI use in the Courts.
Model Rule 3.3: Candor Toward the Tribunal
Related to the general responsibility to understand and account for any limitations in the technology is the responsibility of candor to the court. Model Rule 3.3, in this regard, prohibits lawyers from knowingly:
- Mak[ing] a false statement of fact or law to a tribunal, or failing to correct a false statement of material fact or law previously made to a tribunal by the lawyer; and/or,
- Failing to disclose to the tribunal legal authority in the controlling jurisdiction known to the lawyer to be directly adverse to the position of the client and not disclosed by opposing counsel.[ii]
Rule 11 of the Federal Rules of Civil Procedure addresses representations to the court.[iii] The sanction of the lawyers in Mata v. Avianca, Inc., premised on Rule 11, was based largely on the law firm’s refusal to correct the record after the lawyers became aware of the fact that the citations provided by ChatGPT did not exist.[iv]
With respect to subsection (a)(2) of Model Rule 3.3, it has also been noted that a lawyer who relies too heavily on an AI service’s response to a particular prompt may not know whether adverse legal authority exists in the controlling jurisdiction. This is especially true when the prompt seeks only support for the client’s position.
ChatGPT and other GAI programs should supplement, not substitute for, a lawyer’s work. It is imperative that you rely on your own professional knowledge and judgment when producing work product. This technology is another tool in an attorney’s toolbox; if an attorney builds something faulty with it, the fault lies with the attorney, not the AI.
Next time, Herman discusses the supervision of associates and non-lawyer assistance.
*On July 29, 2024, the ABA issued formal guidance for the use of GAI. Like much of the previous guidance and commentary, the ABA focused on (i) Competence, (ii) Confidentiality, (iii) Communication with Clients regarding the Use of AI, (iv) Candor Toward the Tribunal, (v) Supervisory Responsibilities, and (vi) the Reasonableness of Fees. Read more here.
[i] ABA Model Rules of Professional Conduct
[ii] Rule 3.3: Candor Toward the Tribunal
[iii] Rule 11: Signing Pleadings, Motions, and Other Papers; Representations to the Court; Sanctions
[iv] Mata v. Avianca, supra, 2023 U.S. Dist. LEXIS 108263, at **2-3