Examining Ethical Rules for Using Generative AI in Your Practice | Model Rule 1.1: Competence

Steve Herman is special counsel in Fishman Haygood’s Litigation Section. He has served on the American Association for Justice (AAJ) Board of Governors since 2014 and currently serves as Chair of the AAJ’s AI Task Force. He also serves on the standing Louisiana State Bar Association (LSBA) Rules of Professional Conduct Committee and has given numerous presentations on the use of AI in the legal profession. In this biweekly series, Steve reviews ethical considerations for using generative AI in your law practice.


At the risk of stating the obvious, we are still in the early days of what we believe to be an “AI Revolution” in the way that goods and services, including legal services, are and will be provided. That means that we do not, at this point, have much in the way of formal guidance. The best we can do is identify potential issues that could be seized upon by our clients, Disciplinary Counsel, and/or the Courts as arguable violations of ethical and professional standards and rules. At the same time, we must recognize that because these services continue to develop very rapidly, a technological advance (or, perhaps, a change in a provider’s Terms of Use or Privacy Policy) could, in a short period of time, either obviate or further complicate some of these potential issues and concerns.

With those caveats, some of the principal questions that have been raised in terms of an attorney’s use of (and, potentially, failure to use) services like ChatGPT and other Generative AI (GAI) technologies have generally fallen into two broad categories: (1) maintaining a general competence in understanding the risks and the benefits of the technology, and ensuring that the ultimate work product is reliable and consistent with an acceptable legal, ethical and professional standard of care; and (2) ensuring that attorney-client privileged and other legally protected information remains confidential and secure.

Is your firm using (or not using) ChatGPT or other Generative AI? In this series, we will examine various Professional Rules[i] and other legal requirements that could potentially be implicated.

Model Rule 1.1: Competence

Model Rule 1.1 of the ABA Model Rules of Professional Conduct requires general competence in the representation of a client.[ii] Official Comment [8] to the Rule advises that “a lawyer should keep abreast of changes in the law and its practice, including a reasonable understanding of the benefits and risks associated with relevant technology the lawyer uses to provide services… or transmit information.”[iii] As official statements on AI integration are released, the recurring message is that “the core ethical responsibilities of lawyers are unchanged”[iv] and that careful engagement with this disruptive technology is advised to avoid ethical violations.

While most of the focus has centered on the responsibility to understand and account for the limitations of ChatGPT and other similar services, some have suggested that the Rule also implies an affirmative duty to use appropriate AI technologies where the benefits outweigh the risks, for example in terms of cost savings to the client or even the quality of the work product.

With respect to the risks, many have focused on what are sometimes called “hallucinations” – i.e., responses to prompts that, while bearing all the objective signs of reliability, are factually inaccurate. Lawyers must be aware that “GAI products are not search engines that accurately report hits on existing data in a constantly updating database.”[v] GAI tools are trained on datasets and are thereby limited by the information within them, information that may be out of date, biased, or incomplete. Additionally, GAI is not programmed to provide accurate reports of the information it has; rather, it is trained to create new content. “In the case of a request for something in writing, GAI uses a statistical process to predict what the next word in the sentence should be. That is what the ‘generative’ in GAI means: the GAI generates something new that has the properties its dataset tells it the user is expecting to see.”[vi]
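For readers curious what that “statistical process” looks like in practice, below is a deliberately oversimplified, hypothetical sketch in Python, using an invented three-sentence “training” corpus. Real GAI systems are vastly more sophisticated, but the underlying point is the same: the output is a statistically plausible continuation of the prompt, not a record retrieved from a verified database.

```python
import random
from collections import defaultdict, Counter

# Toy "training" dataset (invented for illustration). The model can only ever
# reflect what is in here, including anything outdated, biased, or incomplete.
corpus = (
    "the court granted the motion to dismiss "
    "the court denied the motion for summary judgment "
    "the court granted the motion for sanctions"
).split()

# Count which word tends to follow which (a simple bigram table).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def generate(start_word, length=8):
    """Generate text by repeatedly predicting a statistically likely next word."""
    words = [start_word]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        # Pick the next word in proportion to how often it followed the last one.
        choices, weights = zip(*candidates.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
# Possible output: "the court granted the motion for summary judgment"
# The sentence reads as fluent and plausible, but it is generated, not
# retrieved; nothing checks it against an actual docket or reporter.
```

Because nothing in that process checks the generated text against an actual docket, reporter, or research database, a fluent and confident-sounding passage, including a case name or quotation, can be entirely invented.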

For example, in the highly publicized case Mata v. Avianca, Inc., a law firm was sanctioned when the lawyers “abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and citations”[vii] created by ChatGPT. Notably, the lawyer in that case specifically asked ChatGPT whether the cases it had provided were real or fake, and ChatGPT replied that it had supplied “real” decisions that could be found through Westlaw, LexisNexis, and the Federal Reporter.[viii]

One oft-quoted authority in this area is David Curle, Director of the Technology and Innovation Platform at Thomson Reuters, who advises that: “If lawyers are using tools that might suggest answers to legal questions, they need to understand the capabilities and limitations of the tools, and they must consider the risks and benefits of those answers in the context of the specific case they are working on.”[ix]

Some have also pointed to Official Comment [5] to Model Rule 1.1 and suggested that over-reliance on an AI tool for legal research and analysis may violate the professional duty of “inquiry into and analysis of the factual and legal elements of the problem.”[x]

A related over-reliance concern is that an AI product or service may provide only a neutral or objective treatment of the law, whereas Official Comment [1] to Model Rule 1.3 states that lawyers must “act with commitment and dedication to the interests of the client and with zeal in advocacy upon the client’s behalf.”[xi]

While GAI has the potential to be a powerful tool in the legal field, it can also pose threats to the core ethical duties of a lawyer, which remain unchanged. Because we are in the early days of its integration into the legal space, use this time to keep abreast of the technology and to maintain a working understanding of its risks and benefits.


Next time, Herman examines the importance of Model Rule 3.3: Candor to the Court.

 

[i] ABA Model Rules of Professional Conduct

[ii] Rule 1.1: Competence

[iii] Official Comment [8]: Maintaining Competence

[iv] Preliminary Guidelines on the Use of Artificial Intelligence by New Jersey Lawyers

[v] DC Bar Ethics Opinion No. 388 (April 2024)

[vi] Id.

[vii] Mata v. Avianca, Inc., No. 22-cv-1461 (PKC), 2023 WL 4114965 (S.D.N.Y. June 22, 2023)

[viii] Id. at ¶ 45

[ix] See, e.g., David Lat, “The Ethical Implications of Artificial Intelligence,” Above the Law: Law2020 (available at https://abovethelaw.com/law2020/the-ethical-implications-of-artificial-intelligence/, as of Oct. 27, 2023).

[x] Official Comment [5]: Thoroughness and Preparation

[xi] Official Comment [1]: Client-Lawyer Relationship