Ethical Rules for Using Generative AI in Your Practice | Model Rule 1.5 and “Black Box” Concerns

Steve Herman is special counsel in Fishman Haygood’s Litigation Section. He has served on the American Association for Justice (AAJ) Board of Governors since 2014 and currently serves as Chair of the AAJ’s AI Task Force. He also serves on the standing Louisiana State Bar Association (LSBA) Rules of Professional Conduct Committee and has given numerous presentations on the use of AI in the legal profession. In this biweekly series, he identifies ethical rules for generative AI usage in law practice. Read his previous analysis of potential copyright (and patent) issues here.


At the risk of stating the obvious, we are still in the early days of what we believe to be an “AI Revolution” in the way that goods and services, including legal services, are and will be provided, which means that we do not, at this point, have much in the way of formal guidance.*

With that preface, in this series we will examine some of the Professional Rules[i] and other legal requirements that could potentially be implicated by a law firm’s use (or non-use) of ChatGPT or other Generative AI (GAI). Last time, we discussed how GAI use operates within current copyright and patent laws. Read on to learn how GAI creates questions around legal fees and why the “black box” problem of deep learning models may pose a threat to your practice.

Model Rule 1.5 and “Black Box” Concerns

Model Rule 1.5

ABA Model Rule of Professional Conduct 1.5 addresses fees. Under the Rule, a lawyer’s fee must be reasonable, and the first factor to be considered in determining reasonableness is the “time and labor required, the novelty and difficulty of the questions involved, and the skill requisite to perform the legal service properly.”

What fee is “reasonable” considering the time and skill either saved by using, or wasted by not using, available AI technology?
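To make the tension concrete, here is a back-of-the-envelope comparison in Python. The hourly rate and the hours are hypothetical figures invented purely for illustration; nothing here comes from the Rule itself.

```python
# Hypothetical numbers, for illustration only: the pricing tension that
# Model Rule 1.5 creates once GAI compresses the time a task requires.
hourly_rate = 400        # assumed customary rate, in USD/hour
hours_without_ai = 10.0  # assumed time to draft a research memo manually
hours_with_ai = 1.0      # assumed time for the same memo with GAI assistance

fee_without_ai = hourly_rate * hours_without_ai
fee_with_ai = hourly_rate * hours_with_ai

print(f"Fee billed without AI: ${fee_without_ai:,.0f}")  # $4,000
print(f"Fee billed with AI:    ${fee_with_ai:,.0f}")     # $400
```

If the finished product is identical either way, the Rule’s “time and labor required” factor arguably points to the smaller number, even though the value delivered to the client is the same.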

“Black Box” Concerns

Most modern forms of artificial intelligence rely on what is called deep learning. A deep learning system is trained by feeding it labeled examples of whatever you want it to recognize; eventually, it can categorize inputs it has never seen before, because it has learned to generalize from patterns in the training data. While these artificial neural networks are powerful, there is typically no way to trace which inputs influenced a given decision. This is known as the “black box” problem.
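For readers who want a concrete picture, the minimal sketch below (written in Python with the scikit-learn library; the toy dataset and the model’s parameters are arbitrary choices for illustration, not anything drawn from a real legal tool) shows the problem in miniature: a trained network confidently classifies new inputs, yet its “reasoning” consists of nothing but thousands of learned numeric weights.

```python
# A minimal, illustrative sketch of the "black box" problem: a small neural
# network learns to classify examples, but its decision logic lives in
# thousands of numeric weights with no human-readable rationale.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Toy dataset: 200 "documents" reduced to 20 numeric features each,
# with a binary label (e.g., relevant / not relevant).
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

# Train a small deep-learning-style model on the labeled examples.
model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000,
                      random_state=0)
model.fit(X, y)

# The trained model now categorizes inputs...
print(model.predict(X[:5]))

# ...but its "reasoning" is just layers of learned weights. Nothing records
# which training inputs drove any particular prediction.
total_weights = sum(w.size for w in model.coefs_)
print(f"Decision logic encoded in {total_weights} opaque parameters")
```

Run on any machine with scikit-learn installed, the script prints a handful of predictions followed by a parameter count in the thousands; no part of the output explains which training examples influenced any given result.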

Could the information submitted to an AI service, or the “training” of the AI service, directly or indirectly benefit a litigant or other party whose interests are adverse to the client for whom the AI service is procured? (Or another former or existing client of the firm?)[ii]


Next time, Herman reviews Model Rules 1.7 and 1.8.

*On July 29, 2024, the ABA issued formal guidance for the use of GAI. Like much of the previous guidance and commentary, the ABA focused on (i) Competence, (ii) Confidentiality, (iii) Communication with Clients regarding the Use of AI, (iv) Candor Toward the Tribunal, (v) Supervisory Responsibilities, and (vi) the Reasonableness of Fees. Read more here.

 

[i] ABA Model Rules of Professional Conduct

[ii] See generally ABA Model Rules of Professional Conduct 1.6–1.9.