Ethical Rules for Using Generative AI in Your Practice | Model Rule 8.4(g): Misconduct

Steve Herman is special counsel in Fishman Haygood’s Litigation Section. He has served on the American Association for Justice (AAJ) Board of Governors since 2014 and currently serves as Chair of the AAJ’s AI Task Force. He also serves on the standing Louisiana State Bar Association (LSBA) Rules of Professional Conduct Committee and has given numerous presentations on the use of AI in the legal profession. In this biweekly series, he identifies ethical rules for generative AI usage in law practice. Read his previous analysis of Model Rule 5.5 here.


At the risk of stating the obvious, we are still in the early days of what we believe to be an “AI Revolution” in the way that goods and services, including legal services, are and will be provided. As a result, we do not, at this point, have much in the way of formal guidance.*

With that preface, this series has examined some of the Professional Rules[i] and other legal requirements that could potentially be implicated by a law firm’s use (or non-use) of ChatGPT or other Generative AI (GAI). Last time, we reviewed the Model Rule related to the unauthorized practice of law. For our final installment, we leave readers with a question: whether the bias inherent in some of these tools opens the door to professional misconduct.

Model Rule 8.4(g)

Model Rule of Professional Conduct 8.4(g) provides that it is professional misconduct for a lawyer to engage in conduct that the lawyer knows or reasonably should know is harassment or discrimination. So, the question has been raised:

Given the bias that exists in some of these products and services, might the use of such AI technology result in potential “discrimination on the basis of race, sex, religion, national origin, ethnicity, disability, age, sexual orientation, gender identity, marital status or socioeconomic status in conduct related to the practice of law”?[ii]

Final Thoughts

As the use of GAI in the legal field grows, it will continue to change the practice of law. While this change has the potential to be positive, the advancement of AI technology raises significant ethical concerns. At present, formal guidelines specific to AI are sparse, leaving us to anticipate the issues that clients, Disciplinary Counsel, and the Courts might view as breaches of ethical and professional standards. Rapid advances in ChatGPT and other Generative AI services can quickly alter or complicate these concerns. Therefore, as we explore the implications for law firms under the Professional Rules and other legal requirements, it is crucial to remain competent in understanding these technologies’ risks and benefits, to ensure the reliability and ethical integrity of legal work, and to safeguard client confidentiality and data security.


This is the final installment of Herman’s series, “Ethical Rules for Using Generative AI in Your Practice.” You can find all previous rules on Fishman Haygood’s News and Resources page.

*On July 29, 2024, the ABA issued Formal Opinion 512, providing formal guidance on the use of GAI. Like much of the previous guidance and commentary, the ABA focused on (i) Competence, (ii) Confidentiality, (iii) Communication with Clients regarding the Use of AI, (iv) Candor Toward the Tribunal, (v) Supervisory Responsibilities, and (vi) the Reasonableness of Fees. Read more here.


[i] ABA Model Rules of Professional Conduct

[ii] ABA Model Rule of Professional Conduct 8.4: Misconduct