Generative AI and legal ethics

The number of lawyers sanctioned for citing fake cases or quotes created by Generative Artificial Intelligence tools continues to grow.

Earlier this summer, U.S. District Judge Thomas Cullen ordered counsel to show cause as to why she should not be sanctioned under Fed. R. Civ. P. 11, and he also referred her to the state bar for disciplinary proceedings, because she cited multiple fake cases and used fake quotations in a filing. See Iovino v. Michael Stapleton Associates, Ltd., 2024 U.S. Dist. LEXIS 130819 (W.D. Va. July 24, 2024).

In his scathing opinion, Cullen joined judges from New York, Massachusetts and North Carolina, among others, in concluding that improper use of AI-generated authorities may give rise to sanctions and disciplinary charges.

In Iovino, Cullen issued his order after he could not verify several cases and quotes submitted by plaintiff’s counsel. He held that attorneys who fail to ensure that filings are accurate or those who submit filings with fabricated case law or quotations should face scrutiny.

Cullen was particularly troubled by counsel’s conduct after the fake authorities came to light. He directed counsel to provide supplemental authority and asked her to explain why the prior briefing contained fake citations. Counsel provided supplemental authorities, but she did not explain “where her seemingly manufactured citations and quotations came from and who [was] primarily to blame for this gross error.”

To Cullen, “[T]his silence is deafening.” (Aside: if you read my columns regularly, you’ll know I would have advised the lawyer to answer the judge’s questions directly).

It is obvious that a lawyer should not cite fake cases or use fake quotes in a brief. It is likewise obvious that GAI in the legal profession is here to stay. But what is not obvious is how GAI will impact the legal profession. Changes come fast.

As a result, on July 29, 2024, the American Bar Association Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 512 on Generative Artificial Intelligence Tools. The ABA Standing Committee issued the opinion primarily because GAI tools are a “rapidly moving target” that can create significant ethical issues. The committee believed it necessary to offer “general guidance for lawyers attempting to navigate this emerging landscape.”

The committee’s general guidance is helpful, but the general nature of Opinion 512 underscores part of my main concern — GAI has a wide-ranging impact on how lawyers practice that will increase over time. Unsurprisingly, at present, GAI implicates at least eight ethical rules, ranging from competence (Md. Rule 19-301.1), to communication (Md. Rule 19-301.4), to fees (Md. Rule 19-301.5), to confidentiality (Md. Rule 19-301.6), to supervisory obligations (Md. Rules 19-305.1 and 19-305.3), to the duties of a lawyer before a tribunal to be candid and to pursue meritorious claims and defenses (Md. Rules 19-303.1 and 19-303.3).

As a technological feature of practice, lawyers cannot simply ignore GAI. The duty of competence under Md. Rule 19-301.1 includes technological competence, and GAI is just another step forward. It is here to stay. We must embrace it, but use it smartly.

Let it be an adjunct to your practice rather than having ChatGPT write your brief. Ensure that your staff understands that GAI can be helpful, but that the work product must be checked for accuracy.

After considering the ethical implications and putting the right processes in place, implement GAI and use it to your clients’ advantage.

Craig Brodsky is a partner with Goodell, DeVries, Leech & Dann LLP in Baltimore. For over 25 years, Brodsky has represented attorneys in disciplinary cases and legal malpractice cases, and he has served as ethics counsel to numerous clients. His column appears monthly. He can be reached at [email protected].
