Generative artificial intelligence promises to make legal work faster and more efficient, but it also poses a quandary for law firms: Should they tell clients they’re using the technology?
Cleary Gottlieb Steen & Hamilton hasn’t reached a definitive conclusion on disclosure and will follow clients’ wishes, managing partner Michael Gerstenzang said. But “there’s no circumstance in which I could imagine using it on a not fully-disclosed basis,” he added.
The disclosure question is starting to come up in law firm conversations with clients. It raises further questions for in-house and outside counsel—including whether some uses of AI must be disclosed while others need not be, and whether an engagement letter is the best place for a firm to make such disclosures.
“I’m a little leery of saying whenever I’m using a particular research tool, I have to talk to my client about what I’m using,” said Ron Hedges, a former US magistrate judge and member of the New York State Bar Association AI task force, and principal of Ronald J. Hedges LLC. “It’s more, to me, what data am I feeding into the research tool, does my client know I’m using it, and is my client aware that the training set might be used by others?”
State bar associations are weighing in. Earlier this month, the California Bar adopted guidance advising lawyers to consider disclosing AI’s use. The Florida Bar in a draft ethics opinion recommended lawyers get clients’ informed consent before using generative AI “if the utilization would involve the disclosure of any confidential information.”
A Massachusetts Institute of Technology task force this year suggested that “the terms of the client engagement include the use of technology and specifically address the responsible use” of generative AI. The group is seeking feedback on whether consent, not just disclosure, is needed.
‘Prudent’ Disclosure
Lawyers’ existing duties of confidentiality and competence sufficiently cover obligations while using AI tools, said Katherine Forrest, a partner at Paul Weiss, a former US District Court judge, and the author of two books about AI and the law.
But disclosing AI use isn’t “necessarily a bad thing, and it may in fact be prudent during this interim phase, while we’re all getting used to this new transformed world with new tools,” Forrest said.
Lawyers shouldn’t upload confidential information, such as the names of children involved in a family law case, she said. Generative AI doesn’t have “absolute certainty as to confidentiality,” though the technology is improving, she said.
Confidentiality is among the biggest concerns for lawyers using generative AI. If a firm puts a client’s data into an open, public system such as ChatGPT—in contrast to proprietary systems where the data is walled off—the AI system could reproduce the sensitive information in other contexts.
“I would have expected—and it’s certainly one of our principles—that lawyers would be disclosing generative AI use cases to their clients,” said Sabastian Niles, president and chief legal officer at Salesforce Inc. “It’s empowering. It enables the client to have that role in deciding whether or not they want” their firms using the technology in particular instances.
‘Ghost Around Every Corner’
Generative AI is increasingly baked into the technology lawyers touch every day, including Microsoft Corp.’s Copilot assistant, which is available in Word, and many common legal research tools.
Some members of the MIT group asked, “at the end of the day, we’re all going to be using those tools—so what, really, here do we need to disclose?” recalled Megan Ma, a member of the MIT task force, which drafted a set of principles for responsible use of AI in law.
“You’re opening up and using those tools no matter what,” said Ma, who is a fellow and the assistant director of the Stanford Program in Law, Science, and Technology and the Stanford Center for Legal Informatics.
The disclosure question will become less relevant as AI becomes more ubiquitous, said Jeffrey Saviano, global tax innovation leader at EY and also a member of the MIT task force.
“It’s getting harder and harder to define technology as either an AI tool or a non-AI tool,” Saviano said. “We’re getting pretty close to the point that everything will be at least somewhat influenced by AI.”
If there was a requirement to disclose all uses of AI, “you’re going to have a ghost around every corner,” he added.
Where to Disclose?
AI disclosure will most likely appear in engagement letters, but firms are watching for clients to take the lead.
Gerstenzang, the Cleary managing partner, said he expects client expectations on disclosure to dictate conversations about AI’s use and whether disclosure appears in engagement letters. The firm is already talking to clients about AI, he said.
AI for years has shown up in requests for proposals from clients, where in-house counsel ask firms how they’re using the technology to be more efficient, said Cat Casey, chief growth officer at the AI e-discovery company Reveal.
Disclosure “across the board is necessary,” said Casey, a member of the New York State Bar task force on AI, which this year developed bar policies and recommended legislation on AI. In-house lawyers should spell out AI policies in their outside counsel guidelines, she said.
Use of generative AI isn’t currently part of Paul Weiss’ standard engagement letter, Forrest said. The firm has clients who have asked it not to use generative AI without communicating with them, she said.
“Ultimately, you follow your client’s instructions,” she said.