
Report: 22% of Richmond law firm reviews may be AI-generated

In brief

  • Study: 34.4% of law firm reviews posted this year likely AI-generated
  • Experts warn of false advertising and ethics violations
  • FTC rule allows penalties of up to $51,744 per violation for fake reviews
  • Detection tools raise questions about reliability and use

A recent study has found an ongoing surge of AI-generated law firm reviews appearing online, raising the question of whether some professionals may be attempting to attract clients through false advertising in violation of consumer protection laws and rules of professional conduct.

Canadian tech company Originality.ai recently released a study on artificial intelligence-generated reviews of law offices in the United States. The findings from the late April study show that 34.4% of law office reviews created and posted so far this year are “likely” AI-written.

The study also found that AI-generated reviews rose 1,586% since ChatGPT’s 2022 launch.

The implications of the findings are profound, particularly given the expense of legal services, said Madeleine Lambert, Originality.ai’s director of marketing and sales.

“People rely on reviews to make informed decisions,” Lambert said. “If somebody is making a decision based on an array of amazing, perfect, 10-out-of-10 reviews that aren’t actually real, that are fabricated, is that actually informed decision-making?”

Lambert said 22% of reviews in the company’s dataset for Richmond were classified as AI-generated. That places the city roughly in the middle of the 49 major U.S. markets the company studied.

Boston ranked highest, with 58.3% of reviews classified as AI-generated, followed by Columbus, Ohio, at 50%. The company reported zero suspected AI-generated law office reviews in Montpelier, Vermont, and Juneau, Alaska.

“We believe the rise in AI-generated reviews is driven by the growing accessibility of generative AI tools, coupled with the temptation to fabricate or manipulate public perception,” Lambert said.

“As these tools become easier to use, both individuals and businesses can mass-produce reviews that don’t necessarily reflect authentic customer experiences,” Lambert continued. “While we’ve observed this trend in industries like e-commerce, its emergence in sectors where trust and ethics are even more societally important is especially concerning.”

Beth Burgin Waller, chair of the cybersecurity and data privacy practice at Woods Rogers Vandeventer Black in Roanoke, said the ease and speed with which generative AI can create content has fueled the proliferation of fraudulent online reviews.

“While GenAI fake reviews may not have significantly impacted Virginia’s legal community, it is likely only a matter of time before the impact is felt on law firms in the commonwealth,” Waller said.

Waller, a member of the Virginia Bar Association’s Task Force on Artificial Intelligence, noted that like many cyber or internet crimes, identifying AI-review offenders and holding them accountable can be difficult.

“In order to combat these issues, law firms should be monitoring their online presence and using tools to check for GenAI content,” Waller said. “From there, you can flag the inappropriate content to the browser provider [such as Google] and follow their procedures to have the information removed.”

And with AI-generated reviews pervasive across all industries, it’s no surprise the issue would affect lawyers, according to Cullen Seltzer of Sands Anderson in Richmond.

Seltzer, whose practice areas include business torts, said he sees several reasons a lawyer might have AI-generated reviews.

“First, a disgruntled client or party might be seeking to actively damage the reputation of a lawyer,” he said. “So, as part of a campaign to injure the lawyer’s reputation or business, they enlisted a chatbot or other AI tech to gin up astroturfed, fake reviews.”

Second, an online review service may create AI reviews to make its listings look more reliable through sheer volume, a tactic that would simultaneously boost the service’s own business, according to Seltzer.

“Third, an attorney in competition with the lawyer being reviewed might use AI-generated reviews to cast negative aspersions against his competitor or adversary. In the same vein, an attorney might use AI to generate positive reviews of his own work to try and pump up his review scores,” Seltzer said.

Suspect client reviews

According to the authors of the Originality.ai study, the results were obtained by filtering 4,706 law office reviews, each with 100 words or more, through the company’s AI-detection tool. Lambert said the company’s artificial intelligence software has been trained on millions of pieces of verified human-written content.

“It has also been fed millions of pieces of verified AI-generated content and has been trained to pick up the differences between the two [sets of data],” Lambert said. The reviews were collected using a search tool, Rapid API, which “bulk scraped” posts from Google business pages, where most reviews appear.
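In rough terms, the screening the study describes amounts to a simple filter-and-score loop: keep only reviews long enough to analyze, score each with a detector, and tally the share flagged. The Python sketch below illustrates that idea only; detect_ai_probability is a hypothetical placeholder, not Originality.ai’s actual software or API.

```python
# Minimal sketch of the screening pipeline the study describes:
# keep reviews of 100 or more words, score each with an AI detector,
# and tally the share classified as "likely" AI-generated.
# detect_ai_probability() is a placeholder, not Originality.ai's API.

def detect_ai_probability(text: str) -> float:
    """Stand-in for a trained detector returning P(text is AI-written)."""
    raise NotImplementedError("swap in a real detection service here")

def share_likely_ai(reviews: list[str], threshold: float = 0.5) -> float:
    """Fraction of 100+ word reviews whose detector score crosses threshold."""
    # Mirror the study's filter: only reviews of 100 words or more.
    eligible = [r for r in reviews if len(r.split()) >= 100]
    if not eligible:
        return 0.0
    flagged = sum(1 for r in eligible if detect_ai_probability(r) >= threshold)
    return flagged / len(eligible)
```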

Lambert acknowledged that the AI-detection tool has certain limitations.

“Originality.ai provides highly accurate AI content detection that indicates the likelihood a piece of text was generated by AI,” she said. “However, like any detection tool, it provides a probabilistic assessment, not absolute proof.” In addition, Lambert said no AI detection tools currently on the market can “attribute authorship or intent.”

According to the study’s authors, while AI detection tools such as Originality.ai have sophisticated capabilities to analyze content for “tells” indicative of whether a post is the product of generative AI, there are also certain tells that can be picked up by the average reader.

“Every tool that’s ever been used has been misused. … In connection with lawyer advertising, online or otherwise, the touchstone will be truthfulness.”

— Cullen Seltzer, Richmond

For example, leftover boilerplate from an AI tool, such as “Here is the revised review,” may appear out of place in a post and is often a giveaway that the review is AI-generated. The authors also highlight that tools like ChatGPT sometimes wrap certain words in symbols such as asterisks, so another tip-off can be the appearance of symbols in text that would seem out of place in normal writing.

Finally, the authors say suspicions should be raised when text has “[o]ver-polished and general language, lacking personal or case-specific detail.”
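For illustration only, here is a toy Python sketch of how those human-visible tells might be checked mechanically. The phrase list and the asterisk pattern are assumptions chosen for the example, not the study’s methodology; real detectors rely on statistical models rather than string matching.

```python
import re

# Toy heuristics mirroring the "tells" described above. These string
# checks only capture the giveaways a human reader might spot; the
# phrase list is an illustrative assumption, not the study's method.
BOILERPLATE_PHRASES = ("here is the revised review", "here's a revised version")

def obvious_tells(review: str) -> list[str]:
    """Return human-readable reasons a review looks machine-written."""
    reasons = []
    lowered = review.lower()
    if any(phrase in lowered for phrase in BOILERPLATE_PHRASES):
        reasons.append("leftover chatbot boilerplate")
    if re.search(r"\*\*?[^*\n]+\*\*?", review):
        reasons.append("markdown-style asterisks around words")
    return reasons

print(obvious_tells("Here is the revised review: **Outstanding** counsel!"))
# ['leftover chatbot boilerplate', 'markdown-style asterisks around words']
```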

And while it’s possible an AI-generated review is honest and the technology was used only to edit or polish a written review, “I’d be skeptical of that hypothesis, because lawyers tend to pride themselves on their written communication, and the use of AI to disguise the author’s identity or to generate numerous reviews that relate to a single issue also implicates questions of the reviewer’s honesty,” Seltzer said.

Ethical considerations

However the issue may arise, Seltzer said fake reviews are a concern for attorneys.

“Regardless of whether a lawyer generated fake AI reviews to falsely promote her own work or falsely smear another lawyer, the dishonesty inherent in a fake AI review would run afoul of Virginia’s ethics rules,” Seltzer said.

Josh Fairfield, director of AI Legal Innovation Strategy and a professor at Washington and Lee University School of Law, said a law office that posts or procures AI-generated fake reviews would be violating Virginia Rule of Professional Conduct 7.1. The rule prohibits lawyers from making misleading or false communications about their services.

“Reviews of all kinds are likely to stop being useful, as AI swamps real human descriptions of experiences with goods or services.”

— Josh Fairfield, Washington and Lee University

Fairfield also studies human-centered AI in the law. He highlighted a few considerations — and issues — regarding Originality.ai’s report.

First, an AI detection company that sells an AI detection product produced the study, “so skepticism as to the claimed results is appropriate,” Fairfield said. “AI can’t reliably detect AI, because AI detectors rely on detecting set patterns in prior AI outputs.”

Second, Fairfield said, “even if an AI detector were invented that reliably detects AI use, it can’t tell you whether the AI was used by a client using AI to generate the review, or the lawyer’s office.”

Additionally, “a lawyer’s office isn’t responsible for client reviews legitimately generated by clients, and reviews are exactly the kind of throw-away bit of text that AI would be commonly used for.”

Fairfield also noted that news reports abound of how AI detectors can get it wrong. Still, he said, AI-generated reviews are a legitimate and growing problem.

“Reviews of all kinds are likely to stop being useful, as AI swamps real human descriptions of experiences with goods or services. AI detectors and accusing law firms of using AI to generate reviews aren’t going to help, though. The detectors are often wrong, and even if they’re right, don’t prove that the lawyer’s office is responsible.”

Seltzer echoed that sentiment.

“Every tool that’s ever been used has been misused,” Seltzer said. “We oughtn’t be surprised that the same fate has befallen generative artificial intelligence. In connection with lawyer advertising, online or otherwise, the touchstone will be truthfulness.”

The federal government is also responding to the issue. The Federal Trade Commission in 2024 announced a final rule prohibiting the sale or purchase of fake consumer reviews or testimonials. The rule, which went into effect last fall, further prohibits certain insiders in a business from creating consumer reviews or testimonials without clear disclosure. Waller noted the rule authorizes the FTC to impose penalties of up to $51,744 per violation while also allowing consumer remedies.

Lambert said it’s hard for investigators to make a case over suspect law firm reviews.

However, she pointed out that investigators and litigants have tools at their disposal, including discovery procedures to reveal the identity of account holders, IP logs, timestamps, and other information in the control of law firm administrators, that can answer key questions in establishing culpability for ethics violations or liability for false advertising.

“We do recognize that this opens up complex ethical and legal questions, particularly around manipulation of consumer perception in regulated industries like law and health care,” she said. “That’s exactly why we’re studying the prevalence and impact of AI-generated reviews in such sectors.”

Not naming names

Lambert said her company has firm-specific data on AI-generated reviews. But because its AI detector produces a small percentage of false positives, the company is reluctant to “call out” specific firms.
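The scale involved makes that caution easy to quantify. As a back-of-envelope illustration (the error rate here is an assumed figure chosen for the example, not one reported in the study):

```python
# Back-of-envelope: even a small false-positive rate mislabels many
# genuine reviews at the study's scale. The 1% rate is assumed for
# illustration, not taken from the study.
total_reviews = 4_706      # reviews analyzed in the study
assumed_fp_rate = 0.01     # hypothetical false-positive rate
print(round(total_reviews * assumed_fp_rate))  # ~47 human reviews wrongly flagged
```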

“I do have a table that names the ‘worst offenders,’” Lambert said. “But these are law firms. We don’t want to bark up that tree.” On the other hand, she said her company would be amenable to contracting with regulators for the use of Originality.ai in their investigations.

The demand for AI detection has already exploded in the private sector, she said.

“We’re seeing that a lot,” she said. “We’re seeing Google leverage AI-detection software in a number of instances, including verifying that posts on advertising platforms are human-written content so that they’re not throwing advertising dollars at pages that are just AI-generated.”
