KINGSTON, R.I. – May 28, 2024 – If you’ve heard the term deepfake, you may have some notion of how artificial intelligence can be used in communications. Earlier this year, a robocall purporting to be from President Joe Biden discouraging voters from participating in the New Hampshire primary prompted a swift investigation by election officials as well as action by the Federal Communications Commission. More recently, TikTok announced its intention to label AI-generated content as the technology becomes more widespread.
But how are professional communicators and public relations firms utilizing generative AI? And what are the ethical considerations for doing so? That’s what a new study from researchers at the University of Rhode Island, along with industry partner and independent public relations agency MikeWorldWide, aims to discover. The study, made possible through a $10,000 grant from The Arthur W. Page Center for Integrity in Public Communication at Penn State, will explore the drivers and barriers to the adoption of AI technology as well as perceptions of its ethical use.
“There is a lot of emerging technology available in the space of generative AI,” says Ammina Kothari, director of URI’s Harrington School of Communication and Media. “But when we talk about generative AI in the space of public relations, the types that apply very directly are things like image generation and cloning of voices.”
From a business perspective, she says, it’s much more cost-effective to use these technologies than to create a campaign that involves purchasing images, hiring actors or photographers, and paying for a shoot or recording session. However, ethical concerns remain.
“Use of AI technology in creative work is a growing trend and one I don’t think any company can completely avoid these days,” says Joon Kim, URI assistant professor of public relations and communication studies. “But an important question is what firms disclose to their clients and how they communicate about its use.”
Kothari and Kim agree that it is important for clients to understand what they are paying for. There are also concerns about how an artist’s work may be co-opted by generative AI platforms. The two also note the similarity among available tools and in how those tools collect data to generate output.
“By their nature, PR and ad campaigns should be unique,” says Kothari. “So, if you are replacing what is – essentially – a very human contribution, and companies are using the same or similar tools, how do you differentiate?”
“AI will drive an array of evolutions for the public relations industry, from reporting and gathering insights to curating content,” says Bret Werner, president of MikeWorldWide and a URI alumnus. “However, what makes a successful PR strategy is its inherent understanding of connecting with audiences, which cannot be lost. By leveraging our own client partners and employees, we’ll unearth the industry’s view, acceptance, and concerns around this technology wave we find ourselves on.”
Research will commence this summer with a field study that will include in-depth interviews and an examination of industry use of generative AI tools, along with the challenges and ethical issues surrounding their use. That work will help form the basis of a broader survey of public relations practitioners. Additionally, the team will develop and field a second survey of public relations clients and potential clients to ascertain their understanding of generative AI and their perceptions of its use for strategic communication.
Study findings will provide a fuller picture of AI use in the public relations industry and of client perceptions, and uncover any disconnect between the two. Additionally, what the team learns about the challenges and ethical issues surrounding generative AI will give practitioners insights that help pave the way for more effective, ethical, and transparent communication.