UW Language Institute hosts discussion on need for ethical guidelines in AI research – The Badger Herald

The University of Wisconsin Language Institute hosted a brown bag discussion titled “AI and Questionable Research Practices: Ethical Considerations for Authors, Journal Editors and Reviewers,” given by Luke Plonsky, an applied linguistics professor at Northern Arizona University.

A main appeal of generative artificial intelligence models for academics is automating parts of the research process to improve efficiency, but this competes with a researcher’s desire to learn and become familiar with the data, Plonsky said.

“We need to think about the ethical issues first, then about the quality,” Plonsky said. “Are [academics] willing to reduce our efficiency for the sake of being more ethical researchers? That might be hard.”

Approximately 26% of linguistics researchers said AI use is never acceptable in academia, while the remaining researchers and most scholarly journals agreed there are practical applications for it, according to a 2023 study Plonsky referenced.


There has been an uptick in repeated words across journal abstracts, such as “delve” and “exhibited,” since the release of ChatGPT, indicating clear use of AI as an academic research tool and a subsequent homogenization of academic abstracts, according to a Center for Open Science study Plonsky referenced.

“Removing the human from the design of the research … I don’t think it’s healthy for knowledge generation,” Plonsky said.

Despite the explosion of and reliance on AI, a standardized framework for the ethics of AI use in academic research remains undeveloped, according to Plonsky.

Though some journals have started requiring authors to complete transparency forms around AI usage, these forms only account for AI authorship disclosure, not the research practices within the study that involved AI, Plonsky said.

“All our ability to assess quality in all aspects of research rests on the notion of transparency,” Plonsky said. “Transparency builds trust, increases reproducibility, enables secondary data reanalysis.”

The guidelines that exist for AI use in scholarly research vary by journal, with research paper authorship being one of the only consistent restrictions across publications, Plonsky said.

“The problem with this approach of deferring to journals and learned societies and publishers is that we may be misaligned with publishers,” Plonsky said. “[Journals’] goals of making money are not necessarily aligned with [academia’s] goals of advancing knowledge.”

Researchers and journals should assess next steps based on the questionable research practices (QRP) framework, Plonsky said.

The QRP framework acknowledges unavoidable biases that make it difficult to categorize research methods as purely ethical or unethical, according to Plonsky.

“Living in the gray space is a way to move away from categorical statements of right and wrong to determine whether and when certain AI uses might be acceptable, appropriate, effective, fitting or not,” Plonsky said.
