
AI-assisted cheating could impact universities’ global standings: QS

A series of artificial intelligence (AI)-related cheating scandals at Korean universities could carry long-term risks for their global rankings by weighing on their reputation scores.

While Korea’s top universities face growing pressure to adapt to AI, most institutions have yet to translate that urgency into concrete action.

QS, the global higher education analytics firm that publishes widely cited university rankings, said AI-related academic misconduct controversies could affect how universities are ranked.

In response to a query from The Korea Times, QS said such incidents are not assessed directly, but could be reflected indirectly in academic and employer reputation scores — indicators that carry significant weight in their global rankings.

“History shows that sustained reputational damage from governance failures to academic misconduct can, over time, shape how institutions are viewed by global academic and employer communities,” said Simona Bizzozero, QS communications director.

She added that the firm’s reputation surveys, which are perception-based, large-scale and conducted over time, can reflect broader shifts in confidence or concern toward an institution.

QS said universities’ capacity to manage AI responsibly is becoming an increasingly important consideration in higher education assessments.

“The rapid spread of generative AI has driven deeper engagement with universities, policymakers and employers on issues ranging from assessment design to academic integrity and governance,” Bizzozero said.

While QS has no immediate plans to add AI governance or academic integrity as standalone indicators in its global rankings, the firm said both issues are central to its ongoing research and sector engagement.

As part of that work, QS has developed an open-source AI Capability Framework and a related assessment tool to help institutions evaluate their readiness to deploy AI responsibly and ethically across governance, teaching and research.

A sign at the gate of Yonsei University’s main campus in Seoul, Dec. 1, 2024 / Newsis


Despite mounting criticism after a series of AI-related academic misconduct cases, Korean universities have been slow to respond, with measures largely limited to post-incident follow-up.

Yonsei University, for example, has had AI ethics guidelines in place, yet has faced a series of technology-assisted misconduct cases since last year. It has yet to spell out concrete follow-up measures or broader systemic changes.

Local media reported on Monday a group cheating case involving the manipulation of clinical training photographs by students at Yonsei University’s College of Dentistry.

A total of 34 of the department's 59 students — roughly 60 percent — submitted altered images as part of a practical training course, even as they treated patients directly under faculty supervision at the university-affiliated dental hospital.

In November last year, about 194 of roughly 600 students enrolled in a fully online course on natural language processing and ChatGPT were found to have used AI to cheat on a midterm exam.

While the university’s most recent AI guidelines advise faculty members to state their policies on the use of AI tools in course syllabi, the university acknowledged that the measure is not mandatory and cannot be enforced.

“The guidelines function more as recommended practices than enforceable rules,” an official at the university said.

They added that the university plans to update its AI guidelines, aiming to release revisions before the upcoming semester begins, but said they were uncertain whether the update would be finalized as planned.

Asked for a response to QS’s statement that AI-related academic misconduct could carry reputational implications, the university said it had nothing to add.

At Seoul National University, cases of AI misconduct were first identified during a midterm exam for a statistics course in October last year.

Additional incidents of online cheating surfaced during final exams in other courses, despite tighter oversight measures, such as a system designed to track off-screen activity during exams.

In response, the university announced new AI guidelines on Thursday that allow the use of AI tools in principle, while placing responsibility for AI-generated output on the user.

Under the framework, instructors are given discretion to determine whether AI use is permitted in their courses, with students facing penalties for academic ethics violations if they use AI in ways explicitly prohibited by faculty.

The university was not immediately available for comment when asked about QS’s statement that AI-related academic misconduct could have reputational implications.
