
A cautionary tale for academia – and everyone else

Artificial intelligence can be a powerful ally, but only if we cultivate the skills and habits that affirm our commitment to truth, discernment and verification. Graphic: John McCann/M&G

I recently sat in a departmental colloquium where students were defending their research proposals before a panel of academics. Anyone who has gone through this exercise will attest that defending your master’s or PhD proposal is, at best, a daunting and nerve-racking experience.

The task is simple in theory but difficult in practice. The panel expects the student to prove their proficiency in conducting the research and to show clearly the gap their proposed study addresses. All this within 10 to 15 minutes, before an audience in the room (mostly online nowadays) but also a wider audience sometimes referred to as the theory, policy and practitioner press. In the student’s corner (hopefully) are the watchful eyes and muted voices of their supervisor or supervision team, who themselves stand on trial before their academic peers. The result is a delicate dance, where the spoken word must align seamlessly with the written proposal.

As one student delivered their presentation, my attention was caught by their mention of an article allegedly authored by me and published in the Journal of Business Ethics. A quick glance at their supporting documents confirmed my worst fear: I have never published a paper in that journal. Further to this, I don’t even research or write in the field of business ethics.

So, what had happened?

The student had fallen victim to what is now widely known as an AI hallucination. In simple terms, they had placed their trust in the output of an artificial intelligence tool, which generated what looked like credible information about their topic, and about me, but which was in fact fabricated.

For the student, the AI-generated information seemed real. It said all the “right” things and cited the kind of references a proposal defence panel would expect to hear and see. Yet the result was false, misleading and nonsensical. What was missing was the critical process of verification that should have happened long before the student could be deemed ready to take part in the proposal defence.

What we saw here was a double-layered false confidence. First, the false confidence of the AI itself, which confidently made connections based on user prompts, some factual, others wholly fictional. Second, the false confidence of the human user, who presented AI hallucinations as fact, without adequate scrutiny, driven perhaps by the desire to impress a panel at all costs.

What happened to the student?

I will come to that last, because what happened to us as supervisors was equally instructive and worth reflecting upon. The experience (including the imaginary Journal of Business Ethics paper) became, for me, what sociologist Charles Horton Cooley called the “looking-glass self”. I began to see aspects of myself and my supervision practice through the mirror held up by the student’s mistake.

I prefer to describe what the student did as a mistake, rather than a punishable offence or, as one leading survey in the United Kingdom called it, a violation of academic integrity. This incident sparked months of reflection for me.

In a sobering way, I realised that my own experience with AI was not so different from the student’s. Like our students, we supervisors are also searching for timely information to meet pressing demands. Like our students, we too struggle under the weight of information overload, turning to tools like AI to help us navigate the maze. And, like our students, we must also develop and exercise a critical eye in the face of what may appear to be technological progress.

How did we respond as supervisors?

For starters, given the growing popularity of AI among our students, some of us as supervisors felt the need to use such technology ourselves, to stay abreast of changes in the academic and professional landscape. It meant moving out of our comfort zones into spaces of discomfort, just to keep pace with what is happening.

Some supervisors were quick to praise the functionality AI offers. For instance, using an AI tool to analyse large amounts of data in a short space of time was seen as a significant benefit. Others highlighted how AI could help students develop their writing and critical thinking skills, provided that students’ own voices remained central to the work, rather than being drowned out by machine-generated content.

We are truly living at the height of a technological moral panic, a time when our ability to exercise our executive functioning skills is being eroded precisely when we need them the most. It is a period in which voices of falsehood are legion, spreading at the mere click of a button, often without verification or reflection. Yet, this is also the very moment when we must be most vigilant and rise to the task of cultivating the skills and habits that affirm our commitment to truth, discernment and verification.

Through the experience of watching students present their research proposals, we came to realise that our struggles are, in fact, the same; they just take different forms. As supervisors in our department, we embarked on a month-long dialogue with our students, acknowledging and praising the benefits of AI while also cautioning them about the dangers of AI hallucinations. Our hope is that this process proves beneficial for everyone involved. This benefit is anchored in helping students, supervisors, the university and ultimately society at large to achieve success rooted in both innovation and integrity.

AI can be a powerful ally, but only if we, both students and supervisors, treat its outputs as a starting point for inquiry, not the final word.

Professor Willie Chinyamurindi is in the Department of Applied Management, Administration and Ethical Leadership at the University of Fort Hare. He writes in his personal capacity.
