
AI misuse in Indian colleges and how professors are tackling it

When creative writing professor Pranav V.S. in Bengaluru congratulated his student over text about a cultural performance, he received a reply that began with: “You can express your gratitude with a simple note or message. Here’s a suggestion: Thank you so much, Sir, for your kind words…”

His heart sank.

“Can college students these days not even compose a personal thank-you message without getting help from artificial intelligence (AI) tools?” he wondered. 

Prof Pranav, who teaches at St Joseph’s University in Bengaluru, is not alone in his despair over the use of AI by students.

Even as most Indian universities forbid using generative AI for assignments, students are drawn to the time-saving, modern tool, many professors told The Straits Times.

They said that students write entire essays and e-mails, as well as produce presentations, using generative AI. Instead of taking class notes and reading books and technical modules, they feed PDFs of the material into their favourite AI tools to generate three- or four-page summaries.

Professors worry that students’ use of AI disrupts the learning process, fosters impatience with studying and diminishes the importance of writing and reading.

Prof Pranav said it also drives a wedge between teachers and students. 

“It’s a shortcut for students. But for teachers, it’s more laborious to separate the AI content from the students’ content. The joy of teaching is gone,” said Dr Adil Hossain, who teaches history and sociology in Azim Premji University in Bengaluru.

Independent thinking is vanishing

Assistant Professor Ananya Mukherjee of Azim Premji University in Bhopal, who has taught biology for nine years, said that even though she picks contentious topics such as genetics and reproductive choice to encourage students to voice their own opinions, many still use AI tools such as ChatGPT and Gemini to come up with talking points for class discussions.  

“Independent thinking, which is the whole point of science, is getting lost,” she said. 

Assistant Professor Prem Sagar, who teaches computer applications at St Joseph’s University in Bengaluru, told ST that he often has to balance teaching his technical students how to build and train AI models for the future against discouraging their misuse in the present. 

“AI is good at debugging errors and completing code, but when students depend on it entirely, their logical reasoning – which is what programming is all about – takes a hit,” he said.

Still, students defend their use of AI as natural and inevitable. 

“What’s wrong with using an efficient way to learn?” asked computer engineering graduate Tejas P.V., 22. 

“AI saves time. It helps us research by locating references. For lengthy, boring 120-page documents that professors assign us, AI helps to identify the crucial 30 pages for us to focus on,” he added. 

But while Mr Tejas said he used AI largely for research and expanding his “own short points into full sentences”, he admitted that he, too, has generated “entire AI essays in high-credit subjects” that he felt were “not important”.

Ms Keerthana S., 21, who is pursuing a bachelor’s degree in environmental science in a Bengaluru college, said that “ChatGPT is always a temptation”, especially when deadlines are close. 

In a group project to calculate the carbon footprint of shops within the college neighbourhood – a hyperlocal assignment her teacher had clearly crafted to force the students to avoid AI tools – Ms Keerthana attempted to use ChatGPT to generate what she called “a cool introduction”. 

“But the language was so technical, jargon-filled and so unlike my writing that I decided to write the introduction myself,” she said, adding that generative AI’s high energy consumption also gives her second thoughts.

According to some estimates, interactions with AI tools such as ChatGPT could consume 10 times more electricity than a standard Google search.

Another engineering student said that he often uses AI tools to complete his programming code because, as he explained, “even after I get a job, my bosses are not going to expect me to waste time on manually doing these basic things.” 

According to English professor Greeshma Mohan, students use AI because of insecurities that their own writing and ideas are not good enough, and because AI “sounds fancier”.

Teaching in an English-medium college in Bhopal in central India, where many students come from Hindi-medium schools, Prof Mohan said she empathised with their anxiety.

However, she is worried that “if they didn’t experiment without the use of AI and get things wrong”, they would never learn. 

Even after she said she would accept fragmented sentences and inconsistent tenses as long as the work was the students’ own, many were “already too dependent on AI to stop using it”.

“How can I help a student whose mistakes I never see? Then what am I doing here as a teacher?” she asked. 

See if you can penalise me for using AI

The fear of repercussions is often the only thing deterring students from using AI, many teachers said.

Most major Indian universities require every instance of AI misuse to be reported, but as the scope of generative AI is still evolving, professors are also granted the flexibility to determine appropriate disciplinary measures. 

Some faculty members are strict and will fail a student. Some ban all gadgets in class and assign only handwritten essays.

Others permit grammar corrections or AI-assisted research, while some require students to rewrite their essay multiple times until it is completely AI-free.

Most Indian universities use plagiarism trackers, and software such as Turnitin and the Indian-developed DrillBit now also detect AI use, but these tools are not foolproof.

In November 2024, law student Kaustubh Shakkarwar sued O.P. Jindal Global University in northern India’s Haryana state over being failed for allegedly using AI-generated content in an assignment on law and justice in the globalising world.

Claiming that he had done all the research himself, the student questioned the accuracy of the university’s Turnitin plagiarism detection software, also powered by AI, and said it had a history of generating false positives. 

The university finally issued Mr Shakkarwar a new academic transcript and revised its decision to fail him.

A practising lawyer today, he is ready to offer pro bono representation to “any student who wants to sue their college over AI use”.

However, many professors said they have often detected AI use, even in cases where detection software had not. Common indicators included the use of em dashes, sentences beginning with “that being said” and “all things considered”, and essays with a balance of opinions that seemed, well, artificial.

Some students also use humanising software like BypassGPT, WriteHuman and QuillBot to make AI-generated text read naturally and human-like, but many Indian students told ST that the best services were not affordable.

Most of all, teachers said they could tell if AI was used because they knew their students. 

“All of a sudden, a student writes fascinating prose. Who are they kidding?” asked Bengaluru-based AMC Engineering College Professor Pallavi K.V., who now conducts oral quizzes on the students’ own written assignments to determine if they have even read their AI-generated work before submission. 

Indian professors are now devising assignments and pedagogical innovations to subvert the use of AI. 

One anthropology lecturer asks for audio recordings of field interviews; a law professor crafts simulation exercises inspired by landmark cases; many others set live handwritten exams that students hate because they struggle to write longhand.

Dr Swathi Shivanand, who teaches historiography at the Manipal Academy of Higher Education in Bengaluru, said: “I suppose I have more failures than successes (at weeding out AI use).”

An effective assignment she devised involved asking students to imagine a dialogue between two historical figures.

Professors suggested that the key to escaping AI is to make assignments as personal and imaginative as possible. 

In Prof Pranav’s writing class, during a workshop session on horror stories, including those written by AI, the standout piece was an original story set in the college with characters named after some of the students. 

Ms Keerthana recalled “a brilliant assignment” – one that few classmates used AI for – in her environmental impact assessment class, where a teacher asked them to map all the processes and components that went into making a sewing needle.

Optimal use of AI

Recognising the use of AI as inevitable, some professors are upskilling themselves to stay a few steps ahead of their students.

For instance, Assistant Professor Arpitha Jain, who teaches English at St Joseph’s University in Bengaluru, said she gave her students printed copies of prescribed non-fiction readings. But, turning the tables, she used ChatGPT to generate multiple-choice questions for them to answer.

“They hated me for it, but, later, some of them applied this method (of generating short questions from long texts) to study other subjects closely,” she said.

Prof Sagar now trains other faculty in his university to use AI not just to build presentations and create lesson plans, but also to evaluate students and give more granular feedback, using data analytics tools that can spot performance patterns and identify the modules a student is weak in. 

Prof Pallavi, concerned that her students were unable to tell when AI content was “wrong, biased, hallucinating or actually harmful”, said she now advocates responsible, conscious use of AI.

She showed how, when job recruiters used data-driven AI models to scan resumes, the technology’s inherent gender and racial biases resulted in men being selected over women at a higher rate for software developer roles. Using the example of the viral Ghibli image trend, she also warned her students about uploading their photos for a moment of fun without realising how they may be putting their personal data at risk.

Dr Rahul Dass, a former journalist who now teaches at Mahindra University in Hyderabad, recently asked his class to give five different prompts to ChatGPT, Gemini and Copilot to generate an article about a major fire breaking out in a city. 

“The AI outputs all described the fire, but none of the articles began with the number of people dead and injured, as a journalist would have. I want students to understand these kinds of gaps in using generative AI,” he said. 

  • Rohini Mohan is The Straits Times’ India Correspondent based in Bengaluru. She covers politics, business and human rights in the South Asian region.

