WITH artificial intelligence (AI) being applied in medical schools, students need to be taught to be sceptical and responsible users of these tools, emphasises Dr Manraj Singh Cheema.
“In clinical settings, the risk is even greater if a student blindly accepts an AI’s diagnostic suggestion without verifying it; that can lead to real harm,” the Universiti Putra Malaysia (UPM) Faculty of Medicine and Health Sciences senior lecturer in molecular toxicology told StarEdu.
Manraj, a recent recipient of the Teaching Award in the Health Sciences category, bestowed by the Higher Education Ministry at the 16th National Academic Awards, added that in medicine, the danger of automation bias – the tendency to trust AI output even when it is flawed – could mean missing a rare diagnosis because the algorithm was trained on common cases.
“To guard against this, we need better AI explainability, robust clinical validation, and, most importantly, human oversight.
“No matter how advanced the tool, the final responsibility must stay with the clinician. AI must augment, not replace, clinical judgement,” he stressed.
Pointing to overdependence as a major ethical concern in AI use, he said students may stop thinking critically if they lean too heavily on AI-generated answers.
“There’s also the issue of undisclosed use; submitting work that was largely written by AI without understanding it raises questions about authorship, integrity and competence,” he added.
The Young Scientists Network-Academy of Sciences Malaysia member, who leads national and Asean-level training on the responsible conduct of research, called on medical schools to do more to prepare students for an AI-influenced future.
“There’s growing awareness, but most curricula don’t yet include AI literacy or digital ethics in a structured way. That needs to change.
“AI is not future tech anymore; it’s already trickling into our hospitals and clinics. We should be equipping students not just to use it, but to question it, audit it and shape it ethically,” he said.
Manraj underscored the importance of critical thinking, clinical judgement and ethical discernment in the age of AI.
“Doctors need to become AI-literate – not necessarily coders, but competent users who understand how these systems work, what their limitations are, and how to question their outputs. We also need humility – knowing when to defer to the machine – and courage – knowing when to override it,” he said.
He highlighted that the real value of AI lies in students using it to challenge their understanding, not to shortcut their learning.
“If a student copies and pastes AI-generated content and submits it as their own thinking, then that’s a form of academic dishonesty. But if a student uses AI to generate ideas, clarify explanations and then builds on that with proper citation or disclosure, it becomes a legitimate tool.
“The ethical line is in whether you own the intellectual process,” he explained.
Manraj noted that at the national level, Malaysia now has both the Science, Technology and Innovation Ministry’s National Guidelines on AI Governance and Ethics and the Malaysian Medical Council’s Guideline on the Ethical Use of AI in Medical Practice.
“These documents lay out core ethical principles that doctors and institutions should follow, such as accountability, transparency and patient safety,” he said.
At the institutional level, however, there is a need for clear policies on disclosure, authorship, and acceptable AI use in academic work, Manraj asserted.
“For clinical training, there should be defined protocols for when and how AI tools can be used, how their outputs should be verified, and how students are assessed – not just for their tool usage, but for their judgement in using it.
“But more than policies, we need conversations – open, critical discussions that help students develop ethical instincts in a rapidly changing digital landscape. After all, ethics isn’t just policy; it’s practice,” he said.
To students, Manraj offered this advice: “Use AI like a whiteboard, not a crutch. Let it help you brainstorm, test ideas or explain tricky topics, but never let it do your work for you.
“If you’re writing an assignment, ask AI for structure or prompts, but write the content yourself. Always fact-check, and if you’re using AI in any significant way, be transparent about it. That’s part of being ethically accountable, and it shows you understand, not just copy.”
Serene, 17, a student in Perak, is a participant in the BRATs Young Journalist Programme run by The Star's Newspaper-in-Education (Star-NiE) team.
For updates on the BRATs programme, go to facebook.com/niebrats.
