PETALING JAYA: Generative AI may be fast, confident and seemingly intelligent – but blind trust in its answers can dull your thinking and spread misinformation, experts warn.
Universiti Malaysia Sarawak lecturer Chuah Kee Man said tools like ChatGPT may produce sophisticated responses but their outputs are not always reliable, nor are they based on genuine understanding.
“These models don’t really ‘think’ – they ‘predict’,” said Chuah, who specialises in educational technology and computational linguistics.
“They estimate the most likely response based on training data. That’s why you rarely get the same answer twice,” he said in an interview.
Even with browsing and fact-checking features, Chuah said AI still retrieves and summarises content without comprehension.
He said the polished nature of AI-generated text can mislead users into mistaking fluency for factual accuracy.
“In workshops, I often see people assume something must be true because it sounds sophisticated.
“But AI can confidently present outdated or false information. Its speed trains people to verify less and think less critically,” he said.
Chuah also cautioned against assuming that using AI equates to understanding how it works.
“Even experts are still trying to unravel how large models arrive at certain outputs, the so-called ‘black box’ problem.”
To use AI wisely, Chuah said users should view its output as a starting point and not as a conclusion.
“Stay curious but cautious. Treat AI as a helpful assistant, not an authority.
“Develop ‘prompt literacy’ because learning to phrase prompts well reduces the risk of being misled,” he said.
He added that image and video generators are equally prone to flaws, as they too assemble visuals based on probability.
“If we don’t blindly trust humans, we shouldn’t blindly trust machines either,” he said.
Assoc Prof Dr Geshina Ayu Mat Saat, a criminologist and psychologist at Universiti Sains Malaysia, said people are psychologically inclined to trust confident and structured answers, even from machines.
“This stems from cognitive biases, social conditioning and evolutionary traits.
“Authority bias, cognitive ease and the illusion of understanding all contribute,” she said.
Geshina said fluent and assertive AI responses often mimic traits associated with expertise, triggering automatic trust even when the content is flawed.
“People fear uncertainty. A confident AI answer gives psychological relief.
“Our brains prefer smooth, simple explanations to complex or ambiguous ones,” she said.
To counter this, Geshina recommended a “triangulation” mindset: accept an AI response only when it aligns with at least two independent, credible sources.
She also encouraged delayed judgment, source awareness and failure literacy.
Alex Liew, chairman of the National Tech Association of Malaysia (Pikom), voiced similar concerns, saying AI tools rely heavily on training data that can include false or biased information found online.
“AI isn’t inherently smarter than humans. It processes data using fixed rules, which makes its answers sound polished but not necessarily correct,” he said.
Liew said Pikom recently published a paper on AI Ethics and Governance, urging industry-wide accountability.
“AI helps us process massive data but it should never be the final arbiter. That role still belongs to humans.”
Prof Dr C. Sara of Universiti Teknologi MARA said that despite the risks, generative AI has practical strengths when used responsibly.
“AI can generate articles in minutes and assist in producing large volumes of content, including personalised social media posts.
“AI tools can also help with language localisation, keyword suggestions for search engine optimisation and overcoming writer’s block through idea generation,” she said.
Sara, however, stressed the importance of accuracy.
“To avoid spreading misinformation or damaging your brand, cross-check AI content with trusted sources.
“Look for citations, spot inconsistencies and consult experts for niche topics,” she said.
Sara said that while AI is here to stay, human judgment must ultimately remain a constant.