
Hyderabad: AI-generated visuals of the recent Pahalgam terror attack, stylised to resemble digital artwork, have raised questions about how technology mediates public memory and violence.
“It’s disturbing,” said Christina Wiremu Brook, speaking exclusively to Deccan Chronicle during the Bharat Summit here on Friday. “We’ve seen this with Israel and Palestine, too. AI-generated images are being used at a high level and shared across media networks that people consider trustworthy.”
An AI ethics and education strategy expert with the Department of Education in New South Wales, Australia, Brook has tracked these images, and her concern is technical, ethical and pedagogical.
Telangana has recently begun introducing AI tools in primary education, and Brook offered her perspective on the subject, drawing on her experience in Australia. “The first step is for people to understand what AI is and what it is not,” she said. Such literacy, she explained, begins at home and must involve students, teachers, parents and community members alike.
“It is not a crystal ball. It does not solve all your problems,” Brook added. “The narratives that are bottled up in the algorithms are largely biased and discriminatory. Not because the technology itself is, but because the world that we live in is.”
This awareness, she argued, is especially vital when young people encounter content on social media platforms. “Right now, they are overwhelmed. They don’t have the tools to critique what they see.” She cited a recent ban in Australia that restricts those under 16 from using social media. “We want them to ask why the policy exists. That is part of the learning.”
New South Wales has also implemented a no-phone policy in schools. While that brings its own challenges, Brook believes it is a necessary trade-off. Schools are exploring other ways to incorporate technology in classrooms without personal devices.
On misinformation, and especially the difficulty of tracing AI-generated content, Brook suggested watermarking as a way to track images, while admitting that applying the same approach to text is more complex. “Some models are really capable of generating a human-like tone; detectors often fail 30 per cent of the time.”
Such misclassifications, she argued, are the result of a rushed integration of AI tools into society, and she consistently returned to the same remedy: media literacy. She did not, however, dismiss generative AI outright. “The issue is how it’s used and whether the reader is equipped to think through it.”
She also addressed the larger geopolitical concerns of algorithmic control, noting that the US, China and France have the sovereign capability to develop AI models at the highest level, a capability that carries influence, especially when algorithms are calibrated to suit national narratives. “As lawmakers, we need to work with these vendors. The weighting of algorithms, filtering systems, content engineering can’t be left unmonitored.”