
‘Developing a reliable AI system requires a lot of hard work, careful engineering and a deep understanding of the problem domain,’ says DCU’s Dr Sunder Ali Khowaja.
Dr Sunder Ali Khowaja is a strong believer in openness and transparency. “I believe in the importance of open science and making my research accessible to a wider audience,” he tells SiliconRepublic.com.
“As researchers, we have a responsibility to communicate our findings clearly and accurately to the public and to help people distinguish between credible and non-credible sources of information.”
Khowaja grapples with this issue of transparency every day in his work as he tackles the ‘black box’ of AI models and questions of data privacy and ethics.
He is part of a team of international researchers who, earlier this year, trialled EdgeAIGuard, a content moderation tool designed to protect minors from online grooming. The tool uses three new agentic large language models (LLMs) to detect threats, analyse contextual information and intervene with protective actions where appropriate.
Khowaja has a bachelor’s degree in telecommunications engineering and a master’s degree in communications systems and network engineering. He completed a PhD in industrial and information systems engineering at Hankuk University of Foreign Studies in Korea, where he focused on ambient intelligence and affective computing. He then worked at universities in Korea and Pakistan, before taking up a position at Dublin City University, where he is now assistant professor in the School of Computing. He is also an investigator at Connect Research Ireland Centre for Future Networks and an academic collaborator at Adapt Research Ireland Centre for AI-Driven Digital Content Technology.
Here he tells us more about his research.
Tell us about your current research.
My current research is at the exciting intersection of AI, privacy, computer vision and agentic AI.
My collaborative research group is focused on developing new techniques for privacy-preserving machine learning, LLM security and federated learning that can operate on edge devices. This is a critical area, as we increasingly rely on AI systems that are trained on vast amounts of data, some of which can be very personal and sensitive.
We are exploring novel methods to train AI models without compromising the privacy of the individuals whose data is being used.
One of our key projects involves developing new defences against model inversion attacks, where an adversary tries to reconstruct the private training data from a trained model.
We are also very interested in the broader area of responsible generative AI and sustainable AI. We are working on creating AI models that are not only accurate and efficient but also fair, transparent and ethically sound.
My collaborative research team includes researchers from Ireland, the US, South Korea, Spain, the UK, Taiwan, the UAE, China, India and Pakistan.
In your opinion, why is your research important?
The rapid advancements in AI are transforming our world, but they also bring new challenges.
My research on privacy-preserving machine learning is crucial for building trust in AI systems.
Without strong privacy guarantees, people will be reluctant to share their data, which will, in turn, stifle innovation.
I believe my work will have a significant impact on various domains, including healthcare, finance and social media. For example, in healthcare, our privacy-preserving techniques could enable hospitals to collaborate and train more accurate diagnostic models without sharing sensitive patient data.
In the long run, I hope my research will contribute to a future where AI helps build the capacity of the Irish workforce to use AI methods in a responsible, secure and ethical manner.
What inspired you to become a researcher?
I have always been fascinated by how things work. That curiosity about the mechanics behind things led me to pursue a degree in engineering. During my postgraduate studies, I was introduced to the world of research and was immediately hooked.
I realised that research is not just about finding answers but also about asking the right and innovative questions.
The thrill of discovering something new and the potential to make a real-world impact are what motivate me every day.
I do not have a single ‘spark’ moment, but rather a series of experiences that have fuelled my passion for research.
What are some of the biggest challenges or misconceptions you face as a researcher in your field?
I would say two of the biggest challenges I face as a researcher are the ‘black box’ problem and the ‘AI security’ problem.
Many state-of-the-art AI models are complex and it’s difficult to understand how they arrive at their decisions. This lack of transparency can be a major obstacle to their adoption in high-stakes applications such as healthcare.
A common misconception is that AI is a magic bullet that can solve any problem. In reality, developing a reliable AI system requires a lot of hard work, careful engineering and a deep understanding of the problem domain.
The ‘AI security’ problem is one of making stakeholders understand that AI models themselves are vulnerable to attack, which is equally serious as, if not more serious than, the data security problem.
Another challenge is the constant need to stay up to date with the latest advancements. The field of AI is moving at a breakneck pace, and it’s a full-time job just to keep up with the latest research.