Giselle Reis, Associate Teaching Professor at Carnegie Mellon University in Qatar (CMU-Q)
Doha, Qatar: The next generation of artificial intelligence professionals will need more than technical know-how: they must be critical thinkers who understand the social consequences of the tools they build, says a Qatar-based academic expert.
Giselle Reis, Associate Teaching Professor at Carnegie Mellon University in Qatar (CMU-Q), insists on strong technical foundations, emphasising the responsibility future AI professionals have to build, assess, and challenge AI systems with informed confidence.
Speaking to The Peninsula recently, Reis said that beyond technical mastery, she believes AI professionals must also possess “sharp critical thinking skills to reason how a certain technology may impact society in different ways,” and be willing to include expertise from other disciplines before deploying systems at scale.
Qatar, she said, offers fertile ground for such responsible innovation. “There has been an increased interest from various sectors in Qatar to integrate AI in their activities,” Reis said. “From ministries aiming to streamline processes, hospitals wanting to aid doctors in diagnoses, and museums creating interactive exhibitions, I have seen several interesting applications for AI.” This momentum, she added, is reinforced by Qatar’s National Vision 2030 and the Ministry of Communications and Information Technology’s National AI Strategy, both of which aim to position the country as a leader in the region. As a result, “tech companies came to the country, and various AI-related positions were created in local institutions.”
Against this backdrop, CMU-Q’s new Bachelor of Science in Artificial Intelligence program is designed to develop graduates who can meet the country’s fast-evolving needs. Reis emphasises that the degree stands apart from many AI programs because it focuses on foundational knowledge before tools and applications. “Instead of focusing on applications and current tools, the degree is designed to enable students to understand what goes on inside the AI ‘black box’,” she said. This approach allows graduates not only to use AI systems effectively but to critique, modify, and advance them. “They will be in a position to easily understand and learn new techniques and tools, which are released at a faster pace each year,” she added.
The curriculum also ensures students engage deeply with ethics and societal impact. “About a third of the students’ courses are dedicated to ethical and social understanding,” Reis said.
All students take a course on AI and ethics, complemented by a wide range of humanities and science courses that help them understand culture, society, and human decision-making. Even technical courses integrate concepts such as bias in data, privacy, and security, issues that are central to the public conversation around AI.
Reis believes CMU-Q’s program will play a direct role in advancing Qatar’s national AI ambitions. “First, CMU-Q’s AI graduates will be highly qualified professionals who can lead the development of AI solutions for the country,” she said. Faculty expertise, she added, also opens doors for partnerships with local institutions seeking to integrate advanced AI into their work. “Finally, we can provide training in various aspects of AI for the general public by holding community workshops and executive education programs.”
Given AI’s increasing relevance across sectors, CMU-Q expects graduates to pursue diverse career paths in Qatar and beyond. Reis points to recent projects involving students and faculty: diagnosing diseases in crops, optimising solar panel cleaning, automating financial trading, assisting teachers and learners, and developing tools to analyse medical images. “The pervasiveness of AI means that most domains can benefit from the technology,” she said.
Still, she cautions against inflated expectations around AI capabilities. “Even though AI can seem very smart at first, we need to keep our expectations in check and understand that what it is doing is not actually ‘thinking’,” she said. Because modern algorithms learn patterns from human-generated data, they are vulnerable to error and bias. For this reason, she stresses the importance of public education and regulation.
“In the short term, the public needs to be educated about potential pitfalls and the field should be regulated to avoid the ill-use of AI tools,” she said. In the long run, she argues, societies must invest in strengthening independent and critical thinking so people can better judge the outputs of emerging technologies.
