
AI literacy as a subject area has percolated to all levels of education. The key skills identified, from elementary (primary) school to adult education, overlap significantly, even though each comes with an “age-appropriate” qualification. For example, advocates of AI literacy for K-12 education list the need to “understand how AI works, how it can be used to solve problems, how to evaluate and creatively engage with AI, and the potential social impact of AI”. These skills are not so different from those identified in frameworks for higher education, which generally include:
- an understanding of how AI works
- using and applying AI ethically to lead meaningful lives
- analysing and evaluating AI for bias and accuracy
- the ability to create AI solutions for a given problem and contribute to the world.
A key question we therefore need to ask is: what would “age-appropriate” AI literacy look like for university students, who are expected to be contributing members of society and industry upon graduation?
The consensus is that AI literacy will be an essential requirement for the future world of work, so students will need a range of skills beyond basic competency. Recent studies, such as one conducted by LinkedIn, suggest that prompt engineering is a skill in demand. However, prompt engineering is specific to current variants of generative AI and is just one of many skills that AI-related technology requires. Furthermore, agentic AI (also known as autonomous AI) appears to be the next major development and will require people to design AI tools and enhancements specific to their organisation’s needs.
AI for learning
Many faculty around the world have deployed generative AI learning assistants for their courses. For example, we have Project NALA (NTU AI Learning Assistants), where faculty can build their own chatbots based on their course’s pedagogical needs. While the intent is to enable faculty to design and control learning experiences with generative AI, we have also noticed its potential to uncover and develop AI literacy in students. One of our studies reveals that even when a generative AI learning assistant is designed to be Socratic, the way students interact with it produces significant differences in learning outcomes. Students who raise questions about fundamental concepts and solicit more examples from the AI learning assistant show significant learning gains. Based on this finding, we can educate students on how to engage with AI for better learning outcomes.
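To make this concrete, the core of a Socratic configuration can be sketched in a few lines. The sketch below assumes the OpenAI Python SDK; the system-prompt wording and model name are illustrative placeholders, not Project NALA’s actual set-up.

```python
# A minimal sketch of a Socratic learning assistant, assuming the
# OpenAI Python SDK. The prompt wording and model name are illustrative
# placeholders, not taken from Project NALA.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SOCRATIC_PROMPT = """\
You are a Socratic tutor for an undergraduate course.
Do not give final answers directly. Instead:
1. Respond with a guiding question aimed at the student's misconception.
2. Offer a worked example only when the student explicitly asks for one.
3. Prompt the student to restate fundamental concepts in their own words.
"""

def ask_tutor(student_message: str) -> str:
    """Send one student turn to the assistant and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": SOCRATIC_PROMPT},
            {"role": "user", "content": student_message},
        ],
    )
    return response.choices[0].message.content

print(ask_tutor("Can you give me an example of a conserved quantity?"))
```

Even in this toy form, the configuration makes visible why the productive moves our study identified, probing fundamentals and soliciting examples, are the very behaviours such an assistant is designed to reward.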
In addition, the way students construct their prompts may result in a non-response. For example, one of our students received no response from the learning assistant when he requested a “cheat sheet”. Had he requested a “list of formulas”, he would have received the desired response.
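How such a guard rail might work can be shown with a deliberately simplified sketch. Enterprise implementations typically rely on moderation models or policy layers rather than a keyword list; the blocked phrases and refusal wording below are hypothetical.

```python
# A deliberately simplified guard rail for illustration: real enterprise
# GenAI systems typically use moderation models or policy engines rather
# than keyword lists. The phrases and refusal wording are hypothetical.
BLOCKED_PHRASES = ("cheat sheet", "exam answers")

REFUSAL = ("I can't produce that, but I can help you study. "
           "For example, I can compile a list of formulas.")

def apply_guard_rail(prompt: str) -> str | None:
    """Return a refusal if the prompt trips the guard rail,
    or None to let it proceed to the model."""
    lowered = prompt.lower()
    if any(phrase in lowered for phrase in BLOCKED_PHRASES):
        return REFUSAL
    return None

# A "cheat sheet" request is blocked; a "list of formulas" request passes.
print(apply_guard_rail("Make me a cheat sheet for the midterm"))  # refusal text
print(apply_guard_rail("Give me a list of formulas"))             # None
```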
This is a learning opportunity to share with students the guard rails built into some enterprise GenAI implementations. Finally, one of our faculty learning communities has also used NALA to teach prompting skills and ethics as part of AI literacy.
AI as learning
AI learning assistants provide opportunities for students to gain an understanding of AI while they learn. However, such uses alone do not help university students develop higher-order AI literacy.
Some researchers have mapped levels of AI literacy onto Bloom’s Taxonomy. At the highest levels, students would evaluate and create their own GenAI applications. These competencies match the earlier point about designing AI tools and enhancements for the future workplace.
Like platforms developed at other universities, Project NALA offers a front-end interface (known as the builder) for faculty to create their own learning assistants. One idea is to open the builder up to students so they can create their own GenAI assistants as part of our AI literacy curriculum. As they design, configure and test their own assistant, they will learn first-hand how generative AI works. They can test performance-enhancement approaches beyond prompt engineering, such as grounding the learning assistant in curated materials (retrieval-augmented generation), as well as more advanced techniques such as incorporating knowledge graphs.
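As one illustration, the retrieval-augmented generation step can be sketched compactly. The snippet below assumes the OpenAI Python SDK for embeddings and chat completions; the course snippets, model names and grounding prompt are placeholders rather than Project NALA’s implementation.

```python
# A minimal sketch of retrieval-augmented generation (RAG): retrieve
# the most relevant curated snippets and ground the answer in them.
# Assumes the OpenAI Python SDK; documents, model names and prompts
# are illustrative placeholders.
import numpy as np
from openai import OpenAI

client = OpenAI()

# Curated course materials (in practice, chunks of lecture notes).
DOCUMENTS = [
    "The first law of thermodynamics states that energy is conserved.",
    "Entropy measures the number of microstates of a system.",
    "The Carnot cycle sets the upper bound on heat-engine efficiency.",
]

def embed(texts: list[str]) -> np.ndarray:
    """Embed texts as vectors for similarity search."""
    response = client.embeddings.create(
        model="text-embedding-3-small", input=texts
    )
    return np.array([item.embedding for item in response.data])

DOC_VECTORS = embed(DOCUMENTS)

def answer(question: str, k: int = 2) -> str:
    """Ground the assistant's reply in the k most relevant snippets."""
    q = embed([question])[0]
    # Cosine similarity between the question and each document.
    scores = (DOC_VECTORS @ q) / (
        np.linalg.norm(DOC_VECTORS, axis=1) * np.linalg.norm(q)
    )
    context = "\n".join(DOCUMENTS[i] for i in np.argsort(scores)[::-1][:k])
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Answer using only these course materials:\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(answer("What limits the efficiency of a heat engine?"))
```

Swapping the retrieval step for a knowledge-graph lookup, or varying the number of retrieved snippets and the curated corpus, gives students concrete levers whose effects they can then measure.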
The learning activities can also include evaluative studies of how these techniques enhance the GenAI’s performance and of how ethical principles can be better implemented. Faculty would have access to the changes students make to their learning assistants and could give feedback, further developing students’ AI literacy. In fact, this could become a students-as-partners model, in which faculty learn alongside the students.
AI literacy is a skills progression
AI is a transformative technology that is here to stay. As universities support students in building AI literacy, institutions and educators need to reflect on whether their AI literacy programmes remain elementary or support students in developing the higher-order skills they will need for the workplace. Students should have the opportunity to analyse, evaluate and create responsible AI solutions. Offering them the chance to build their own AI assistants could be a way forward in developing these much-needed skills.
Fun Siong Lim is head of the Centre for the Applications of Teaching & Learning Analytics for Students (Atlas) in the Institute of Pedagogical Innovation Research and Excellence (InsPIRE) at Nanyang Technological University, Singapore.