The release of ChatGPT in late 2022 signaled a tipping point for artificial intelligence, bringing it into the public consciousness at a scale we hadn’t seen before. Two years later, generative AI is everywhere, and its impact on how we live, work and learn is unfolding all at once. In that same timeframe, I’ve observed a transformation — not just in technology but in how we understand knowledge, autonomy and learning itself.
Generative AI isn’t one thing or the other; it’s both. It has the power to erode autonomy, but it can also enhance it. It can simplify tasks, but it can also deepen thinking. As with so many other transformative technologies, AI’s impact depends on what we choose to do with it. This realization has brought us to a crucial moment. The period of discovery is behind us, and we are now entering a phase where decisions must be made: security, ethics, access, just about everything is on the table.
A recurring theme I’ve encountered is the reluctance of both educators and students to openly discuss their use of AI. For many, engaging with AI feels like a guilty pleasure, a shortcut that must be kept hidden. I’ve spoken to faculty who quietly use AI to streamline lesson planning or spark new ideas for research but hesitate to share these practices with colleagues. Similarly, students may hold back from admitting they’ve used AI to enhance their learning or support their assignments, fearing accusations of cheating or laziness.
This shame fosters a culture in which innovation is stifled and opportunities for meaningful collaboration are missed. Even more troubling, by ridiculing students’ use of AI, we risk teaching them that the tools they will need to navigate the real world are inappropriate in the classroom. How can we expect them to reconcile that mixed message? This lack of transparency undermines the spirit of exploration and learning that should define education.
Yet the deeper question is: Why do we feel this way? Is it because generative AI challenges us in profound ways? Unlike tools of the past, it forces us to reconsider how knowledge is acquired and who gatekeeps or facilitates that acquisition. It may challenge our epistemological beliefs about learning. Traditionally, learning has been about the accumulation of expertise and the eventual mastery of a subject. Now AI offers a shortcut to knowledge, disrupting that model and reshaping the process of learning. How that shortcut works, however, remains an open question.
The disruption is as unsettling as it is exciting. Faculty and institutions alike are grappling with how to balance ethics and practicality. Some are quick to call for restrictions, while others race to embrace the possibilities. This debate isn’t new; it echoes the early days of the internet. Could we have banned the internet when it first emerged? Imagine what we would have lost. Generative AI, like the internet, is not optional. It’s foundational to the world our students will inherit.
But as higher education debates AI’s ethical implications, another critical voice is missing: the working world. While colleges and universities wrestle with policies and curricula, industries are adapting AI to reshape their processes, innovate responsibly and keep pace with societal change. Higher education is often siloed from these conversations, and that disconnect has consequences. Aligning educational initiatives with industry needs isn’t about succumbing to market pressures; it’s about ensuring students are prepared for meaningful participation in the world they will enter.
At the same time, higher education is rushing to create AI centers and initiatives. These efforts, though often well-intentioned, can feel reactive — driven more by enrollment pressures than by thoughtful alignment with institutional missions. One thing has become clear: traditional strategic planning, which can take a year or more, is ill-suited to the pace of generative AI’s evolution. We need living, flexible strategies that adapt as technology and its societal implications unfold.
The same applies to education policy. Can we reimagine policies as frameworks that evolve with our needs rather than as rigid rules? This shift demands that we see education not as a fixed institution but as the starting point for meeting the challenges of tomorrow. The university of tomorrow must be a pliable place — a scaled-up lab in support of human intellect.
Generative AI is hinting at new vistas for learning and applying knowledge. Amid that shift, there is a growing belief that the value of education will withstand these changes. If AI proves to promote reductive thinking, spread disinformation or produce unreliable results, educators will not support it.
There are lessons here for us all. First, we must embrace openness. The shame and stigma surrounding AI use only hinder our ability to innovate. Second, we must become more agile, rethinking how we plan and adapt in a fast-changing environment. And finally, we must resist binary thinking. AI is not inherently good or bad; its value depends on how we use it.
Generative AI is not just a tool; it is a force reshaping how we live, work and learn. By confronting its challenges thoughtfully and engaging with its possibilities openly, we can ensure that AI becomes part of a methodology that strengthens education and amplifies human potential — without losing sight of the heart of our humanity and intellectual integrity. It is time to open the black box and find our path forward in the process of learning.
Jessica A. Stansbury (jstansbury@ubalt.edu) is the director of teaching and learning excellence at the University of Baltimore’s Center for Excellence in Learning, Teaching and Technology. Her research focuses on innovative teaching methods, perceptions about teaching and learning, and emerging technologies.