If one were to solicit a list of the developments most often on the minds of educators today, it’s almost certain that artificial intelligence would be near the top. After all, AI innovations like ChatGPT promise great potential to transform learning in the 21st century while also portending immense risks. Can AI reshape higher education to put more agency in the hands of students? Perhaps. Will reliance on machine learning lead to the stagnation of critical thinking skill development among college graduates? Quite possibly.
It is in this context that Gov. Glenn Youngkin directed his administration in September to examine AI more closely. Specifically, Youngkin’s directive asks what steps might be taken to make the commonwealth a responsible and successful incubator for AI adoption. For higher education, the idea is to prohibit harmful uses of AI (e.g., cheating) and to build robust guidelines for the use of AI tools and the development of AI programming.
Naturally, a flurry of activity is now underway at Virginia universities and colleges, the backdrop for which is some hope about AI, no small amount of skepticism and a serious downturn in both budgets and enrollment statewide. Virginia’s newfound AI focus, in other words, is the stuff of interesting times.
The key question emerging from this flurry of activity is not easily answered: Can Virginia’s colleges and universities harness AI to transform higher education for the better?
On the one hand, a deluge of new generative tools is beginning to act as a force multiplier for educators, opening new avenues to deliver accessible content and experiences to students with a limited commitment of resources. On the other hand, many of these tools remain unreliable and even create new opportunities for cheating.
Add to this the “bigger” questions involved — for instance, what kinds of investments and collaborations will lead to AI innovations for higher education beyond these early tools? — and the path forward is understandably unclear.
For Virginia’s higher education ecosystem, optimal innovation around the potential of new AI technologies is most likely to be found with those institutions where two conditions prevail. First, where open, interconnected structures exist to promote the flow of information about AI developments. Second, where prevailing ideas about the provision of education resonate with incoming information about new potential technological possibilities. This is the argument of a recent book I co-authored alongside two military officer-scholars on how large organizations react to and innovate around emergent information technologies.
Our examination understands technology innovation as a kind of communication process. Disruptive technology like AI (not unlike the telegraph, radar or the internet) opens new pathways within which information can flow and the objectives of an institution might be realized, leading to the transformation of both institutions and their missions.
But new potential often goes unrealized, not because the technology falls short but because the people, communities and organizations do. Insular institutions led by inflexible stakeholders tend to produce tribal visions of technology innovation, with fragmented and fragile ideas about how to use new capabilities that have little staying power.
And it is not enough to just cultivate collaborative institutions or to hire visionary leaders; both are required. Visionaries who lead insular organizations often produce concepts of new technology usage that are resilient but misguided.
The makers of BlackBerry devices, for instance, famously championed the mobile internet device revolution before the days of the smartphone but refused to move away from legacy features (like the physical keyboard) that weren’t user-friendly. The company’s post-iPhone decline is now legendary. And open organizations with inflexible leaders often produce something worse: cheap investments in numerous initiatives designed to see what sticks. Often, nothing does.
For Virginia’s universities, the danger here is all too real. Academia is almost intrinsically tribal in its disciplinary divisions. Not only does this tribalism fuel internal competition for relevance and student enrollment; it also leaves institutions particularly susceptible to the pressures of budget shortfalls and social change, both of which define the challenges of higher education today.
There is also, of course, much about which we might be optimistic, including successful consortiums between Virginia institutions in areas as diverse as cybersecurity, climate change and social equity. But there is little doubt that we are at a critical juncture for harnessing AI’s potential.
So what should be done? In short, a lot and a little. Virginia’s universities and colleges must move to take advantage of all AI presently has to offer without committing to high-level images of what might be. Over-commitment and over-investment rarely correspond to eventual innovation, but regularly produce ill-fitting approaches that are hard to retreat from.
The answer is cross-campus and inter-institutional collaborations that emphasize AI designed to fail, producing what are often called “attritable” outcomes: low-cost solutions for pressing higher education challenges that are built to be replaced or updated within a short time frame, say, under two years. Conventional investments in basic science and technology development and ongoing conversation about AI ethics must continue. But, otherwise, public institutions should fund only AI-related capabilities that have immediate classroom, research or service payoffs.
And this isn’t just about cost. It’s about letting bad ideas die in the crucible of experimentation and avoiding commitment to “exquisite” AI solutions for higher education challenges that are most likely to come with high costs and hard-to-exit contract commitments. If these conditions are embraced, Virginia stands to lead the country in the use of AI to transform higher education. If not, Virginia risks becoming an AI straggler.
Christopher Whyte is an assistant professor of homeland security and emergency preparedness at VCU’s L. Douglas Wilder School of Government & Public Affairs. He is author of several books on cyber conflict and co-author of “Information in War,” a recent book on AI and military innovation. Contact him at cewhyte@vcu.edu.