Ever since the chat bot ChatGPT burst into public view in late 2022, students, professors and administrators have been woozy from a chaotic cocktail of excitement, uncertainty and fear. The bot writes poems, sonnets and essays. It also serves as a convincing debate partner on a seemingly unlimited number of subjects. Given that the natural language model earned passing scores on the evidence and torts portion of the bar exam, among other feats, some in academe fret that the technology may facilitate widespread cheating. Others see opportunity for accelerating discussions about reimagining teaching to help students write prose that differs from what machines can produce.
The artificial intelligence language model was released by OpenAI and is currently offered free as a research preview. It interacts with users in a conversational way, including by answering questions, admitting its mistakes, rejecting inappropriate requests and challenging false premises such as, “Tell me about when Christopher Columbus came to the U.S. in 2015.”
“This question is a bit tricky because Christopher Columbus died in 1506, so he could not have come to the U.S. in 2015,” ChatGPT replied in a sample presented on the OpenAI website. (The chat bot is in such high demand that, at the time this article was written, it was at capacity.) “But let’s pretend for a moment that he did! If Columbus arrived in the US in 2015 … he might be surprised to find out that many people don’t view him as a hero anymore; in fact, some people argue that he was a brutal conqueror who enslaved and killed native people.”
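For readers who want to experiment beyond the web interface, an exchange like the one above can be approximated programmatically. Below is a minimal sketch using the openai Python library as it existed at the time of writing; because ChatGPT itself had no public API yet, the sketch calls the related GPT-3 completion endpoint, and the model name, prompt and parameters are illustrative assumptions rather than a prescribed setup.

```python
# A minimal sketch of querying a large language model programmatically.
# Assumptions: the pre-1.0 `openai` Python library and an API key in the
# environment. ChatGPT had no public API at the time of writing, so this
# uses the related GPT-3 completion endpoint instead.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3 model available at the time
    prompt="Tell me about when Christopher Columbus came to the U.S. in 2015.",
    max_tokens=150,
    temperature=0.7,
)

# The reply is plain text; nothing guarantees it is factually correct.
print(response.choices[0].text.strip())
```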
Faculty members and administrators are now reckoning in real time with how—not if—ChatGPT will impact teaching and learning. Inside Higher Ed caught up with 11 academics to ask how to harness the potential and avert the risks of this game-changing technology. The following edited, condensed advice suggests that higher ed professionals should think a few years out, invite students into the conversation and—most of all—experiment, not panic.
Be Deliberate. Adjust Quickly.
Nancy Gleason, associate professor of practice of political science and director of the Hilary Ballon Center for Teaching and Learning, New York University, Abu Dhabi
We cannot ban AI aids. But we also should not use them for all assignments. We can teach students that there is a time, a place and a way to use GPT-3 and other AI writing tools. It depends on the learning objectives.
First, familiarize yourself with AI writing aids. Your librarians and academic technologists can help. Faculty members need time to play with new tools and explore their implications. Administrators can carve out time for faculty training and support. How does bias play out in your area within the model? What does the code, poetry or prose look like relative to what your students normally produce? How can you use these tools to enhance teaching methods?
Next, consider the tools relative to your course. What are the cognitive tasks students need to perform without AI assistance? When should students rely on AI assistance? Where can an AI aid facilitate a better outcome? Are there efficiencies in grading that can be gained? Are new rubrics and assignment descriptions needed? Will you add an AI writing code of conduct to your syllabus? Do these changes require structural shifts in timetabling, class size or number of teaching assistants?
Finally, talk with students about instructions, rules and expectations. Provide this information on course websites and syllabi and repeat it in class. Guide teaching assistants in understanding appropriate uses of AI assistance for course assignments. Divisions or departments might agree on expectations across courses. That way, students need not scramble to interpret different academic misconduct policies across multiple courses.
Don’t Abandon Pencil and Paper.
Michael Mindzak, assistant professor in the department of educational studies, Brock University
For some assessments, professors may need to revert to traditional forms of teaching, learning and evaluation, which can be viewed as more human-centric.
Question How Writing Is Taught.
Steve Johnson, senior vice president for innovation, National University
Resist asking conservative questions such as, “How can we minimize negative impacts of AI tools in writing courses?” Instead, go big. How do these tools allow us to achieve our intended outcomes differently and better? How can they promote equity and access? Better thinking and argumentation? How does learning take place in ways we haven’t experienced before?
In the past, near-term prohibitions on slide rules, calculators, word processors, spellcheck, grammar check, internet search engines and digital texts have fared poorly. They focus on in-course tactics rather than on the shifting contexts of what students need to know and how they need to learn it. Reframing questions about AI writers will drive assignment designs and assessments that can minimize academic integrity concerns while promoting learning outcomes.
Think a Few Years Out.
Ted Underwood, professor of information sciences and English and associate dean of academic affairs in the School of Information Sciences, University of Illinois at Urbana-Champaign
ChatGPT is free and easy to use, so recent conversations often use it as shorthand for the whole project of language modeling. That’s understandable, but we should be thinking more broadly. The free demonstration period will end. When it does, students and teachers may migrate to a different model. Language models will also continue to develop. By 2024, we are likely to see models that can cite external sources to back up their claims. Prototypes of that kind already exist.
Instead of treating ChatGPT as the horizon, look farther out. Our approach to teaching should be guided not by one recent product but by reflection on the lives our students are likely to lead in the 2030s. What will the writing process look like for them? Will they use models as research assistants? As editors?
No crystal ball can answer those questions yet. But the uncertainty itself is a reminder that our goal is not to train students for specific tasks but to give them resilience founded on broad understanding. We can teach students to use a particular model or warn them about the limits of existing technology. But I also hope we back up a little to ask where language models came from and why this technology is even possible. For instance, it’s surprising that training models to predict the next word in a passage also had the side effect of teaching them how to draw logical inferences from new passages on new topics. It wasn’t obvious 10 years ago that this approach would work. Discussing why it does work may help students see why the details of writing matter and why it’s hard to separate writing from thinking.
In short, I don’t think we need to revise all our assignments tomorrow in response to one model. But writing practices are likely to change over the next decade, and students should understand not just how but why they’re changing. That’s a big question about the purpose of writing—not one we can delegate to computer science. Professors in every discipline will need to learn a little about language models and pay attention to their continued development.
Delegate Responsibilities.
Mina Lee, doctoral student in computer science, Stanford University
Teachers can oversee the selection of appropriate language models or AI-based tools to ensure they meet student needs and school policies. Students could be responsible for using the model to generate language that is accurate, appropriate for the audience and purpose, and reflective of their own voices, while monitoring and reporting issues they encounter. Lastly, the college’s administration can be responsible for providing feedback to the developer and updating the school’s policies regarding the use of AI writing assistants.
These individuals will need resources and support to fulfill their responsibilities. Remember that the goal is to share responsibility, not lay blame. Not everyone has the same level of expertise or experience. Work together to ensure the safe, responsible and beneficial use of AI writing tools.
Identify and Reveal Shortcomings.
Anna Mills, English instructor, College of Marin
ChatGPT’s plausible outputs are often not solid foundations for scaffolding. If we direct students to AI writing tools for some learning purpose, we should teach critical AI literacy at the same time. We can emphasize that language models are statistical predictors of word sequences; there is no understanding or intent behind their outputs. But warning students about the mistakes that result from this lack of understanding is not enough. It’s easy to pay lip service to the notion that AI has limitations and still end up treating AI text as more reliable than it is. There’s a well-documented tendency to project understanding onto AI; we need to work against that by helping students practice recognizing its failings.
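It can help to show students the statistical idea in miniature. The sketch below is a toy bigram generator, not how ChatGPT actually works (ChatGPT uses a neural network trained on vastly more text), and the tiny corpus is invented for illustration; but it makes concrete what it means to pick each next word from counted statistics, with no understanding or intent behind the output.

```python
# A toy illustration (not ChatGPT's architecture) of the underlying idea:
# choose each next word purely from counted word-pair statistics.
import random
from collections import defaultdict

corpus = (
    "students write essays and students read books and "
    "professors read essays and professors write books"
).split()

# Count which words follow which (a bigram table).
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

# Generate text by repeatedly sampling a statistically likely next word.
word = "students"
output = [word]
for _ in range(8):
    word = random.choice(following[word])
    output.append(word)

print(" ".join(output))  # fluent-looking, yet produced with no understanding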
One way to help students practice recognizing these failings is to model generating and critiquing outputs and then have students try on their own. (In order to teach about ChatGPT’s failings, we’ll need to test any examples and exercises right before teaching, since language models are frequently updated.) Finally, we should assess how well students can identify ChatGPT failings in terms of logic, consistency, accuracy and bias. Can they detect fabrications, misrepresentations, fallacies and perpetuation of harmful stereotypes? If students aren’t ready to critique ChatGPT’s output, then we shouldn’t choose it as a learning aid.
Showcasing AI failings has the added benefit of highlighting students’ own reading, writing and thinking capacities. We can remind them that they are learning to understand and to express themselves with purpose, things a language model cannot do. Draw attention to the virtues of human-written prose and prompt students to reflect on how their own cognitive processes surpass AI.
Remind Students to Think.
Johann N. Neem, professor of history, Western Washington University
With ChatGPT, a student can turn in a passable assignment without reading a book, writing a word or having a thought. But reading and writing are essential to learning. They are also capacities we expect of college graduates.
ChatGPT cannot replace thinking. Students who turn in assignments using ChatGPT have not done the hard work of taking inchoate fragments and, through the cognitively complex process of finding words, crafting thoughts of their own.
With an hour or so of work, a student could turn an AI-generated draft into a pretty good paper and receive credit for an assignment they did not complete. But I worry more that students will not read closely what I assign. I fear that they will not be inspired, or challenged, by the material. If the humanities grew out of the study—and love—of words, what happens when words don’t matter to our students?
Professors should find new ways to help students learn to read and write well and to help them make the connection between doing so and their own growth. I anticipate offering more opportunities for students to write in class. In-class writing should not just be additive; hopefully, my classes will in time look and feel different as students learn to approach writing as a practice of learning as well as a demonstration of it.
Invite Students Into the Conversation.
Paul Fyfe, associate professor of English and director of the graduate certificate in digital humanities, North Carolina State University
Higher ed professionals are asking how ChatGPT will affect students or change education. But what do students think? How or why would they use it? And what’s it like when they try?
For the past few semesters, I’ve given students assignments to “cheat” on their final papers with text-generating software. In doing so, most students learn—often to their surprise—as much about the limits of these technologies as their seemingly revolutionary potential. Some come away quite critical of AI, believing more firmly in their own voices. Others grow curious about how to adapt these tools for different goals or about professional or educational domains they could impact. Few believe they can or should push a button to write an essay. None appreciates the assumption they will cheat.
Grappling with the complexities of “cheating” also moves students beyond a focus on specific tools, which are changing stunningly fast, and toward a more generalized AI literacy. Frameworks for AI literacy are still being developed; mechanisms for teaching it are needed just as urgently.
Experiment. Don’t Panic.
Robert Cummings, an associate professor of writing and rhetoric; Stephen Monroe, chair and assistant professor of writing and rhetoric; and Marc Watkins, lecturer in composition and rhetoric, all at the University of Mississippi
Channel anxiety over ChatGPT into productive experimentation. We built a local team of writing faculty to engage with the tools and to explore pedagogical possibilities. We want to empower our students as writers and thinkers, and we know that AI will play a role in their futures. How can we deploy AI writing protocols ethically and strategically within our curricula? Can these tools help underprepared learners? Do some tools work better than others in the classroom? We’re at the early stages of investigating these kinds of questions. Our advice centers on a few main ideas.
Get started now. Jump in. We cannot control Silicon Valley, and its pace of technological development is frantic and disorienting, but we don’t have to keep up with everything. Our group has consciously decided to move slowly and deliberately, but we have decided to move.
Teach AI literacy. Both students and teachers need to understand the capabilities and limitations of these tools, as well as the potential consequences of using them.
Identify specific tools for specific purposes in the writing process. Some AI-powered tools can help with invention, some with revision and some with locating sources. Prepare student writers to consider the benefits and disadvantages of these tools in the context of specific writing purposes.
Perform a reality check with all AI engagements. Help students be prepared to fact check any AI-generated writing outputs.
Assign reflection to help students understand their own thought processes and motivations for using these tools, as well as the impact AI has on their learning and writing.
Offer rules of citation. While MLA, APA, CMS and other citation systems are still catching up with styles for AI-generated writing, advise students on how you want them to cite AI outputs. In the meantime, treat those outputs as content developed by a third party that must be cited.
Human learning is gradual, even when AI learning seems instantaneous. That will not change, so teachers will likely be the most important users of AI writing tools. We will mediate, introduce and teach. So, our advice for colleagues is straightforward: start experimenting and thinking now.