
Three Harvard faculty members debated whether generative artificial intelligence would be a useful tool, or a perilous shortcut, in scholarship and teaching at a March 13 panel.
The panel, the first in a four-part series evaluating the effects of AI on the Faculty of Arts and Sciences' educational mission, was moderated by Sean D. Kelly, the dean of the FAS' Arts and Humanities division.
Kelly probed the faculty panelists on where and why they thought AI use was appropriate, but he struck a largely bullish tone on his own use of AI as an interlocutor, saying it could be a valuable addition to conversations in the humanities.
"I have to confess at this moment that I use these generative AI tools actually almost every day for large portions of the day," he said.
Even if generative AI cannot definitively interpret things like the role of ambition in Macbeth's downfall, Kelly said, it can put options on the table, which students, instructors, and researchers can then use to develop their own conclusions.
"I love listening to its answers and trying to figure out what's wrong with them, or how they make me think different things about what I ought to be exploring," Kelly said. "It spurs you to further create a thought that you might not have had if you didn't have this response."
The panelists agreed that AI could open up new research frontiers, both in their fields and across disciplinary boundaries. Matthew Kopec, the program director of Embedded EthiCS, which creates modules on ethics for computer science courses at Harvard, said he thought AI was already advancing some fields of humanistic research.
"There are humanists in the room who have built big databases to build taxonomies of story themes or of bibliographical names in ancient Chinese literature," said Kopec, who is a Philosophy lecturer. "I think that area of digital humanities is very rich."
Michael P. Brenner, a professor of Applied Mathematics, Applied Physics, and Physics, said the availability of AI tools makes it easier for students to quickly solve algebra problems, rather than wading through them slowly or getting lost in calculations.
"They can try more things, so they will make discoveries faster," he said. AI tools, he added, have "the potential to raise the level of classes" by allowing students to learn more advanced subject matter without needing to painstakingly master elementary techniques.
But there is a flip side, he said: If students can use AI to spit out answers, how will they ever learn those fundamental techniques?
But University Professor Gary King, a social scientist and statistician who holds Harvard's highest faculty rank, said he thought that AI tools would allow scientists to discard outdated methods and approach new questions.
"We're no good at arithmetic anymore. We don't need to be," King said. "I think that's probably a good thing. It frees up our very limited cognitive capacity for other things."
"You should be the kind of person that uses whatever the best tools are to meet the next set of problems," he said.
But Kelly said that even if AI was an effective way to generate answers, that was not always the point. Instead, he said, students often learn skills to change themselves and to understand the answers they arrive at.
"Suppose I saw a student of mine running along the Charles River, and they were huffing and puffing," Kelly said. "And I said, 'heavens, why are you running? There's a perfectly good motorized vehicle that can get you from A to B. It does it faster. It does it more efficiently. It does it better. Why don't you just use that?'"
That question, Kelly said, would misunderstand why the student runs: "He's not running to get from A to B, he's running to transform himself. He's running to change his body and his way of encountering the world."
The panelists also discussed whether they thought it was appropriate for instructors to use AI to draft recommendation letters.
Kopec said he thought using AI to generate a first draft could subtly alter the tone of a letter. He said suggestions from predictive text in email applications and on smartphone keyboards could explain the prevalence, or perhaps overuse, of exclamation points in emails.
"Emails from 10 years ago had no exclamation points," Kopec said. "Now, if you don't have an exclamation point, someone's going to check up on you, and so it actually does affect the tone."
Brenner said he worried instead that AI-generated recommendation letters might sound every bit as convincing as letters hastily written by a professor, but would not actually provide an expert evaluation of their subjects' academic work.
"I'm much less worried about the tone than about the content," he said.
Several of the speakers said that the effects of generative AI use depended on users' intent and expectations, not just the nature of the models themselves.
Kopec said users need to recognize that, when they use generative AI, "the model doesn't really care whether what it's putting out is true or false."
"But I think a lot of people actually don't have that in mind when they're using these tools," he added.
Kelly acknowledged that what drives him to use AI tools, his constant questioning, "might be different from what students would want."
"I have this kind of problem where questions are on my mind all the time," he said. "That's why I am where I am. But my wife is so happy I'm not constantly asking her my questions."
Staff writer Ellen P. Cassidy can be reached at ellen.cassidy@thecrimson.com. Follow her on X at @ellenpcassidy.