
How I used ChatGPT to develop a prompt-engineering class.

At the end of June, I started teaching a course that was designed by ChatGPT, uses ChatGPT, and is even assessed by ChatGPT. The course itself is also about ChatGPT.

The idea for it came to me earlier this year when I saw news about job opportunities for “prompt engineers,” who could purportedly make more than $300,000 per year. As a professor who teaches about advanced technology transitions, my academic ears pricked up. Was there a skillset here we should be teaching our students? And if so, how should we go about it?

To answer those questions, I did what any self-respecting tech-savvy professor would do—I fired up ChatGPT. Within a couple of hours, the large language model and its trusty chatbot sidekick had helped me outline a course on prompt engineering, complete with learning objectives, assignments, and lecture notes. And it was good—better than anything I could have produced on my own in the same amount of time.

I was pretty shocked—and my respect for the power of ChatGPT only grew as I built out the course. Importantly, there were still tens of hours of work between that initial outline and the final course, and much of what makes the course innovative came from my expertise as an instructor and a tech expert. I consulted colleagues with expertise in generative A.I., and I added learning objectives that ChatGPT had left out—for example, addressing the societal implications of large language models and A.I. chatbots. Still, the vast majority of the course I’m now teaching is designed, written, and executed by ChatGPT. And even though I would be more reluctant to use ChatGPT to teach a course about, say, history or science, in this case the tool felt groundbreaking not because it could do my job for me, but because it could do my job with me: I was able to craft exercises and assignments that utilized the platform to transform learning in ways that would have been otherwise impossible.

This first run of the course (we’re already planning to teach it again in the fall) has a modest enrollment of around 70 students, including curious professionals, eager undergrads, and forward-thinking educators. The goal of the six-week online course is to help students from different backgrounds use ChatGPT and other A.I. chatbots more effectively—especially in professional settings. At their own pace, students work through six modules, each with a series of exercises that use ChatGPT to tutor students, expand their understanding, and even assess them. Many of these exercises were developed in collaboration with ChatGPT—in at least one case, we use a ChatGPT-designed exercise with no modifications.

Out of the initial five learning objectives proposed by ChatGPT, I ran with four of them: understanding large language models and their limitations; prompt formulation and refinement; developing and using prompt templates; and prompt and response evaluation. The two “human-derived” additions include addressing emerging trends and exploring responsible innovation and use.

Within each of these learning objectives, the course has specific skills and areas of understanding that were developed through working with ChatGPT. For instance, when addressing prompt formulation and refinement—essentially, how to ask chatbots questions in a way that will give you useful answers—the course uses a framework of ambiguity reduction, constraint-based prompting, and comparative prompt engineering, which was suggested by the chatbot. And when exploring prompt templates, the course follows ChatGPT’s lead by teaching students how to develop reusable prompt formats.
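To make the idea of a reusable prompt format concrete, here is a minimal sketch of what such a template might look like in Python. This is an illustrative example, not material from the course itself: the template text, the `build_prompt` helper, and its parameter names are all my own assumptions about how constraint-based prompting could be packaged for reuse.

```python
# A hypothetical reusable prompt template (not from the course materials).
# The template bakes in constraint-based prompting -- audience, length,
# and format constraints -- so that only the topic varies between uses.
SUMMARY_TEMPLATE = (
    "You are writing for {audience}. "
    "Summarize the following topic in at most {max_sentences} sentences, "
    "formatted as {output_format}. Topic: {topic}"
)

def build_prompt(topic: str, audience: str = "a general reader",
                 max_sentences: int = 3,
                 output_format: str = "plain prose") -> str:
    """Fill the template, reducing ambiguity by making every constraint explicit."""
    return SUMMARY_TEMPLATE.format(
        audience=audience, max_sentences=max_sentences,
        output_format=output_format, topic=topic,
    )

prompt = build_prompt("large language models", audience="first-year students")
print(prompt)
```

The point of the pattern is that the constraints (audience, length, format) are written once and reused, so two prompts built from the same template differ only in the topic—which also makes comparative prompt engineering straightforward: change one slot, hold the rest fixed, and compare the responses.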

I found ChatGPT to be especially impressive when I started to create content for the module on prompt and response evaluation, which sets out to teach students ways of assessing how effective their prompts are, how accurate and useful the responses are, and how to tweak the former to improve the latter. When I asked ChatGPT to help develop a specific approach students could employ to test the usefulness of prompts and the responses they elicit, it came up with what I believe is a new framework: the RACCCA framework.

RACCCA, according to ChatGPT, stands for relevance, accuracy, completeness, clarity, coherence, and appropriateness. (I was doubly impressed that ChatGPT could imitate a professor well enough to come up with a slightly clunky acronym.) The RACCCA approach helps assess prompts and responses, and can be used to improve the quality of ChatGPT outputs through iteration. The idea is that students can use the framework to see what a response is missing, and then adjust the prompt to improve the response.
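The iterate-until-good-enough loop the framework implies can be sketched in a few lines of Python. The six criteria come from the article; the 0.0–1.0 rating scale, the averaging, and the revision threshold are illustrative assumptions of mine, not part of the course.

```python
# A hedged sketch of how a student might operationalize RACCCA as a
# simple checklist. The six criteria are from the framework; the
# numeric scale and threshold are illustrative assumptions.
RACCCA_CRITERIA = [
    "relevance", "accuracy", "completeness",
    "clarity", "coherence", "appropriateness",
]

def raccca_score(ratings: dict[str, float]) -> float:
    """Average the six criterion ratings (each rated 0.0-1.0 by the student)."""
    missing = [c for c in RACCCA_CRITERIA if c not in ratings]
    if missing:
        raise ValueError(f"unrated criteria: {missing}")
    return sum(ratings[c] for c in RACCCA_CRITERIA) / len(RACCCA_CRITERIA)

def needs_revision(ratings: dict[str, float], threshold: float = 0.8) -> bool:
    """Flag the prompt for another iteration if any criterion falls short."""
    return any(ratings[c] < threshold for c in RACCCA_CRITERIA)

# Example: a response that is on-topic and accurate but incomplete.
ratings = {"relevance": 1.0, "accuracy": 0.9, "completeness": 0.6,
           "clarity": 0.9, "coherence": 1.0, "appropriateness": 1.0}
score = round(raccca_score(ratings), 2)
revise = needs_revision(ratings)  # completeness falls below the threshold
```

A low score on one criterion—completeness, in this hypothetical example—tells the student exactly which dimension of the prompt to adjust before the next iteration.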


At first glance, ChatGPT’s apparent skill in developing the RACCCA framework might seem like a clear harbinger of the sort of “A.I. education apocalypse” that’s preoccupied the education sector for the past several months. But if developing this course has taught me anything, it’s that the real power of all this lies in human-ChatGPT collaborations. My prompt engineering course wouldn’t be possible without me as the “human in the loop,” curating and crafting ideas suggested by ChatGPT and helping students understand the pitfalls as well as the promise of the technology (one of our first exercises is demonstrating when and how ChatGPT can get things convincingly wrong, and how to navigate this). But it also wouldn’t work without ChatGPT’s ability to vastly augment and scale learning in ways that empower and extend my reach as a mere human instructor.

This becomes clear in the course sections where learning and assessment are led by ChatGPT using exercises that we co-created. For instance, in the last module—which covers the broader societal implications of LLMs—we co-devised an exercise that turns ChatGPT into a highly effective personal instructor. Here are the instructions the students receive:

In a new session, provide ChatGPT (using GPT4) with the following prompt: “Hi ChatGPT. My name is [include full name] and I would like you to act as my personal tutor and teach me about responsible innovation in the context of using ChatGPT. I would like you to cover the field broadly. Please start by asking me a question that helps you gauge my level of understanding. Based on my response, ask me a follow-up question that is designed to increase my understanding. Continue to do this until I show a broad understanding of responsible innovation in the context of using ChatGPT.”

There are obvious dangers with exercises like this, particularly because ChatGPT can (and does) occasionally provide incorrect information—which, again, is something we cover in some depth in the course. But even given these limitations, learning prompts like this can be incredibly powerful; in this case, ChatGPT is a remarkably good tutor when it comes to responsible innovation.

The course also uses ChatGPT to assess student understanding. For example, this exercise from module one is a simple test of understanding of large language models:

Start a new chat with ChatGPT (making sure you are using GPT4) and cut and paste the following prompt: “Hi ChatGPT. My name is [add your full name] and I am in a class where we are learning about the uses and limitations of LLMs. Please ask me three simple questions about the uses and limitations of LLMs to test my understanding. After each question, please wait for my answer before asking the next one. When you have all three of my answers, please provide an assessment of how good they are, and give me a grade from A to C.”

Each time a student runs the prompt, the questions ChatGPT generates are different, and students can repeat it as many times as they like to get the grade they are looking for. Of course, the point isn’t the letter grade but the process of question, response, feedback, and iteration, which leads to self-directed discovery and knowledge reinforcement.


Over the next few weeks, I’ll watch this process closely as I read more than 2,000 conversations between students and ChatGPT as part of the course. It’s a unique opportunity to see firsthand how students interact with the platform, and how this might spark their curiosity and enhance their learning. Already, I’m beginning to think differently about the power of ChatGPT to transform learning and unlock students’ nascent interests and abilities. It’s almost as if ChatGPT is fine-tuning my brain to be a better instructor by enabling me to see in intimate detail how students interact with it, and how new approaches to learning can leverage this in ways that lie far beyond conventional approaches to education.

And while ChatGPT has made me a better professor, it hasn’t made me fear for the future of my profession—at least not yet. Thinking of ChatGPT as a co-instructor rather than a competitor has helped me create a personalized learning environment and recalibrate how I think about education. That process will look different across subjects and professors—but as ChatGPT’s uses and capabilities grow, I, for one, would rather we be on the same side of the metaphorical lectern.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.

