
How to prompt on OpenAI’s new o1 models


OpenAI’s latest model family, o1, promises to be more powerful and better at reasoning than previous models. 

Prompting o1 will be slightly different from prompting GPT-4 or even GPT-4o. Because the new model has stronger reasoning capabilities, some standard prompt engineering methods won’t work as well. Earlier models needed more guidance, and people took advantage of longer context windows to pack the models with instructions.

According to OpenAI’s API documentation, the o1 models “perform best with straightforward prompts.” Techniques such as detailed instruction-giving and few-shot prompting “may not enhance performance and can sometimes hinder it.” 

OpenAI advised users of o1 to think of four things when prompting the new models:

  • Keep prompts simple and direct, and don’t over-guide the model, because it understands instructions well
  • Avoid chain-of-thought prompts, since o1 models already reason internally
  • Use delimiters like triple quotation marks, XML tags and section titles so the model is clear about which sections it is interpreting
  • Limit additional context for retrieval-augmented generation (RAG), because OpenAI said adding more context or documents to RAG tasks could overcomplicate the model’s response
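The delimiter advice above can be sketched in a few lines. This is a minimal, hypothetical illustration, not OpenAI’s own code: the helper name and the `<document>` tags are made up for the example, and the point is simply a short, direct instruction with the source material fenced off by delimiters, and no chain-of-thought scaffolding.

```python
def build_o1_prompt(task: str, document: str) -> str:
    """Compose a simple delimited prompt: a direct instruction,
    then the source text wrapped in XML-style tags."""
    return (
        f"{task}\n\n"
        "<document>\n"
        f"{document}\n"
        "</document>"
    )

prompt = build_o1_prompt(
    "List the key risks described in the document below.",
    "Our Q3 report shows rising supplier costs and a delayed product launch.",
)
print(prompt)
```

The resulting string would then be sent as a single user message, for example through the OpenAI Python SDK’s chat completions endpoint, with no system-level “think step by step” additions.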

OpenAI’s advice for o1 differs sharply from the suggestions it gave for its previous models. Where the company once recommended being incredibly specific, including plenty of detail and giving models step-by-step instructions, o1 does better “thinking” on its own about how to solve queries. 

Ethan Mollick, a professor at the Wharton School of the University of Pennsylvania, said in his One Useful Thing blog that his experience as an early user of o1 showed it works better on tasks that require planning, where the model works out how to solve problems on its own. 

Prompt engineering and making it easier to guide models 

Prompt engineering, of course, emerged as a way for people to drill down on specifics and get the responses they want from an AI model. It has become not just an important skill but also a rising job category. 

Other AI developers released tools to make it easier to craft prompts when designing AI applications. Google launched Prompt Poet, built with the help of Character.ai, which integrates external data sources to make responses more relevant. 

o1 is still new, and people are still figuring out exactly how to use it (myself included; I have yet to settle on my first prompt). However, some social media users predict that people will have to change how they approach prompting ChatGPT. 

still a working theory, but prompt engineering will be a relic

llm’s wont not need them, as their intelligence increases over time

— ☀️ soyhenry.eth ⌐◨-◨ (@soyhenryxyz) September 12, 2024
