
Prompt Engineering is Different for Open Source LLMs

A few days ago, Meta AI introduced ‘Prompt Engineering with Llama 2’, a new resource for the open source community that collects best practices for prompt engineering. Andrew Ng’s DeepLearning.AI recently released a course on the topic as well, called Prompt Engineering for Open Source LLMs, and IBM, Amazon, Google, and Microsoft have all been offering similar courses on prompt engineering for open-source models.

Prompt engineering was one of the most talked-about professions in 2023. As companies adopted OpenAI’s ChatGPT in different ways, they got busy hiring experts who could prompt the chatbot to elicit the right responses, reportedly paying them huge paychecks.

This also led to the rise of hundreds of prompt engineering courses that everyone wanted to get their hands on. But most of these targeted closed source models such as OpenAI’s. Now, as companies adopt open source LLMs such as Meta’s Llama and Mistral, it becomes necessary to understand how prompt engineering differs for open source LLMs.

Several companies are developing and testing customer support and code generation applications built on open source models. These applications need to work with proprietary code unique to each company, something the general-purpose closed models from OpenAI or Anthropic often struggle with.

“A lot of customers are asking themselves: Wait a second, why am I paying for a super large model that knows very little about my business? Could I not use just one of these open-source models, and by the way, maybe use a much smaller, open-source model for that (information retrieval) workflow?” shared Yann LeCun in a post on X.

Prompt engineering for open source? 

Recently, Sharon Zhou, co-founder and CEO of Lamini, in partnership with DeepLearning.AI, conducted a course on prompt engineering for open source LLMs. She highlighted how the packaging of an open source model differs from that of a closed one, which changes the API and, in turn, the prompting mechanism.
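
The difference shows up in how a conversation is packaged before it reaches the model. Here is a minimal Python sketch: the closed-model side uses OpenAI-style role/content messages, which the provider formats server-side, while the open side assembles Meta’s documented Llama 2 chat template by hand (the system and user strings are made up for illustration).

# Sketch: the same exchange, packaged two ways.
# The raw template below follows Meta's documented Llama 2 chat format.

system = "You are a concise support assistant."
user = "How do I reset my API key?"

# Closed model: the provider's API accepts structured messages and
# applies its own (hidden) prompt formatting server-side.
openai_style_messages = [
    {"role": "system", "content": system},
    {"role": "user", "content": user},
]

# Open model: with a raw Llama 2 chat checkpoint, the caller is often
# responsible for assembling the exact template string themselves.
llama2_prompt = f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

print(llama2_prompt)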

“LLMs wear pants, which is its prompt setting,” said Zhou, drawing a quirky analogy: just as wearing pants to the office is the expected default, each model expects its prompt in a particular format, and changing it affects the whole system.

She said that a lot of people confuse prompt engineering, RAG, and fine-tuning. “Prompting is not software engineering, it’s close to Googling,” she said, a point she also elaborated on in a recent post on X. RAG, she added, is prompt engineering, “do not overcomplicate it”: it is just about retrieving information and placing it into the prompt.
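
Her point fits in a few lines of Python. In the sketch below, the three-document corpus and the keyword-overlap retriever are illustrative stand-ins for a real vector store; the takeaway is that the retrieved text is simply spliced into a prompt string.

# Minimal sketch of RAG as prompt engineering: retrieve text, then
# splice it into the prompt. The corpus and the naive keyword scorer
# are made-up stand-ins for a real vector store and retriever.

corpus = [
    "Refunds are processed within 5 business days.",
    "API keys can be rotated from the account settings page.",
    "Enterprise plans include single sign-on support.",
]

def retrieve(question, docs, k=2):
    """Rank documents by crude word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return scored[:k]

question = "How do I rotate my api keys?"
context = "\n".join(retrieve(question, corpus))

# The "RAG" part ends here: what the model finally sees is just a string.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}\nAnswer:"
print(prompt)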

Zhou emphasised the simplicity of prompt engineering, reiterating that prompts are just strings. She compared the process to handling a string in a programming language, making it clear that it’s a fundamental skill that doesn’t require complex frameworks. “Different LLMs & LLM versions mean different prompts,” she added. 
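
A few-shot prompt, for example, is nothing more than ordinary string formatting. The sketch below uses made-up review examples and no framework at all.

# Sketch: a few-shot prompt built with ordinary string handling.
# The example reviews and labels are invented for illustration.

examples = [
    ("great product, fast shipping", "positive"),
    ("arrived broken and late", "negative"),
]

shots = "\n".join(f"Review: {text}\nSentiment: {label}" for text, label in examples)
prompt = f"{shots}\nReview: battery died after a week\nSentiment:"
print(prompt)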

However, she acknowledged that many frameworks tend to overcomplicate prompt engineering, potentially leading to suboptimal results. In practice, Zhou explained, it is essential to tailor prompts when moving between different LLMs, much as prompts have to be reworked when OpenAI ships a new version and previously effective prompts stop yielding the desired results.

The same holds for open source LLMs. Keeping the entire prompt visible is crucial for optimising a model’s performance, and many frameworks fall short here, abstracting away or concealing prompt details and creating an illusion of processes being managed behind the scenes.
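
One simple safeguard, sketched below, is to log the exact final string before it goes out; send_to_model here is a hypothetical placeholder for whatever inference call is actually used.

# Sketch: keep the full prompt transparent by logging the exact final
# string before it is sent, instead of letting a framework assemble
# and hide it. `send_to_model` is a hypothetical stand-in for any
# model call (local inference, an HTTP endpoint, etc.).

import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("prompts")

def send_to_model(prompt):
    # Placeholder for an actual inference call.
    return "<model response>"

def generate(prompt):
    log.info("Final prompt sent to model:\n%s", prompt)  # nothing hidden
    return send_to_model(prompt)

print(generate("Summarise our refund policy in one sentence."))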

Hits and misses 

When it comes to enterprise adoption, Matt Baker, SVP of AI strategy at Dell, which partnered with Meta to bring open source Llama 2 to enterprise use cases, said that large models are of little use to companies unless they are tailored to specific use cases. This is where smaller, specialised, fine-tuned models come into the picture, and with them RAG and prompt engineering.

The reality is that most companies will use both open and closed source LLMs for different use cases. But with the bulk of information retrieval now depending on APIs and open source models fine-tuned on company data, teams need to learn how to prompt these models precisely so that they return accurate information.

To put it in Zhou’s words, always put the right pants on!
