
The Power of Prompting – Microsoft Research

Today, we published an exploration of the power of prompting strategies that demonstrates how the generalist GPT-4 model can perform as a specialist on medical challenge problem benchmarks. The study shows that GPT-4 can outperform, by a significant margin, a leading model that was fine-tuned specifically for medical applications on the same benchmarks. These results join other recent studies showing that prompting strategies alone can be effective in evoking this kind of domain-specific expertise from generalist foundation models.

Figure 1: Visual illustration of Medprompt components and additive contributions to performance on the MedQA benchmark. Accuracy rises from 81.7 with zero-shot prompting, to 83.9 with random few-shot examples, to 87.3 with random few-shot plus chain-of-thought, to 88.4 with kNN-selected few-shot plus chain-of-thought, to 90.2 with ensembling and choice shuffle. The prompting strategy combines kNN-based few-shot example selection, GPT-4–generated chain-of-thought prompting, and answer-choice shuffled ensembling.
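
The figure names three components; the sketch below illustrates, in simplified form, how they could fit together. It is a minimal illustration rather than the published implementation: call_model and embed are hypothetical stand-ins for a chat-completion API and a text-embedding model, and the bag-of-words similarity is only a placeholder for real embeddings.

    import random
    from collections import Counter
    from math import sqrt

    def embed(text):
        # Placeholder embedding: bag-of-words counts. A real system would use a
        # learned text-embedding model; this stand-in is only for illustration.
        return Counter(text.lower().split())

    def cosine(a, b):
        dot = sum(count * b.get(token, 0) for token, count in a.items())
        norm_a = sqrt(sum(v * v for v in a.values()))
        norm_b = sqrt(sum(v * v for v in b.values()))
        return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

    def knn_few_shot(question, train_set, k=5):
        # Component 1: kNN-based few-shot selection. Pick the k training questions
        # most similar to the test question to serve as exemplars.
        query = embed(question)
        ranked = sorted(train_set,
                        key=lambda ex: cosine(query, embed(ex["question"])),
                        reverse=True)
        return ranked[:k]

    def build_prompt(question, labeled_options, exemplars):
        # Component 2: chain-of-thought exemplars. Each selected exemplar carries a
        # model-generated reasoning trace ("chain_of_thought") alongside its answer.
        parts = []
        for ex in exemplars:
            parts.append(f"Q: {ex['question']}\n"
                         f"Reasoning: {ex['chain_of_thought']}\n"
                         f"Answer: {ex['answer']}")
        options = "\n".join(f"{label}. {text}" for label, text in labeled_options)
        parts.append(f"Q: {question}\n{options}\nReasoning:")
        return "\n\n".join(parts)

    def medprompt_answer(question, option_texts, train_set, call_model,
                         ensembles=5, k=5):
        # Component 3: choice-shuffle ensembling. Query the model several times with
        # the answer options in a different order each time, then majority-vote over
        # the option texts to reduce positional bias.
        exemplars = knn_few_shot(question, train_set, k)
        votes = Counter()
        for _ in range(ensembles):
            shuffled = option_texts[:]
            random.shuffle(shuffled)
            labeled = list(zip("ABCDE", shuffled))
            reply = call_model(build_prompt(question, labeled, exemplars))
            # Naive vote extraction: look for an option text in the reply, checking
            # longer options first so substrings do not shadow longer matches.
            for text in sorted(shuffled, key=len, reverse=True):
                if text in reply:
                    votes[text] += 1
                    break
        return votes.most_common(1)[0][0] if votes else None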

During early evaluations of the capabilities of GPT-4, we were excited to see glimmers of general problem-solving skills, with surprising polymathic capabilities of abstraction, generalization, and composition, including the ability to weave together concepts across disciplines. Beyond these general reasoning powers, we discovered that GPT-4 could be steered via prompting to serve as a specialist in numerous domains. Previously, eliciting such capabilities required fine-tuning language models with specially curated data to achieve top performance in specific domains. This raises the question of whether more extensive training of generalist foundation models might reduce the need for fine-tuning.

In a study shared in March, we demonstrated how very simple prompting strategies revealed GPT-4’s strengths in medical knowledge without special fine-tuning. The results showed how the “out-of-the-box” model could ace a battery of medical challenge problems with basic prompts. In our more recent study, we show how the composition of several prompting strategies into a method that we refer to as “Medprompt” can efficiently steer GPT-4 to achieve top performance. In particular, we find that GPT-4 with Medprompt: 

  • Surpasses 90% on the MedQA dataset for the first time
  • Achieves top reported results on all nine benchmark datasets in the MultiMedQA suite
  • Reduces the error rate on MedQA by 27% relative to the rate reported for Med PaLM 2 (see the calculation below)
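
For concreteness, the 27% figure follows directly from the MedQA accuracies reported in Figure 2 below (90.2 for GPT-4 with Medprompt versus 86.5 for Med PaLM 2); a minimal check in Python:

    # Relative error-rate reduction implied by the MedQA accuracies in Figure 2.
    med_palm_2_error = 100 - 86.5    # 13.5%
    medprompt_error  = 100 - 90.2    # 9.8%
    reduction = (med_palm_2_error - medprompt_error) / med_palm_2_error
    print(f"{reduction:.0%}")        # -> 27%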

Figure 2: (Left) Comparison of performance on MedQA. GPT-4 with Medprompt reaches 90.2 with no fine-tuning, ahead of the intensively fine-tuned Med PaLM 2 at 86.5, followed by GPT-4 base at 86.1 and GPT-4 with a simple prompt at 81.7 (both without fine-tuning), Med PaLM at 67.2, GPT-3.5 base at 60.2, BioMedLM at 50.3, DRAGON at 47.5, BioLinkBERT at 45.1, and PubMedBERT at 38.1. (Right) GPT-4 with Medprompt achieves state-of-the-art performance on a wide range of medical challenge problems, including MedQA US (4-option), MedMCQA Dev, PubMedQA Reasoning Required, and the MMLU Clinical Knowledge, Medical Genetics, Anatomy, Professional Medicine, College Biology, and College Medicine subsets, outperforming Med PaLM 2 and GPT-4 with a simple prompt.

Many AI practitioners assume that specialty-centric fine-tuning is required for generalist foundation models to perform well in specific domains. While fine-tuning can boost performance, the process can be expensive: it often requires experts or professionally labeled datasets (e.g., from top clinicians in the Med PaLM project) followed by computation of model parameter updates. This can be resource-intensive and cost-prohibitive, putting the approach out of reach for many small and medium-sized organizations. The Medprompt study shows the value of more deeply exploring prompting possibilities for transforming generalist models into specialists and extending the benefits of these models to new domains and applications. Intriguingly, the prompting methods we present appear to be valuable, without any domain-specific changes to the prompting strategy, across professional competency exams in a diversity of domains, including electrical engineering, machine learning, philosophy, accounting, law, and psychology.
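
As a hypothetical illustration of that cross-domain reuse, the medprompt_answer sketch shown earlier could be pointed at a non-medical exam without changing the prompting logic; only the exemplar pool differs. The question, options, exemplar, and stubbed model reply below are invented placeholders, not data or results from the study:

    # Reuses the medprompt_answer sketch above; the exemplar, question, and the
    # stubbed call_model are illustrative placeholders only.
    law_exemplars = [
        {
            "question": "Which remedy is typically available for breach of contract?",
            "chain_of_thought": "Contract law generally compensates the injured "
                                "party rather than punishing the breaching party.",
            "answer": "Expectation damages",
        },
    ]
    answer = medprompt_answer(
        question="A contract signed under duress is generally considered what?",
        option_texts=["void from the outset",
                      "voidable at the victim's option",
                      "fully enforceable",
                      "implied in fact"],
        train_set=law_exemplars,
        call_model=lambda prompt: "voidable at the victim's option",  # stub reply
        ensembles=3,
        k=1,
    )
    print(answer)  # -> voidable at the victim's option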

At Microsoft, we’ve been working on the best ways to harness the latest advances in large language models across our products and services, while keeping a careful focus on understanding and addressing potential issues with the reliability, safety, and usability of applications. It’s been inspirational to see the creativity, along with the careful integration and testing of prototypes, as we continue the journey to share new AI developments with our partners and customers.

Figure 3: GPT-4 performance with three different prompting strategies on out-of-domain datasets, spanning MMLU Machine Learning, Professional Psychology, Electrical Engineering, Philosophy, Professional Law, and Accounting, as well as NCLEX question sets from RegisteredNursing.com and Nurselabs. Zero-shot and five-shot approaches represent baselines; the Medprompt-style strategy outperforms both across these datasets.
