
Unlocking Effective AI: 5 Expert Tips for Crafting Perfect AI Prompts

Recently, Google’s prompt engineer Lee Boonstra published a 68-page prompt engineering whitepaper full of tips and strategies for crafting effective LLM prompts. While the paper is aimed at future prompt engineers, it contains plenty of advice that regular users can apply in day-to-day AI use. This post lists 5 prompt tips that will help you get better responses from AI chatbots.

1. Keep it Simple and to the Point

While it’s often advertised that you can talk to LLMs in natural, casual language, that is mostly marketing meant to make the product appealing to regular people. For better results, keep your prompts concise and easy to understand, focusing on action verbs. Every extra word or complex phrase you add can potentially be misinterpreted.

Instead of telling the AI your problem as a story, keep the prompt focused on the answer you need. The examples below illustrate the difference:

Bad prompt: I’m going to be traveling to Tokyo next month for five days, and I’m really interested in seeing historic temples, but I also love modern art galleries. I’ll be staying near Shinjuku Station and would prefer places that aren’t too crowded since I get overwhelmed easily.

Good prompt: Act as a Tokyo travel guide. Recommend 5 historic temples and 3 modern art galleries near Shinjuku that avoid large crowds.

2. Give the AI a Role First

For specialized prompts, it’s better to first tell the AI to act as a specialist in the relevant field, such as “act as a travel guide” or “you are a kindergarten teacher”. Since LLMs learn from massive text corpora, this anchors the model to a specific profession or persona. The model will adopt role-specific tone and vocabulary and limit itself to the role’s scope, which prevents it from going off-topic.

Below are examples of prompts:

Bad prompt: Create a lesson plan about photosynthesis.

Good prompt: You are a kindergarten teacher. Design a 30‑minute lesson plan on photosynthesis with an engaging story, hands‑on activity, and three simple review questions.
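If you use an LLM through code rather than a chat window, most chat APIs express this same idea as a “system” message placed before the user’s request. A minimal sketch of that message layout (no real API call is made here; the structure mirrors common chat-completion APIs, but check your provider’s docs for exact field names):

```python
def build_role_prompt(role: str, task: str) -> list[dict]:
    """Prepend a persona as a system message so the model anchors to it,
    then attach the actual task as the user message."""
    return [
        {"role": "system", "content": f"You are {role}."},
        {"role": "user", "content": task},
    ]

messages = build_role_prompt(
    "a kindergarten teacher",
    "Design a 30-minute lesson plan on photosynthesis with an engaging "
    "story, a hands-on activity, and three simple review questions.",
)
```

The resulting `messages` list can be passed to most chat-style endpoints; the system message plays the same role as the “You are a kindergarten teacher” opener in the good prompt above.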

[Image: Gemini acting as a kindergarten teacher]

3. Provide Examples

Whenever possible, give one or more examples of how you want the output to look. This ensures the model produces exactly what you want without deviating from your goals. These techniques are known as one-shot prompting (a single example) and few-shot prompting (multiple examples). The paper recommends providing at least three to five examples when possible.

Of course, most prompts don’t need examples, but whenever the output must follow a specific structure, they greatly improve results. For example, if you are feeding the model reviews for sentiment analysis, you can show it exactly how each review should be labeled, such as negative, neutral, or positive.
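The sentiment example above can be sketched as a small prompt builder: labeled examples go first, then the unlabeled review the model should complete. The review texts and labels here are made up for illustration:

```python
def build_few_shot_prompt(examples, review):
    """Assemble a few-shot classification prompt: labeled examples first,
    then the unlabeled input the model is asked to complete."""
    lines = ["Classify each review as POSITIVE, NEUTRAL, or NEGATIVE.", ""]
    for text, label in examples:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    # End with the new review and a dangling label for the model to fill in.
    lines.append(f"Review: {review}")
    lines.append("Sentiment:")
    return "\n".join(lines)

examples = [
    ("The battery lasts all week, love it.", "POSITIVE"),
    ("It works, nothing special.", "NEUTRAL"),
    ("Broke after two days.", "NEGATIVE"),
]
prompt = build_few_shot_prompt(examples, "Shipping was fast but the lid leaks.")
```

Sending `prompt` to any chatbot should return a single label, because the three worked examples pin down both the format and the allowed answers.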

4. Try Step-Back Prompting

Step-back prompting divides a request into two steps: first ask a general, principle-level question, then feed the answer back to the model to get a detailed response. The first question activates the model’s background knowledge on the topic, so it gives a better answer when you ask the real question using the details from the first prompt.

This approach is better than directly asking for the answer, as it “encourages LLMs to think critically and apply their knowledge in new and creative ways”.

The example below shows this approach in action:

First prompt: What are the core principles of an effective product description?

[Image: Gemini listing product description principles]

Once you get the answer, follow up with the second prompt, which builds on the step-back answer.

Second prompt: Using these principles, write a product description for a new smartwatch with the following specs (provide specs).

[Image: Gemini providing a full product description of a smartwatch]
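The two-step flow above can be sketched as a small wrapper around any chat client. `ask_llm` here is a hypothetical stand-in for whatever API or chatbot you use, and the stub below only exists so the sketch runs without a network call:

```python
def step_back(ask_llm, general_question: str, task: str) -> str:
    """Step-back prompting: first elicit background principles with a broad
    question, then feed them into the real request as context."""
    principles = ask_llm(general_question)          # step 1: broad question
    final_prompt = (
        f"Background principles:\n{principles}\n\n"
        f"Using these principles, {task}"
    )
    return ask_llm(final_prompt)                    # step 2: detailed task

# Usage with any client; a stub stands in for the model here.
def fake_llm(prompt: str) -> str:
    return f"[model answer to: {prompt[:40]}...]"

answer = step_back(
    fake_llm,
    "What are the core principles of an effective product description?",
    "write a product description for a new smartwatch.",
)
```

In a chat window you would simply paste the second prompt after the model’s first answer; the wrapper just automates that copy-paste.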

5. Focus On Instructions Instead of Restrictions

LLMs work better when you instruct them to “do something” rather than telling them “don’t do something”. Instead of listing things the model should avoid, tell it exactly what to include. If you load your prompt with restrictions, the model may start guessing what is allowed and restrict itself more than necessary. Direct instructions also lead to more creative answers.

While the paper does say it’s okay to use constraints when necessary, positive instructions are generally the better choice. Below is an example of the concept:

Bad prompt: Describe our new eco‑bottle. Don’t use superlatives and don’t talk about its price.

Good prompt: Craft a 2‑sentence product description for our eco‑bottle that highlights its 48‑hour insulation and 100% recycled steel construction. Use clear, benefit‑driven language without mentioning price.

[Image: Gemini giving a short description of a water bottle]

These were some of the tips we found useful for everyday use. That said, there are many other ways to produce better AI responses than just crafting better prompts, such as personalizing the AI.



About the Author: Early Bird