
Artificial intelligence is making our lives easier. Popular AI tools such as ChatGPT, Google Gemini, Claude, and Perplexity AI have become personal assistants to millions of working professionals and students, and a new AI chatbot with a new use case seems to arrive every fortnight. However, one thing remains common across all of these chatbots: the quality of the prompt largely determines the quality of the output.
Prompts, in simple words, are ways in which a user communicates with a large language model (LLM) or chatbot. Interestingly, one does not need a computer science degree to master prompt engineering; it is a skill that anyone can obtain with some practice. In this article, we will share 10 practical and platform-agnostic tips that may help you become a top AI user, boost your efficiency at work, and even unlock your creativity.
The Persona-Task-Context-Format Framework
Since prompting is a form of communication, think of it as a structured instruction. According to Google, one of the most effective ways of prompting includes four components – persona, task, context, and format. ‘Persona’ means who the AI is acting as, for example, ‘You are a tour guide.’ The task is what it should do – suggest places to try authentic local food, say. ‘Context’ is the background information the model needs, such as where you are travelling and for how long, while ‘format’ specifies how the answer should be presented, for instance as a bulleted list or a table.
Although one need not employ all four components in every prompt, using at least two or three can significantly improve outputs.
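The framework above can be sketched as a small reusable prompt builder. This is a minimal illustration, not part of any official tool; the function name and field labels are assumptions made for the example.

```python
def build_prompt(task, persona=None, context=None, fmt=None):
    """Assemble a prompt from the persona-task-context-format components.

    Only `task` is required; the other components are optional,
    mirroring the advice to use at least two or three of the four.
    """
    parts = []
    if persona:
        parts.append(f"You are {persona}.")
    parts.append(task)
    if context:
        parts.append(f"Context: {context}")
    if fmt:
        parts.append(f"Format: {fmt}")
    return " ".join(parts)

prompt = build_prompt(
    task="Suggest places to try authentic local food.",
    persona="a tour guide",
    context="I am visiting Jaipur for a weekend.",
    fmt="a numbered list of five suggestions",
)
print(prompt)
```

Dropping any optional argument simply omits that component, so the same helper covers two-, three-, or four-part prompts.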
‘What happens in vagueness stays in vagueness’
Perhaps the biggest hurdle most users face is a lack of clarity in the responses they get back, and the usual culprit is a vague prompt. For an AI chatbot to generate relevant and contextual answers, one needs to be very specific. Instead of simply inputting, ‘Write a summary,’ one should use a clear prompt like the example below:
“Write a summary of this article in 5 bullet points with a focus on economic implications for the IT sector.” Experts advise using clear verbs such as ‘translate’, ‘rephrase’, ‘summarise’, ‘create’, and ‘compare’. These verbs should be accompanied by helpful context or constraints such as length, audience, tone, etc.
Refine, refine, refine
Prompting is not the one-time task most of us would like to believe it is. According to experts, users should treat it as a back-and-forth dialogue. If you don’t like the first answer, refine the prompt. If you need the same information in a different format, request it: ask the chatbot to convert it into a table, or to show key facts as flashcards. AI can also change the tone of the content to your preference, refining the output to be casual, formal, or simply more engaging. Establishing a dialogue and iterating not only improves accuracy but also teaches you which kinds of prompts yield the best results.
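Under the hood, this back-and-forth is just a growing message history, the shape most chat APIs (OpenAI, Anthropic, Gemini) expect. The sketch below models that history as a plain list of dicts; no real API call is made, and the draft reply is a placeholder.

```python
# Each turn of the dialogue is appended to a running history instead of
# starting a fresh, context-free prompt every time.
history = [
    {"role": "user", "content": "Summarise this article."},
    {"role": "assistant", "content": "(first draft summary...)"},
]

def refine(history, follow_up):
    """Add a follow-up instruction without restarting the conversation."""
    history.append({"role": "user", "content": follow_up})
    return history

refine(history, "Make it more casual and show the key facts as a table.")
```

Because the earlier turns stay in the history, the model sees the original request and the refinement together, which is what makes iterative prompting work.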
Examples can work wonders
If you are working on formatting, categorisation, or structured results, experts suggest using examples, as they can lead to the most accurate results. This technique is also known as few-shot prompting, as one is guiding an AI model output by giving a small number of examples or shots within the prompt itself. Here’s a sample prompt:
“This is a sample tweet that combines humour and statistics to advocate for climate awareness. Write 5 more in the same style.”
When you give clear examples to an AI model, you are essentially teaching it to imitate the desired pattern or style. This is particularly useful for writing tasks, data extraction, and any task that needs structured output.
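A few-shot prompt is simply the examples laid out before the new input. The helper below is an illustrative sketch (the labels, ticket categories, and wording are assumptions for the example), showing the pattern for a structured classification task:

```python
def few_shot_prompt(instruction, examples, new_input):
    """Build a prompt with a few input/output 'shots' before the new input."""
    lines = [instruction, ""]
    for inp, out in examples:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
        lines.append("")
    # End with the new input and an open 'Output:' for the model to complete.
    lines.append(f"Input: {new_input}")
    lines.append("Output:")
    return "\n".join(lines)

prompt = few_shot_prompt(
    "Classify each support ticket as 'billing' or 'technical'.",
    [("I was charged twice this month", "billing"),
     ("The app crashes on startup", "technical")],
    "My invoice shows the wrong amount",
)
```

Ending the prompt with a bare `Output:` invites the model to continue the established pattern rather than improvise a new format.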
Role-play with AI
Several studies have shown that AI performs better when it is assigned a specific identity or role. This can shape the tone, knowledge scope, and style of the output. If you are looking for career advice, assigning the role of a career coach in the prompt is advisable. For example – “Act as a career coach. Help me write my resume summary for the role of a deputy editor.” One can also specify tone, for example, ‘Write this as a motivational coach, using a humorous and informal tone.’ The clearer the persona, the sharper the results.
Chain-of-thought (CoT) for better reasoning
Now that ChatGPT, Gemini, and DeepSeek offer reasoning models, they have become adept at complex reasoning and problem-solving tasks. One simply needs to ask the AI to ‘think step by step’ or to explain the rationale behind its output. This is called chain-of-thought prompting. A simple example would be, “Explain the life cycle of a butterfly. Think step by step in different stages.” Chain-of-thought prompting can lead to more accurate and often logically sound answers, which is especially useful in math, coding, or logic-based queries.
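Because chain-of-thought prompting is nothing more than an added instruction, it is easy to bolt onto any existing prompt. A minimal sketch, with an illustrative suffix of my own wording:

```python
# The only change chain-of-thought prompting makes is appending an
# instruction to reason step by step before answering.
COT_SUFFIX = "Think through this step by step, then give the final answer."

def with_chain_of_thought(prompt):
    return f"{prompt}\n\n{COT_SUFFIX}"

result = with_chain_of_thought("Explain the life cycle of a butterfly.")
print(result)
```

The same one-line change can be applied to math, coding, or logic queries without touching the rest of the prompt.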
Feed it your files
This is only advisable under certain circumstances, and one should exercise caution when uploading sensitive documents. Those using ChatGPT, Gemini, or Claude with file upload capabilities have the option to reference their own data directly. For example, “Use the attached document and create a summary of the key findings in bullet points.” Providing context from your own materials often makes outputs more relevant and personalised.
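The same idea works outside a chat app's upload button: a file's text can be read and placed into the prompt directly. A minimal sketch (the file name and contents are made up for the example; the caution about sensitive documents applies just as much here):

```python
from pathlib import Path

def prompt_with_file(instruction, path):
    """Embed a local text file's contents into the prompt as context."""
    document = Path(path).read_text(encoding="utf-8")
    return f"{instruction}\n\n--- Document ---\n{document}"

# Usage, with a throwaway file created just for this example:
Path("findings.txt").write_text("Revenue grew 12% year on year.", encoding="utf-8")
prompt = prompt_with_file(
    "Summarise the key findings of the attached document in bullet points.",
    "findings.txt",
)
```

Keeping the instruction and the document visually separated (here with a `--- Document ---` divider) helps the model distinguish what to do from what to read.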
Test, learn, repeat
Prompt engineering is trial and error: what works once may not work in every scenario. It is a good idea to document your best prompts, track what works, and, most importantly, stay curious. For repeated tasks like report writing, brainstorming, or customer-support templates, reusing effective prompts you have saved can save considerable time.
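A saved prompt becomes far more reusable as a template with placeholders. The sketch below uses Python's standard `string.Template`; the library names and fields are illustrative, not a prescribed format.

```python
from string import Template

# A personal library of prompts that proved effective, with $placeholders
# for the parts that change between uses.
PROMPT_LIBRARY = {
    "report_summary": Template(
        "Summarise the attached $doc_type in 5 bullet points "
        "for a $audience audience."
    ),
    "brainstorm": Template(
        "Act as a creative director. Brainstorm 10 ideas for $topic."
    ),
}

prompt = PROMPT_LIBRARY["report_summary"].substitute(
    doc_type="quarterly report", audience="non-technical"
)
```

`substitute` raises an error if a placeholder is left unfilled, which is a useful guard against sending a half-finished template to the chatbot.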
AI is ubiquitous, but remember that it is not here to replace you; rather, it is here to amplify your critical thinking. Whatever your line of work, better prompting skills lead to better outputs, making working with AI faster, easier, and more fun.