
How to Parse Value from Large Language Models in 2025

The Gist

  • Prompt significance. Prompt engineering is crucial in formulating questions for AI-based solutions, guiding users to leverage AI effectively in marketing.
  • Enhancing experience. By addressing overlooked data details and employing transfer learning, prompt engineering improves user experiences and response accuracy.
  • Marketer adoption. As AI becomes more prevalent, marketers must learn prompt engineering techniques to optimize results and make informed decisions based on AI-generated data.

With all the super-duper global excitement about AI, especially among content marketers, you will likely hear the word “prompt” repeatedly.

Prompts, the key instruction for large language models, are how people talk to their AI of choice. But how do people plan what to say to their AI? The starting point is prompt engineering.

The word “engineering” conjures up the idea of technical expertise, and marketers are facing more things to engineer through AI. Marketers continue to find recommendations for what to do with AI solutions. Many of those solutions revolve around prompts.

The latest features in AI solutions and agents are leading analysts and managers to deeper applications of prompt engineering. Prompt engineering covers the methods for formulating questions when working with AI-based solutions. As such, it serves as the guiding principle for understanding how best to leverage AI.

In this post, I will examine the fundamentals of prompt engineering and explore how marketers can incorporate it while keeping customer experience as a priority.


Prompt Engineering in 2025: Key Statistics

In 2025, the global prompt-engineering market is estimated at US$505 billion and projected to reach US$6.5 trillion by 2034, according to Precedence Research.

LinkedIn job postings referencing “prompt engineering” have surged 434% since 2023; certified prompt engineers earn 27% higher salaries, and 68% of firms now provide training in the skill.

Among nearly 1,900 marketers surveyed, 62% said their firm does not train employees on prompting, though 40% are in “experimentation” phase and 26% in “integration” phase of AI adoption, according to the Marketing AI Institute.

70% of AI engineers update prompts monthly or more frequently—and yet 31% still lack structured prompt‑management tooling, according to Amplify Partners.

A 2025 study found 78% of AI project failures stem from poor human‑AI communication—successful teams report ~340% higher ROI compared to ad‑hoc prompting, according to ProfileTree.

The Role of Prompt Engineering in Marketing

Prompts are essentially words interpreted as instructions for a language model. They can be conveyed in various forms, such as a brief question, a paragraph, a bulleted list or a description. Prompts are designed to resemble natural speech, making them more user-friendly than typing characters in a text window.

Since the debut of ChatGPT, marketers have used prompts as a go-to tool to assist marketing campaigns and marketing strategy. But that hasn’t quite let them off the hook. Prompt engineering is a useful marketing skill and tactic, but C-suite leaders still expect measurable results from marketing departments and chief marketing officers.

According to the 2025 State of the CMO report, 69% of marketing leaders say their leadership now expects quantifiable, measurable results for everything their department does—up from 59% just two years ago.

The Types of Prompts

When using Gemini or ChatGPT, you provide instructions either as an imperative, like “Divide 1,245 by 38,” or as a question, such as “What is a conversion rate?” The model typically interprets the words in segments: instructions, context and input data. Context and input data (or a provided example) help refine the prompt, ensuring the model understands the specifics. Once the segments are identified, the model generates a response.
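To make that segmentation concrete, here is a minimal Python sketch of how the three segments might be assembled into a single prompt string. The `build_prompt` helper and its labels are illustrative assumptions, not part of any vendor API:

```python
def build_prompt(instructions: str, context: str = "", input_data: str = "") -> str:
    """Assemble the three segments a model typically distinguishes:
    instructions, context and input data. Labels are illustrative."""
    parts = [instructions]
    if context:
        parts.append(f"Context: {context}")
    if input_data:
        parts.append(f"Input: {input_data}")
    return "\n".join(parts)

# Example: an instruction refined with context and input data.
prompt = build_prompt(
    "Answer in one sentence for a marketing audience.",
    context="The reader runs paid search campaigns.",
    input_data="What is a conversion rate?",
)
```

Passing only the first argument leaves a bare instruction; adding context and input data is what refines the prompt so the model understands the specifics.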

The Steps

Often refining a prompt message involves multiple steps, incorporating various prompt engineering formats along the way. One increasingly popular format is Chain of Thought (CoT) prompts. CoT prompts consist of a series of intermediate steps that guide the language model toward the final output. They are particularly effective for answers that necessitate multiple steps to acquire the correct details. It’s a thought process akin to a decision tree, but the results are presented as brief texts rather than a graphical representation.
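A CoT prompt can be as simple as spelling out the intermediate steps the model should walk through before it answers. The helper below is a hypothetical sketch of that structure; the function name and step wording are my own, not an established API:

```python
def chain_of_thought_prompt(question: str, steps: list[str]) -> str:
    """Build a Chain-of-Thought prompt: number the intermediate steps
    the model should reason through before giving the final answer."""
    lines = [question, "Work through these intermediate steps before answering:"]
    lines += [f"{i}. {step}" for i, step in enumerate(steps, start=1)]
    lines.append("Then state the final answer.")
    return "\n".join(lines)

# Example: a marketing question that needs multiple steps.
cot = chain_of_thought_prompt(
    "Which campaign had the better conversion rate?",
    ["Compute conversions divided by visits for campaign A",
     "Compute conversions divided by visits for campaign B",
     "Compare the two rates"],
)
```

The numbered steps play the role of the decision tree described above, rendered as brief text rather than a graphic.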

The Subcategories

There are several key subcategories of CoT: zero-shot, one-shot and few-shot. A shot refers to an example provided to illustrate the desired output. This technique aims to guide the model in explaining its reasoning, thereby adding a minimal training step to the initial foundation provided by the large language model (LLM).

So, a zero-shot prompt would be the example I described earlier (“Divide 1,245 by 38”), because there is no example to show the model. A one-shot prompt, in contrast, shows an example of the output needed. Here is what it looks like in Google Gemini and ChatGPT:

Gemini displays its ability to handle multi-step math prompts with approximate decimal outputs and an option to show its reasoning process. ChatGPT, given the prompt “1245 divided by 38 = 32.8. Divided 367 by 24,” responds with “367 divided by 24 equals 15.29 (rounded to two decimal places),” returning a precise answer without displaying intermediate steps or calculations.

Note that I gave an example with one place behind the decimal. Yet the answer kept several decimal places with both Gemini and ChatGPT.
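The difference between zero-, one- and few-shot prompting is simply the number of worked examples placed ahead of the new task. A small Python helper (hypothetical, not any vendor’s API) makes the pattern visible:

```python
def shot_prompt(task: str, examples: list[tuple[str, str]]) -> str:
    """Zero examples -> zero-shot; one -> one-shot; several -> few-shot.
    Each example is a (question, answer) pair shown before the new task."""
    blocks = [f"Q: {q}\nA: {a}" for q, a in examples]
    blocks.append(f"Q: {task}\nA:")
    return "\n\n".join(blocks)

# Zero-shot: no example, just the task.
zero_shot = shot_prompt("Divide 1,245 by 38", [])

# One-shot: one worked example showing the desired output format.
one_shot = shot_prompt("Divide 367 by 24", [("Divide 1,245 by 38", "32.8")])
```

The one-shot example shows the model an answer with one decimal place, which is exactly the formatting hint the division experiment above relies on.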

Improvement on Inaccurate Results

In the early days of AI chatbots, users occasionally encountered slightly inaccurate results, even with a one-shot prompt. When I once tried to divide 367 by 15, even with an example showing that I wanted the answer returned with one digit after the decimal, ChatGPT answered 15 and 7/8, and the 7/8 fraction was not correct. As people are discovering with AI, multiple shots are often needed to get good answers.

Today’s AI models have improved significantly since the original ChatGPT. Much of that development has focused on displaying the reasoning behind a response. Gemini, which supplanted Bard, can display how the answer to the division was developed.

The display can clue the user in on what should go in a follow-up prompt if more information or corrections are needed.

Self-Consistency & Other Techniques

Another prompt engineering technique is self-consistency, which ensures that a set of generated responses agree with one another. The model is asked to generate multiple responses to a prompt, and the response most consistent with the others is selected. For example, if I wanted a product description for the “Piero,” a new water-resistant smartphone my imaginary company is launching, I would write the following prompt:

A user demonstrates self-consistency in Gemini by requesting a new response that aligns with three prior statements about the Piero smartphone: it is water-resistant, has a long-lasting battery and takes great photos.

Gemini offers two approaches to consider. The first is a short-word response: the model took the description in each supplied statement and provided a single synonymous word that incorporates all three.

In this example, Gemini returned the sentence “The new Piero smartphone is durable.” 

The second approach, below, is a follow-up query that creates a response incorporating the key phrases to be consistently maintained.

Gemini generates a cohesive product description for the fictional Piero smartphone, emphasizing durability, water resistance, battery life and photo quality, showcasing the self-consistency technique in prompt engineering.

The ChatGPT version gave a similar but briefer response. In this instance, it labeled the new response as number four and kept it short.

ChatGPT demonstrates varying alignment with a prompt requiring consistency: one response introduces a new attribute, fast performance, while another reinforces the existing traits of water resistance, long battery life and photo quality, reflecting how prompt specificity influences output fidelity.

You can ask ChatGPT to provide a longer description, asking for a paragraph or a set word count. The idea can be a draft for a marketing message that highlights the best features of a product or service.

ChatGPT expands on the original prompt with a polished marketing-style paragraph reinforcing the Piero smartphone’s water resistance, battery life and photo quality, demonstrating how prompting can shape tone and narrative depth.

Self-consistency is meant to ensure that the generated response is accurate and complete, incorporating the highlights of the supplied phrases. The latest models emphasize brevity, so you can add guidelines to your prompts or adjust the model’s temperature if you have access to its parameters.
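Under the hood, a simple way to approximate self-consistency selection is to generate several candidate responses and keep the one that overlaps most with the rest. The word-overlap scoring below is a rough illustrative stand-in for whatever similarity measure a production system would actually use:

```python
import re

def most_consistent(responses: list[str]) -> int:
    """Return the index of the candidate that shares the most
    vocabulary with the other candidates (a crude consistency score)."""
    def words(text: str) -> set[str]:
        return set(re.findall(r"[a-z][a-z\-]*", text.lower()))
    def score(i: int) -> int:
        return sum(len(words(responses[i]) & words(other))
                   for j, other in enumerate(responses) if j != i)
    return max(range(len(responses)), key=score)

# Three on-message Piero descriptions plus one off-topic outlier.
candidates = [
    "The Piero smartphone is water-resistant and durable.",
    "The Piero smartphone is water-resistant with a long-lasting battery.",
    "The Piero smartphone takes great photos and is water-resistant.",
    "Our store opens at nine on weekdays.",
]
best = candidates[most_consistent(candidates)]
```

The outlier shares almost no vocabulary with the other three, so it scores lowest and one of the on-message descriptions is selected.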


Related Article: Prompt Engineering Basics for Marketers, Advertisers and Content Producers

Other Prompt Techniques

CoT and self-consistency are not the only prompt engineering techniques. Another, Least-to-Most, breaks a problem into subproblems. The result is a series of prompt-response pairs that let users solve the problem by identifying the hierarchy of steps the model should take.

So let’s say a prompt concerns a delivery company that needs to optimize its route between five locations: the Warehouse (W) and Customers A, B, C and D. The distances between locations (in miles) are given, and the delivery truck must start at the Warehouse, visit every customer exactly once and return to the Warehouse.
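Least-to-most turns this routing problem into a sequence of progressively larger sub-prompts: start with the Warehouse plus two customers, then add one customer per step. The helper and prompt wording below are illustrative assumptions, not a fixed template:

```python
def least_to_most_prompts(customers: list[str]) -> list[str]:
    """Build a least-to-most prompt sequence for the routing example:
    the first prompt covers the Warehouse plus two customers (three
    locations), and each later prompt adds one more customer."""
    prompts = []
    for k in range(2, len(customers) + 1):
        subset = ", ".join(customers[:k])
        prompts.append(
            "Find the shortest route that starts and ends at the Warehouse "
            f"and visits each of these customers exactly once: {subset}. "
            "Use the distance table provided."
        )
    return prompts

steps = least_to_most_prompts(
    ["Customer A", "Customer B", "Customer C", "Customer D"]
)
```

Each prompt’s answer becomes context for the next, so the model extends a known partial route instead of tackling all five locations at once.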

The least-to-most method separates a problem into sections. The prompt starts with a simplified version of the problem, just three locations in this case. The prompt then adds complexity incrementally, here one customer at a time, until a full solution is determined.

Chain of density is another recent prompt technique, a variation on self-consistency. Based on a Columbia University research paper called From Sparse to Dense, chain of density is designed to limit LLM response bias toward the leading portion of a given prompt.

To create a chain of density, users craft a single prompt that generates five increasingly detailed summaries while keeping the summary length constant. The technique is meant to strike the right balance between clarity (favoring fewer entities per token) and informativeness (favoring more entities per token). For example, I used Claude to demonstrate what a chain of density would look like for a prompt on orange juice. Claude’s artifacts feature generated an example of a CoD prompt.

A Claude-generated chain-of-density example for orange juice: an initial basic summary, a second summary adding commercial production details, and a third adding nutritional details such as potassium, folate and antioxidants.

The example allows the iteration to continue several times, each round identifying and adding the most important missing information. This creates an increasingly dense and comprehensive summary of orange juice. The chain-of-density technique originated in efforts to improve general text summarization, so marketers can leverage it to condense complex information in product descriptions, social media posts and ad copy into easily digestible formats. All of these techniques expand the text and media context that guides the model’s response, letting marketers shape the prompt context and input so the language model delivers the desired result more effectively.
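A chain-of-density request can be packed into a single prompt that asks for repeated, same-length rewrites. The template below is a loose illustrative paraphrase of the technique, not the exact wording from the From Sparse to Dense paper:

```python
def chain_of_density_prompt(topic: str, rounds: int = 5) -> str:
    """Build a single prompt requesting `rounds` summaries, each rewrite
    adding the most important missing details at constant length."""
    return (
        f"Write a short summary of {topic}. Then repeat the following "
        f"{rounds - 1} times: identify one to three important details "
        "missing from the previous summary, and rewrite the summary to "
        "include them without making it longer. Show all "
        f"{rounds} summaries."
    )

cod = chain_of_density_prompt("orange juice")
```

One prompt like this yields the whole sequence of summaries at once, which is what makes the orange juice example above possible in a single turn.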

How Prompt Engineering Enhances the User Experience with AI

Address Overlooked Details

Prompt engineering helps address data details an LLM may overlook. Large language models generally perform well with straightforward prompts, relying on training data to associate words with appropriate instructions. However, a model trained only up to a specific cutoff date must rely on known patterns from its training data to comprehend requests about anything after that date.

Transfer Learning

Prompt engineering employs variations of transfer learning, an effective machine learning technique that enables a model to learn from one task and apply that knowledge to a different, yet related, task. As a result, users can integrate new information about places and events beyond a training cutoff date, or apply a heuristic to complex information to generate accurate responses. Without this approach, the non-deterministic nature of certain models can produce responses that look data-driven but are, in reality, poor answers to the prompt.

Differing Ways of Accepting Instructions

Another factor to consider is that each user interface and supporting platform differs in how it accepts instructions, context and input data. MidJourney prompts introduce modifications of these prompt engineering concepts. MidJourney users can customize content type, such as media rendering, with definitive phrases like “high definition,” or adjust composition by incorporating photography or videography terminology, such as “Ultra Wide Angle,” as a prompt detail. These act like other prompt constraints, except the model adapts the output to photographic specifications rather than to length.

As users become familiar with ChatGPT and other AI platforms, they will learn to apply the heuristics generated from a prompt to obtain valuable responses, rather than relying on a zero-shot approach that creates a “genie in a bottle”-like prompt. A positive indicator for marketers is when they and their peers effectively combine prompt responses where feasible. 

Related Article: Top 5 Free Prompt Engineering Courses

What Can Marketers Gain from Prompt Engineering?

Acquiring Foundational Skills

As AI becomes increasingly prevalent, marketers must acquire foundational skills in managing prompts. To fully benefit from prompt engineering, they should view its iterative nature as analogous to the optimization mindset employed in analytics. In analytics, users optimize digital media, such as websites, to improve conversions from digital marketing campaigns. Prompt engineering follows a similar approach, but the optimization is applied to an algorithm instead of digital media. Utilizing AI effectively requires critical evaluation of input to obtain the best responses.

Understanding Risks

Marketers must also understand the potential risks associated with the information or actions derived from the data provided. For example, large language models generate content that seems plausible but may not be grounded in reality. This results in proposed outcomes that appear reasonable but prove to be impractical when implemented. By design large language models do not realize that they don’t know what they don’t know, leading to made-up details at times.

No Genie-in-a-Bottle

Marketers need to assess results thoroughly in relation to the specific situation at hand. The decimal math examples highlight the kinds of issues that may arise. Repeated, consequential content influences AI-generated information, so as prompts are fine-tuned, users should pay attention to recurring decisions. These patterns can reveal sustainable customer preferences.

Regrettably, many users tend to treat ChatGPT as a genie-in-a-bottle, expecting it to cater to their every demand. However, marketers must be prepared for the AI’s verbose output and stay vigilant in parsing valuable details from irrelevant ones.

Iceberg diagram illustrating five hidden risks of LLM-driven marketing, from surface-level impractical outcomes to deeper issues like merely plausible content, lack of real-world grounding, unawareness of ignorance and made-up details. (Simpler Media Group)

Related Article: Top 5 ChatGPT Prompts for Customer Experience Professionals

Prompt Resources

Keeping Track & Conducting Reviews 

By conducting thorough reviews, users can identify best practices for crafting prompts. Marketers should keep track of useful resources to stay updated on the latest strategies. For example, Discord allows users to inquire about prompt engineering, suggestions or feature updates. ChatGPT has a dedicated Discord community focused on learning prompts, staying informed about feature updates and providing support. MidJourney also offers a similar community. Furthermore, there is an overarching Discord group called Learn Prompting, where users can gain insights from other AI tool-users.

Another general resource is the GitHub repository of Democratizing Artificial Intelligence Research, Education, and Technologies (DAIR.ai). The repository explains basic and advanced prompts, as well as examples from the current crop of AI resources. 

How Marketers Use Prompt Engineering

This table outlines practical use cases for prompt engineering in marketing and the benefits they deliver across content, campaigns, and customer experience.

| Use Case | Prompt Technique | Marketing Benefit |
| --- | --- | --- |
| Generate blog content ideas | Zero-shot prompting | Quickly brainstorm with minimal input |
| Summarize customer feedback | Chain-of-Thought (CoT) | Improves structured synthesis of complex inputs |
| Create consistent product descriptions | Self-consistency | Ensures brand voice alignment across assets |
| Draft ad copy variations | Few-shot prompting | Delivers on-brand tone and target audience relevance |
| Personalize chatbot responses | Transfer learning + CoT | Enhances relevance and conversational fluidity |
| Summarize long reports or market analysis | Chain of Density (CoD) | Condenses dense material for exec summaries and social posts |

Final Thoughts on Prompt Engineering

The application of prompt engineering is gaining prominence as AI solutions and features attract attention at breakneck speed. There is growing concern that AI capabilities may surpass human understanding of how to optimally utilize these tools.

In the meantime, marketers aiming to enhance their understanding of AI tools should adopt an experimental approach with specificity. AI is not a broad technology that spans the entire martech stack. The best approach marketers can take to understand AI in customer experience is to understand the prompts designed to operate AI platforms properly.

Editor’s note: This article was updated July 28, 2025, with new information.
