Prompt engineering has taken the generative AI world by storm. The job, which entails optimizing textual input to communicate effectively with large language models, has been hailed by the World Economic Forum as the number one “job of the future,” while OpenAI CEO Sam Altman has characterized it as an “amazingly high-leveraged skill.” Social media brims with a new wave of influencers showcasing “magic prompts” and promising astonishing outcomes.
However, despite the buzz surrounding it, the prominence of prompt engineering may be fleeting for several reasons. First, future generations of AI systems will become more intuitive and adept at understanding natural language, reducing the need for meticulously engineered prompts. Second, new AI language models like GPT-4 already show great promise in crafting prompts; AI itself is on the verge of rendering prompt engineering obsolete. Lastly, the efficacy of prompts is contingent upon the specific algorithm, limiting their utility across diverse AI models and versions.
So, what is a more enduring and adaptable skill that will keep enabling us to harness the potential of generative AI? It is problem formulation — the ability to identify, analyze, and delineate problems.
Problem formulation and prompt engineering differ in their focus, core tasks, and underlying abilities. Prompt engineering focuses on crafting the optimal textual input by selecting the appropriate words, phrases, sentence structures, and punctuation. In contrast, problem formulation emphasizes defining the problem by delineating its focus, scope, and boundaries. Prompt engineering requires a firm grasp of a specific AI tool and linguistic proficiency, while problem formulation necessitates a comprehensive understanding of the problem domain and the ability to distill real-world issues. The fact is, without a well-formulated problem, even the most sophisticated prompts will fall short. However, once a problem is clearly defined, the linguistic nuances of a prompt become tangential to the solution.
Unfortunately, problem formulation is a widely overlooked and underdeveloped skill for most of us. One reason is the disproportionate emphasis placed on problem-solving at the expense of problem formulation. This imbalance is perhaps best illustrated by the prevalent yet misguided management adage, “don’t bring me problems, bring me solutions.” It is therefore not surprising that a recent survey revealed that 85% of C-suite executives consider their organizations bad at diagnosing problems.
How can you get better at problem formulation? By synthesizing insights from past research on problem formulation and job design, as well as my own experience and research on crowdsourcing platforms — where organizational challenges are regularly articulated and opened up to large audiences — I have identified four key components for effective problem formulation: problem diagnosis, decomposition, reframing, and constraint design.
Problem Diagnosis
Problem diagnosis is about identifying the core problem for AI to solve. In other words, it concerns identifying the main objective you want generative AI to accomplish. Some problems are relatively simple to pinpoint, such as when the objective is gaining information on a specific topic, like various HRM strategies for employee compensation. Others are more challenging, such as when exploring solutions to an innovation problem.
A case in point is InnoCentive (now Wazoku Crowd). The company has helped its clients formulate more than 2,500 problems, with an impressive success rate of over 80%. My interviews with InnoCentive employees revealed that a key factor behind this success was their ability to discern the fundamental issue underlying a problem. In fact, they often start their problem formulation process by using the “Five Whys” technique to distinguish root causes from mere symptoms.
A particular instance is the subarctic oil problem, which involved cleaning up subarctic waters after the catastrophic Exxon Valdez oil spill. Collaborating with the Oil Spill Recovery Institute, InnoCentive pinpointed the root cause of the oil cleanup issue as the viscosity of the crude oil: the frozen oil became too thick to pump from barges. This diagnosis was key to finally cracking the two-decade-old problem with a solution that involved using a modified version of construction equipment designed to vibrate the oil, keeping it in a liquid state.
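If you want to rehearse this kind of diagnosis with a generative AI assistant, a minimal sketch of an automated Five Whys loop might look like the following. The `ask_llm` helper is a hypothetical placeholder, not any particular vendor’s API; wire it to whatever model you use.

```python
# Minimal Five Whys sketch for root-cause diagnosis with an LLM.
# `ask_llm` is a hypothetical stand-in for a call to your model provider.

def ask_llm(prompt: str) -> str:
    """Placeholder for a large language model call."""
    raise NotImplementedError("Connect this to your LLM provider.")

def five_whys(symptom: str, depth: int = 5) -> list[str]:
    """Iteratively ask 'why' to move from an observed symptom toward a root cause."""
    chain = [symptom]
    for _ in range(depth):
        prompt = (
            f"We observed the following problem: '{chain[-1]}'. "
            "In one sentence, what is the most likely underlying cause?"
        )
        chain.append(ask_llm(prompt))
    return chain  # the last element approximates the root cause to formulate around

# Example (hypothetical):
# five_whys("Recovered crude oil cannot be pumped from barges in subarctic waters")
```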
Problem Decomposition
Problem decomposition entails breaking down complex problems into smaller, manageable sub-problems. This is particularly important when you are tackling multifaceted problems, which are often too convoluted to yield useful solutions.
Take the InnoCentive Amyotrophic Lateral Sclerosis (ALS) challenge for example. Rather than seeking solutions for the broad problem of discovering a treatment for ALS, the challenge concentrated on a subcomponent of it: detecting and monitoring the progress of the disease. Consequently, an ALS biomarker was developed for the first time, providing a non-invasive and cost-efficient solution based on measuring electrical current flow through muscle tissue.
I tested how AI improves with problem decomposition using a timely and common organizational challenge: implementing a robust cybersecurity framework. Bing AI’s solutions to the broad problem were too generic to be immediately useful. But after breaking it down into sub-problems (e.g., security policies, vulnerability assessments, authentication protocols, and employee training), the solutions improved considerably. Methods such as functional decomposition or a work breakdown structure can help you visually depict complex problems and simplify the identification of the individual components and interconnections that are most relevant for your organization.
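As a rough illustration of how decomposition changes what you hand to the model, the sketch below splits the broad cybersecurity challenge into the sub-problems mentioned above and queries each one separately. It again assumes a hypothetical `ask_llm` helper rather than any specific product’s API, and the sub-problem wording is illustrative.

```python
# Sketch: prompt an LLM per sub-problem instead of with one monolithic question.
# `ask_llm` is a hypothetical placeholder for your model call.

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("Connect this to your LLM provider.")

BROAD_PROBLEM = "Implement a robust cybersecurity framework for a mid-sized company."

SUB_PROBLEMS = [
    "Draft security policies and governance for a mid-sized company.",
    "Design a recurring vulnerability assessment process.",
    "Recommend authentication protocols and access controls.",
    "Outline an employee security-awareness training program.",
]

def solve_by_decomposition(sub_problems: list[str]) -> dict[str, str]:
    """Collect a focused answer for each sub-problem, keyed by the sub-problem text."""
    return {
        sp: ask_llm(f"Propose concrete, actionable steps to: {sp}")
        for sp in sub_problems
    }
```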
Problem Reframing
Problem reframing involves changing the perspective from which a problem is viewed, enabling alternative interpretations. By reframing a problem in various ways, you can guide AI to broaden the scope of potential solutions, which can, in turn, help you find optimal solutions and overcome creative roadblocks.
Consider Doug Dietz, an innovation architect at GE Healthcare, whose main responsibility was designing state-of-the-art MRI scanners. During a hospital visit, he saw a terrified child awaiting an MRI scan and discovered that a staggering 80% of children needed sedation to cope with the intimidating experience. This revelation prompted him to reframe the problem: “How can we turn the daunting MRI experience into an exciting adventure for kids?” This fresh angle led to the development of the GE Adventure Series, which dramatically lowered pediatric sedation rates to a mere 15%, increased patient satisfaction scores by 90%, and improved machine efficiency.
Now imagine this: employees are complaining about the lack of available parking spaces at the office building. The initial framing may focus on increasing parking capacity, but by reframing the problem from the employees’ perspective (finding parking stressful, or having limited commuting options) you can explore different solutions. Indeed, when I asked ChatGPT to generate solutions for the parking problem using the initial and alternative frames, the former yielded solutions centered on optimizing parking layouts or allocation and finding new spaces. The latter produced a more diverse set of solutions, such as promoting alternative transportation, sustainable commuting, and remote work.
To reframe problems effectively, consider taking the perspective of users, exploring analogies to represent the problem, using abstraction, and proactively questioning the problem’s objectives or identifying missing components in the problem definition.
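One lightweight way to put this into practice is to generate solutions under several explicit frames and compare them side by side. The sketch below does this for the parking example; the frame wording is illustrative and `ask_llm` is again a hypothetical helper.

```python
# Sketch: ask for solutions under multiple framings of the same underlying problem.

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("Connect this to your LLM provider.")

FRAMES = {
    "capacity":  "Employees complain there are not enough parking spaces at the office.",
    "stress":    "Arriving at the office by car is a stressful start to the workday.",
    "commuting": "Employees have limited commuting options besides driving alone.",
}

def solutions_by_frame(frames: dict[str, str]) -> dict[str, str]:
    """Generate a solution set for each framing, so the results can be compared."""
    return {
        name: ask_llm(f"Problem: {statement}\nSuggest five distinct solutions.")
        for name, statement in frames.items()
    }
```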
Problem Constraint Design
Problem constraint design focuses on delineating the boundaries of a problem by defining input, process, and output restrictions of the solution search. You can use constraints to direct AI in generating solutions valuable for the task at hand. When the task is primarily productivity-oriented, employing specific and strict constraints to outline the context, boundaries, and outcome criteria is often more appropriate. In contrast, for creativity-oriented tasks, experimenting with imposing, modifying, and removing constraints allows exploring a wider solution space and discovering novel perspectives.
For example, brand managers are already using AI tools such as Lately or Jasper to produce useful social media content at scale. To ensure this content is aligned with different media and the brand image, they often set precise constraints on length, format, tone, or target audience.
When seeking true originality, however, brand managers can eliminate formatting constraints altogether or constrain the output to an unconventional format. A great example is GoFundMe’s Help Changes Everything campaign. The company aimed to generate year-in-review creative content that would not only express gratitude to its donors and evoke emotions but also stand out from typical year-end content. To accomplish this, they set unorthodox constraints: the visuals would rely exclusively on AI-generated street mural-style art and feature all fundraising campaigns and donors. DALL-E and Stable Diffusion generated individual images that were then transformed into an emotionally charged video. The result: a visually cohesive and striking aesthetic that garnered widespread acclaim.
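In practice, such constraints often end up encoded directly in the prompt. A minimal sketch of layering named constraints onto a task might look like this; the constraint names and values are illustrative, and `ask_llm` remains a hypothetical helper.

```python
# Sketch: layer explicit constraints onto a content-generation prompt.

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("Connect this to your LLM provider.")

def constrained_prompt(task: str, constraints: dict[str, str]) -> str:
    """Append named constraints (length, format, tone, audience, ...) to a task."""
    lines = [task, "Constraints:"]
    lines += [f"- {name}: {value}" for name, value in constraints.items()]
    return "\n".join(lines)

# Productivity-oriented: strict constraints keep output on-brand and ready to use.
strict = constrained_prompt(
    "Write a social media post announcing our new product line.",
    {"length": "under 60 words", "format": "single paragraph",
     "tone": "warm and professional", "audience": "existing customers"},
)

# Creativity-oriented: drop or swap constraints to widen the solution space.
loose = constrained_prompt(
    "Express gratitude to our donors for this year.",
    {"format": "street mural-style visual concept, no text overlay"},
)
# ask_llm(strict); ask_llm(loose)
```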
Overall, honing skills in problem diagnosis, decomposition, reframing, and constraint design is essential for aligning AI outcomes with task objectives and fostering effective collaboration with AI systems.
Although prompt engineering may hold the spotlight in the short term, its lack of sustainability, versatility, and transferability limits its long-term relevance. Overemphasizing the crafting of the perfect combination of words can even be counterproductive, as it may detract from the exploration of the problem itself and diminish one’s sense of control over the creative process. Instead, mastering problem formulation could be the key to navigating the uncertain future alongside sophisticated AI systems. It might prove to be as pivotal as learning programming languages was during the early days of computing.