
Prompt Engineering: Is it a New Programming Language?

Transcript

Luu: As programmers and software engineers, we spend years mastering programming languages: learning the syntax, the APIs, and so on. What if I told you the most powerful programming language to learn is not a programming language at all, but simply knowing how to ask the right questions? In the world of AI, the ability to ask the right question, to craft the perfect prompt, might be more valuable than knowing how to implement a binary tree or optimize a database query. I want to welcome you to the world of prompt engineering, where words become algorithms and conversations become code. The topic I want to put up for debate: is prompt engineering a new programming language, or is it just a tool for all of us developers to use on the side? That's right, we're going to have a debate about this question.

The debate style I'm going to use is called Oxford style, which is inspired by the debating society called the Oxford Union Society. It has been around a long time; it was founded in England about 200 years ago. This style consists of the following format. We're going to present a motion. We're going to take the initial vote, which is where everyone participates in this debate. We have a discussion of the arguments, closing arguments, and then we're going to take a final vote. The debate is decided by the largest shift in the audience votes. Your vote counts, and together, we will decide the outcome of this debate.

The motion we have in front of us is: is prompt engineering truly the next programming language, or is it just fancy wordsmithing for those who cannot write JavaScript? Looks like Dilbert has a question: what is a programming language? You might have some sense of it, but it's still a relevant question. Let's take a moment to address this. A programming language is a formal language used to communicate instructions to computers to accomplish tasks. It is a way of communicating with a computer and telling it what to do, at a high level.

Dilbert has another question, which is a good one, just to make sure we're on the same page. According to Claude, prompt engineering is the practice of crafting precise instructions and inputs for AI language models to optimize the output for specific tasks. The key words are instructions and tasks. Similarly, for OpenAI, it is the practice of designing and refining input instructions to effectively communicate tasks and objectives to AI language models. You see the similarity? Now we're clear on what prompt engineering is.

Syntax and Structure (For the Motion)

The debate is going to center on these three areas: syntax and structure, knowledge and expertise, and impact and longevity. The first one explores whether prompt engineering possesses structure, syntax, and formal rules comparable to traditional programming languages. The first argument for the motion is: prompt engineering requires a structured syntax to produce the desired output, much like programming languages. Producing an effective prompt requires crafting a prompt that follows a structure consisting of the following elements. First, you start with a role or persona for the AI, to influence the tone and perspective of the output.

Next, you tell it what you want it to do, whether to classify or summarize. Then we follow up with specific details about the subject we want the AI model to address. We also want to include any contextual information, like background information or additional details. Then we add constraints, in terms of any limitations, word count, or style guidelines. Finally, the output format, whether it's bullet points, JSON, and such. Not all of these elements are absolutely required, but this structure will improve the AI model's performance. Let's illustrate this structure through an example. This is a poorly written prompt. It's almost like writing bad code.

A much better prompt will be something like this, where you start with a persona, act as a cybersecurity expert; the task is to explain; and the content detail is the top three vulnerabilities. The contextual information you have here, that organizations face today, grounds the response in a current enterprise environment. Finally, the constraints: practical examples and recommended mitigation strategies. This structured approach ensures that the AI model interprets the prompt correctly and generates the desired output, much like strict syntax does in a programming language.
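As a sketch, the elements above can be assembled programmatically. The function name and example values here are illustrative, not from any specific prompting library:

```python
def build_prompt(persona, task, details, context="", constraints="", output_format=""):
    """Assemble a prompt from the structural elements described above:
    persona, task, content details, context, constraints, and output format."""
    parts = [f"Act as {persona}.", f"{task} {details}"]
    if context:
        parts.append(f"Context: {context}.")
    if constraints:
        parts.append(f"Constraints: {constraints}.")
    if output_format:
        parts.append(f"Format the output as {output_format}.")
    return " ".join(parts)

prompt = build_prompt(
    persona="a cybersecurity expert",
    task="Explain",
    details="the top three vulnerabilities that organizations face today.",
    constraints="include practical examples and recommended mitigation strategies",
    output_format="a bulleted list",
)
print(prompt)
```

The optional elements are simply skipped when empty, mirroring the point that not every element is required in every prompt.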

Argument number two: modular prompts enable reusability, similar to functions in programming languages. In programming, a function encapsulates code that does a specific task, so we can reuse it many times. We can do the same thing in prompt engineering. This is an example of a function in prompt engineering. You provide the text, and it will generate the word count, the sentiment, as well as an explanation at the end. Once you enter this prompt, you just follow it with the topic you want it to analyze. For example, I enter this topic, QCon SF is an awesome conference, and you see the output there: the number of words, the sentiment, and the explanation. You can repeatedly enter different topics after this, and it will do the same thing and give you the summary.
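A minimal sketch of this "prompt as a function" idea: the same instruction template is reused with different inputs, exactly like calling a function with different arguments. The wording of the instruction is an illustrative reconstruction of the slide, not a quote:

```python
def analysis_prompt(text):
    """Reusable, function-like prompt: fixed instruction, variable input."""
    return (
        "Analyze the text below and reply with: (1) the word count, "
        "(2) the sentiment (positive, negative, or neutral), and "
        "(3) a one-sentence explanation of the sentiment.\n\n"
        f"Text: {text}"
    )

# Call it repeatedly with different "arguments", like a function.
print(analysis_prompt("QCon SF is an awesome conference"))
print(analysis_prompt("The deployment failed again last night"))
```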

Argument number three: emerging best practices and patterns in prompt design parallel programming constructs. In recent years, researchers from Google, Meta, and OpenAI have explored various prompt engineering techniques. A few of those have emerged, such as few-shot, chain-of-thought, and tree-of-thought. They found that by structuring prompts in a way that aligns with the AI's learning capabilities, it's more likely to generate relevant, insightful responses. I'm going to take one moment to talk about the middle one, chain-of-thought. Researchers at Google published this technique in 2022. They wanted to explore the reasoning capability of AI models and how to unlock it, using a very simple method: just ask the model to think step by step, and amazingly the model spits out its thoughts. This is similar to what a human would do when encountering a complex problem.
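As a sketch, the difference between a plain prompt and a chain-of-thought prompt can be as small as one appended instruction. The question below is illustrative; only the "think step by step" trigger comes from the talk:

```python
question = (
    "A cafeteria had 23 apples. It used 20 to make lunch and bought 6 more. "
    "How many apples does it have now?"
)

# Plain prompt: the model may answer directly and skip its reasoning.
plain_prompt = question

# Chain-of-thought prompt: one extra line asks the model to expose its
# intermediate reasoning before giving the final answer.
cot_prompt = question + "\nLet's think step by step."

print(cot_prompt)
```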

Any experienced software engineer will very likely be aware of software design patterns. In the field of prompt engineering, we also have patterns. In 2023, a group from Vanderbilt University published a paper that outlined 16 different patterns. I want to touch on one of my favorite patterns, which is the flipped interaction. We typically interact with AI models in a prompt-and-response manner. We type, send something in, and we wait for the response to come back.

However, for tasks or goals in a domain we're not familiar with, we might want to ask the AI model to ask us questions, because we might not know which questions to ask to guide the process of solving certain problems. All we have to do is state our goal or objective, and then ask it to ask us questions.
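A minimal sketch of such a flipped-interaction prompt, with the goal text made up for illustration (it mirrors the to-do app demo that follows):

```python
# State the goal, then flip the interaction: the model asks, we answer.
flipped_prompt = (
    "I would like to build a to-do application for web and mobile. "
    "From now on, ask me questions, one at a time, until you have enough "
    "information to propose a complete technology stack. "
    "When you have enough information, present your proposal."
)
print(flipped_prompt)
```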

Let’s take an example. I’m going to enter this prompt into ChatGPT, and as you can see at the end, I ask it to ask me a question, and let’s see what it does. I entered in, and the first question that it asked me is, what’s your target platform? I want to say, both. Do you prefer a single code base, or are you open to a separate code base? Both. What is the preferred backend setup, AWS Lambda or a traditional server? Let’s go with traditional. Preferred database? MySQL. Do you have any specific preferences or restrictions regarding programming languages? Let’s go with Python. What level of importance do you place in real-time features? Not much. Almost done. Do you have a preference on a specific frontend framework? Let’s go with React, of course.

Do you plan to develop a mobile application? Yes. There you go. See how smart it is? Hosting backend? Let's go with AWS. There's the proposal. You see how useful that is if you're not familiar with certain domains. It asks us questions, collects that information, and comes up with a proposal that covers the various parts of the stack. There you go. That's the flipped interaction design pattern we talked about. To summarize, for the motion on syntax and structure: prompt engineering exhibits its own form of syntax and structure, and an abstraction to produce desired output, as we saw, like a function. Furthermore, best practices and patterns are emerging.

Syntax and Structure (Against the Motion)

Let's go to the other side, against the motion. Argument number one: prompt engineering lacks the formalized syntax and grammar rules inherent in programming languages. Every programming language has a formally defined syntax, typically specified in BNF notation, whether that's assembly language or Python. This enables the compiler to enforce rules and detect errors up front. Prompts are free-form, flexible natural language and not subject to such constraints. There are no compilers to enforce prompt correctness; typos don't prevent an AI model from generating a response. That's not the same as writing code. Argument number two: prompt engineering relies on natural language, which is inherently ambiguous compared to programming languages. Let's take a look at these two fun examples.

The first one: I saw a man with a telescope. What is this referring to? It could mean that you used a telescope to see the man, or that you saw a man who was holding a telescope. An AI model might choose either interpretation when generating a response. Take the second example: the engineers informed the managers that they had failed. Who is "they" referring to, the engineers or the managers? Likely the second. The point is, ambiguity can lead to unintended interpretations and outputs. This poses challenges for developers and users seeking reliable and consistent responses. Code is unambiguous and deterministic, like this piece of code. It will be interpreted consistently across any standard Python interpreter.
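The transcript doesn't reproduce the code shown on the slide; a minimal stand-in that makes the same point might be:

```python
def word_count(text):
    # The language specification leaves exactly one interpretation:
    # split on whitespace, count the pieces. Same input, same output,
    # on every standard Python interpreter.
    return len(text.split())

print(word_count("I saw a man with a telescope"))  # 7, on every run
```

The sentence that is ambiguous to a human reader is, to the interpreter, just a string with seven whitespace-separated tokens.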

Argument number three: the probabilistic nature of AI models means the same prompt can yield different results, unlike consistent code execution. We all know this: with a function, for the same input, you get the same output. Prompts are probabilistic. We all know this: for the same input, you're going to see a slightly different response. For example, with a prompt asking for a detailed to-do list, you're probably going to get a different list each time you type it in. This can lead to frustration and inconsistency in business use cases. To summarize, against the motion on syntax and structure: prompt engineering lacks the formal syntax, strict rules, and consistent interpretation that define programming languages. It relies on natural language flexibility, making it fundamentally different from code instructions. Keep that in mind.

Skills/Knowledge and Expertise (For the Motion)

Now we're going to move on to the next section, skills and expertise. For the motion, the first argument is: prompt engineering requires specialized knowledge, similar to programming. Just as programmers must understand how APIs or frameworks work, prompt engineers must understand how AI models generate responses, as well as knowing what knobs we can use to tweak the behavior of the model. You probably know that, at a high level, these AI models are known as next-word generators. The model does not just randomly pick the next word; instead it calculates the probability of each next word over the entire vocabulary. The distribution represents the likelihood of each word being the next one.

The example you see here, with the simple prompt, the sun rises on the, returns this list of words with assigned probabilities. On top of this, it applies one of the sampling methods to narrow down the selection to a smaller group of candidate words. Next, I'm going to discuss the two tuning knobs: temperature and sampling method. Think of temperature as a dial that controls how creative the AI model gets with its responses. Sampling methods are ways to filter a large list of words down to a smaller set. The temperature setting directly impacts the level of creativity, as I mentioned. It does this by adjusting the probability distribution through a mathematical function. When the temperature is low, the probability gap between words widens, favoring the high-probability words. This results in more deterministic responses.

When the temperature is high, the probability distribution becomes more even, giving lower-probability words a better chance of being selected. A good analogy is the following: at low temperature, the AI model acts like a baker who follows the recipe exactly, step by step. At high temperature, it acts like a wild, experimental chef who throws ingredients together randomly and hopes the dish will taste good. The other tuning knob is known as the sampling options, and there are two of them. I'm going to touch on only the first one, Top-P. It acts like a filter or a window.

When the value is high, the window is larger, allowing more words in as candidates for the next word; conversely, when it's low, the window is narrow, so fewer words get in. This will become clearer when we go into a demo. Here's the animation. The probability is down here. The one in purple is the original probability, and the green one is the probability adjusted based on the temperature setting.

Right now, the temperature is high, so the distribution is even and closely matches the original probabilities. If I lower the temperature, you see the gap widens between words with only slightly different probabilities. Now for Top-P; remember, it acts like a filter. When the value is high, it includes more words as candidates to be selected as the next word. If I lower it, you see it narrows, so fewer words remain, and the highest-probability words are more likely to be selected. That is the demo. This demo was written by an AI coding assistant; I didn't write it.
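The animated demo isn't reproduced here, but the two knobs can be sketched in a few lines. The words and logit values below are made up for illustration; the mechanics (temperature-scaled softmax, then nucleus/Top-P filtering) follow the standard formulation:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Scale logits by 1/temperature, then normalize to probabilities.
    Low temperature widens the gap between high- and low-probability words;
    high temperature flattens the distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def top_p_filter(words, probs, p=0.9):
    """Keep the smallest set of words whose cumulative probability reaches p."""
    ranked = sorted(zip(words, probs), key=lambda wp: wp[1], reverse=True)
    kept, cumulative = [], 0.0
    for word, prob in ranked:
        kept.append(word)
        cumulative += prob
        if cumulative >= p:
            break
    return kept

words = ["horizon", "beach", "mountain", "city", "moon"]
logits = [2.0, 1.5, 1.0, 0.5, 0.1]

low_t = softmax_with_temperature(logits, temperature=0.5)
high_t = softmax_with_temperature(logits, temperature=2.0)

# At low temperature the top words dominate, so Top-P admits few candidates;
# at high temperature the distribution is flatter, so Top-P admits more.
print(top_p_filter(words, low_t, p=0.9))   # 3 candidates survive
print(top_p_filter(words, high_t, p=0.9))  # all 5 candidates survive
```

This mirrors the animation: lowering temperature widens the gaps (green bars), and lowering Top-P shrinks the candidate window.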

Argument number three: a learning curve that parallels learning a programming language. Prompt techniques are diverse and evolving. This report from OpenAI and Microsoft was published only four months ago. They essentially conducted a literature review of over 1500 prompt engineering related papers, and they assembled a taxonomy of the various techniques. The point here is, mastering prompt engineering is not just about using natural language. There are numerous techniques, and we as software engineers or prompt engineers must learn them and figure out how best to apply them to our specific use cases.

Also, these techniques are evolving and new ones are emerging. Unlocking AI reasoning capability has been a key area of interest recently, as most of you probably know. Two months ago, OpenAI introduced a new series of models called o1. They are designed to think before answering questions. Essentially, they can now perform what we call system 2 level thinking. What's important to ask is, how will this affect prompt engineering? A good way to think about it is the following.

Previously, interacting with an AI model to solve problems was like working with a senior software engineer. We had to be descriptive. We had to provide more guidance. Now, with these system 2 thinking models, we can adapt our approach and work with them like working with principal engineers. We can be less descriptive, provide less guidance, be more open-ended, and let the AI model infer our intent more autonomously, especially for use cases that require high reasoning capabilities. Along with AI model advancements, which are constantly coming, there are many innovations in the approaches and frameworks for building GenAI applications. You see here RAG, AI Agent, Agentic RAG.

At the center of these approaches is prompt engineering and the management of these prompts. They are extremely critical to the capability of GenAI applications, so they must be treated like code. Prompt versioning, testing, validation: these must be treated at the same level as production code. In summary, for the motion on knowledge and expertise: mastery of prompt engineering demands specialized skills and expertise comparable to learning a programming language. It requires understanding model behavior, crafting precise instructions, and continuous learning about the ecosystem to optimize the outcome.

Knowledge and Expertise (Against the Motion)

Let's hear from the other side to see what they say. Argument number one: a lower barrier to entry. For prompt engineering, the barriers to entry are far lower. Prompt crafting is more like trial and error than true expertise. It does not require years of formal education and technical training; we spent four years in college earning a CS degree. In fact, prompting is more intuitive, because people are already skilled at communicating in natural language from an early age. You see that prompt: there's no need to set up an environment or a compiler, or to install the necessary dependencies.

Compare that to Python: for even a simple Hello World, you have to know the syntax and how to properly use the print function, libraries, and such. Any third grader would likely struggle with that. In programming, remember this? Mastery of data structures and algorithms is fundamental to being a good software engineer. They must know how to choose the right data structure for the right problem, and they must learn not only what works, but also why it works and how to improve it. In contrast, prompt engineering operates at a much higher abstraction level.

There's no requirement to understand how the AI models store information, which makes prompt engineering easier to learn and apply. Argument number two: lack of abstraction mastery. If you've been doing software design for 10-plus years, I'm sure many of you will remember these two paradigms. They offer numerous benefits in the way we design, evolve, and maintain software, and mastering these paradigms requires months and years of learning and practicing on real-world projects. In contrast, prompt engineering requires less formal education. There is a lack of deep abstraction. Remember this? Another form of abstraction: software design patterns.

Again, mastering these complex abstractions requires very deep expertise and practice. In contrast, prompt engineering, while powerful, relies on natural language commands without requiring an understanding of these core computational concepts. This contrast illustrates the lower expertise required in prompt engineering. In summary, against the motion on knowledge and expertise: programming demands mastery of abstract concepts, all of which are essential for building scalable, reliable, and efficient software systems. Prompt engineering, while powerful, lacks the depth and complexity required in traditional programming.

Impact and Longevity (For the Motion)

Moving on to the last one, impact and longevity. Argument number one, for the motion: a paradigm shift in human-computer interaction, indicating significant impact. You see, this beautiful and complex image was generated from a human-computer interaction in the form of a prompt, but not just any prompt. The prompting style for this type of interaction is complex, detailed, and intricate, almost in the form of a DSL, like SQL or HTML. Furthermore, this prompting style employs a unique dialect, a descriptive language for the artistic style, the lighting, the perspective, in order to bring out the visual elements, the mood, the composition, and much more. If you had to reverse engineer the prompt that created this image, what do you think it would look like? Let me read you the prompt. “In the distance, a building with Japanese inspired architecture is perched on the lake. Golden hours illuminate blooming cherry blossom trees around a pond.

In the pond, a group of people enjoying the serenity of the sunset in a row boat. A woman underneath a cherry blossom tree is setting up a picnic on a yellow checkered blanket”. Try to write that prompt. Next one: meta-prompts, prompts for prompts. Have you ever wanted to ask an AI model to help solve a very challenging problem, and ended up staring at a blank prompt for a while? There's a technique for that, called meta-prompting, which can be thought of as a higher-level abstraction of prompting, where the goal is to generate prompts that elicit specific responses from the AI model, rather than asking the model to perform the task itself. The benefits it provides include speeding up the process of coming up with effective prompts and reducing the cognitive load of crafting a complex prompt.

Compared to programming, this is very similar to metaprogramming, or languages like the 4GLs that you might have heard of or used in the past. Let's do a quick demo to see what that looks like. I call this Prompt Buddy for Software Engineers. You can select a topic; let's go with team collaboration. We describe a challenge; maybe it's a smooth handoff between development and QA. Has anyone encountered that? We click this button, and it comes back with a prompt that we can now copy and paste into ChatGPT, or Claude, or whatever, to get the actual response that we're looking for. It's really powerful in a company where you have common sets of tasks and want to establish best practices for coming up with prompts.
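The core of such a tool can be sketched in a few lines: a meta-prompt asks the model to write a prompt rather than to perform the task. The wording and function name below are illustrative, not from the actual Prompt Buddy demo:

```python
def meta_prompt(topic, challenge):
    """Build a prompt that asks the model to write a prompt,
    rather than to solve the underlying task itself."""
    return (
        "You are an expert at writing prompts for AI assistants. "
        f"Write a detailed, well-structured prompt that a software engineer "
        f"could use to get help with the following {topic} challenge: "
        f"{challenge} "
        "Return only the prompt itself, with no extra commentary."
    )

# The result is pasted into ChatGPT, Claude, etc. to get the actual answer.
print(meta_prompt(
    "team collaboration",
    "ensuring a smooth handoff between development and QA.",
))
```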

Argument number two: democratized, accessible development. There has been an explosion of AI coding assistants in the last six months, and I'm sure many of you are aware of this. The pace is mind-blowing. There's a belief that in the next few years we will see a significant jump in developer productivity, similar to the introduction of high-level programming languages and syntax highlighting in the past. Programming will still exist as a profession in the next five years; however, it's going to change significantly. What's really interesting is that these AI coding assistants are improving themselves. They actually write code for their own next release. Very fascinating.

This is a post from a well-known AI researcher, one of the co-founders of OpenAI. This is a glimpse of what he said. “Over the last few days, most of my programming is now writing English, prompting and then reviewing and editing the generated diff. I still don’t think I got sufficiently used to all the features. It’s a bit like learning to code all over again, but I basically cannot imagine going back to unassisted coding at this point, which was only possible just about three years ago”. The last argument: prompts are the interface of the future. There's no doubt that software engineering is on the cusp of a profound transformation. It's going to change the way we do software engineering, the way we conceptualize the very nature of building software and collaboration.

There are numerous discussions about the trajectory of where we are going: starting with a coding assistant that's ever-present, 24/7, a team member with full knowledge of your code base, capable of participating in discussions and answering questions; then AI agents that can take on more active roles in the development process, but still under the guidance of human engineers.

One day, we're going to get to autonomous virtual employees that can take on full roles within the organization, comparable to software engineers. I don't know when we're going to get there, but I think that's the path we're on. In summary, for the motion on impact and longevity: prompt engineering will have a lasting impact and evolve similarly to programming languages. As AI becomes integral across industries, prompt engineering will become an indispensable skill. The move toward formalizing the discipline through best practices, research efforts, and educational resources supports its longevity.

Impact and Longevity (Against the Motion)

Let's hear from the other side. Argument number one: trends, not transformation. What's the commonality between the technologies on this slide? Do we still use any of these? At one point in the past, these technologies were quite popular. Some of them were hailed as the universal data format or the universal protocol. Prompt engineering might seem revolutionary now, but it's more akin to a trend. The techniques are likely to evolve or be subsumed into other interfaces, whereas programming languages have stood the test of time. Argument number two: AI will automate itself. This is an interesting one. As AI models become more advanced, the need for prompt engineering might diminish, because the models become better at interpreting vague and ambiguous instructions.

Imagine a world where AI coding assistants notice that a developer often uses a specific coding style and automatically apply it to new code without being prompted each time. This could render prompt engineering a temporary skill and not a long-term paradigm shift. The last argument: software development still needs programming. While prompt engineering is useful, the core of software development still relies on traditional programming languages for important scenarios: performance, scalability, reliability. High-performance applications need finely-tuned code that only skilled programmers can provide.

Complex systems, like operating systems, still need traditional programming and cannot be built with prompts alone. Prompts might augment development, but they don't replace the fundamental need for coding. In summary, against the motion in this area: prompt engineering might serve as a useful supplement, but it does not replace the foundational need for coding expertise. Programming languages persist and evolve due to their ability to provide precision, control, and scalability for building complex systems.

Closing Arguments

There you have the arguments from both sides for those three areas. Now we come to the closing arguments for both sides. For the motion: prompt engineering is the new programming language of the future. Like traditional programming, prompting will require careful structure, precision, and an understanding of the underlying system. The flexibility of natural language allows for unprecedented creativity and rapid prototyping, enabling real-time interactivity that traditional coding simply cannot match.

In a world where speed, accessibility, and adaptability matter, prompt engineering has already revolutionized the way we work and innovate. It is the language for everyone, whether you're a coder or not, making it the logical evolution of programming. The future of automation is about what you want to accomplish, not how you code it. Vote for progress. Vote for prompt engineering as the new programming language. Remember all those key points?

Let's hear from the other side. While prompt engineering is a powerful tool, calling it a new programming language is a stretch. Programming languages have strict syntax, defined rules, and scalability that prompt engineering simply cannot offer. Prompt engineering lacks a formal grammar, and its probabilistic outputs make it unreliable for mission-critical and large-scale systems. Let's not confuse accessibility with expertise. While prompts help automate tasks and engage creativity, they are no substitute for the depth and precision of traditional programming languages. The distinction matters. Vote for clarity. Vote against equating prompt engineering with a true programming language. Those are the closing arguments for both sides.

 
