With the surge of AI chatbots like ChatGPT and Microsoft’s Bing AI, companies are looking to keep their AI models up to date, ensuring they aren’t spitting out hallucinations, misinformation, and even creepy threats — all of which they’re prone to do in their current state.
The trend has even given birth to a new profession, “prompt engineering,” which involves nothing more than writing plain-text instructions to steer these chatbots toward relevant and trustworthy answers.
“The hottest new programming language is English,” Tesla’s former chief of AI Andrej Karpathy tweeted last month.
But does prompt engineering really amount to a reliable, long-term career choice, or is it the product of a fad that will be completely forgotten about in a matter of years? Experts are divided on the topic, as The Washington Post reported last week.
It’s an open secret that chatbots like OpenAI’s ChatGPT and Microsoft’s Bing AI are prone to make stuff up. They hallucinate, fictionalize, gaslight, and even make unprovoked accusations.
That’s where prompt engineers come in. They essentially try to identify problems and get the chatbot back on the rails — a fascinating new way of looking at human-computer interactions.
“I’ve been a software engineer for 20 years, and it’s always been the same: You write code, and the computer does exactly what you tell it to do,” British programmer Simon Willison, who has studied prompt engineering, told the WaPo. “With prompting, you get none of that. The people who built the language models can’t even tell you what it’s going to do.”
Karpathy described a prompt engineer as a kind of “large language model (LLM) psychologist” who can coax out the true capabilities of a given AI.
Companies behind these AI chatbots are placing a lot of value on the skill.
“Writing a really great prompt for a chatbot persona is an amazingly high-leverage skill and an early example of programming in a little bit of natural language,” OpenAI CEO Sam Altman tweeted last month.
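To make Altman’s point concrete, here’s a rough sketch of what “programming in a little bit of natural language” can look like in practice: a persona written as plain English and passed as a system message to a chat model through OpenAI’s Python client. The persona text, model name, and parameters below are illustrative, not drawn from any company’s actual prompts.

```python
# A minimal sketch of a persona prompt -- the "program" here is plain English.
# Assumes the openai Python package (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# The persona is defined entirely in natural language; editing this text
# changes the chatbot's behavior, which is the skill Altman is describing.
PERSONA_PROMPT = (
    "You are a patient technical support assistant. "
    "Answer only questions about the product, cite the manual section you relied on, "
    "and say 'I don't know' rather than guessing."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name; substitute any chat model
    messages=[
        {"role": "system", "content": PERSONA_PROMPT},
        {"role": "user", "content": "My device won't turn on. What should I check first?"},
    ],
    temperature=0.2,  # a lower temperature keeps the persona's answers more consistent
)

print(response.choices[0].message.content)
```

The entire “engineering” effort lives in that block of English text, which is exactly why skeptics question whether it constitutes a durable discipline.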
Not all experts are convinced that prompt engineering will be effective, though, considering the sheer unpredictability of what an AI chatbot will say.
“It’s not a science,” Shane Steinert-Threlkeld, an assistant professor in linguistics at the University of Washington, told the WaPo. “It’s ‘let’s poke the bear in different ways and see how it roars back.’”
As such, some say it’s just a fad that’ll soon fade away.
“I have a strong suspicion that ‘prompt engineering’ is not going to be a big deal in the long-term and prompt engineer is not the job of the future,” Wharton School entrepreneurship and innovation professor Ethan Mollick tweeted.
But that kind of pessimism isn’t stopping startups from capitalizing on the idea. As Insider reports, companies are already hiring employees tasked with prompting and fine-tuning LLMs. Some are even going so far as to buy prompts.
We’ve only begun to scratch the surface of our obsession with AI. As the industry grows at a breakneck pace, critics of the technology are starting to worry we could be looking at an “AI bubble” that’s doomed to burst — which could end up wiping out the prompt engineers it spawned along with it.
READ MORE: ‘Prompt engineering’ is one of the hottest jobs in generative AI. Here’s how it works. [Insider]
More on AI chatbots: Mark Zuckerberg Bows to Peer Pressure, Announces Pivot to AI