
AI May Kill the Job Everyone Thought It Would Create

  • Prompt engineering looked like the hottest job in tech amid the generative-AI boom.
  • People in that job write text that can produce optimal results from tools such as ChatGPT.
  • But researchers are finding that AI can be trained to do that job, too.


Could prompt engineering be on the list of jobs that AI will kill next?

Prompt engineers write input data, often a block of text, that can produce a desired result from generative-AI tools such as ChatGPT. And for a brief moment, it looked like the next trendy tech job amid the boom of artificial-intelligence chatbots.

Some companies were offering six-figure salaries for the job, sparking concerns that it would even replace the coveted software-engineer role.

But it turns out, AI might be able to handle prompt engineering, too.

Researchers at VMware, a Palo Alto, California, cloud-computing company, found that large language models were more than capable of writing and optimizing their own prompts.

In their paper, “The Unreasonable Effectiveness of Eccentric Automatic Prompts,” Rick Battle and Teja Gollapudi set out to quantify the impacts of “positive thinking” prompts, which are almost exactly what they sound like.

Experience has shown, the researchers wrote, that prompts written with positivity or optimism can sometimes yield better-quality results out of generative-AI tools. For example, instead of simply writing a command for the LLM, a positive-thinking prompt could include messages such as, “This will be fun,” or, “Take a deep breath and think carefully.”

However, the researchers found that what’s more effective and less time-consuming was simply asking an LLM to optimize the prompts itself, which the study referred to as “automatically generated prompts.”

“Improving performance, when tuning the prompt by hand, is laborious and computationally prohibitive when using scientific processes to evaluate every change,” the researchers wrote, adding: “It’s undeniable that the automatically generated prompts perform better and generalize better than hand-tuned ‘positive thinking’ prompts.”
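The core idea — generate candidate prompts, score each one against a benchmark, and keep the best — can be sketched in a few lines. This is an illustrative toy only: in the paper's actual setup, an LLM both proposes the candidate prompts and is evaluated on benchmark tasks to score them, whereas here a fixed candidate pool and a stub heuristic scorer stand in for those model calls (all names and the scoring rule are hypothetical).

```python
# Toy sketch of automatic prompt optimization: score candidate prompts
# and return the best one. Stand-ins replace the real LLM calls.

CANDIDATES = [
    "Solve the problem.",
    "Take a deep breath and think carefully.",
    "This will be fun! Work through it step by step.",
]

def score_prompt(prompt: str) -> float:
    """Stand-in for benchmarking a prompt (the real version would run
    the LLM on evaluation tasks and measure answer accuracy)."""
    # Hypothetical heuristic: reward step-by-step phrasing and detail.
    return len(prompt) + (10 if "step" in prompt else 0)

def optimize(candidates: list[str]) -> str:
    """Return the highest-scoring candidate prompt."""
    return max(candidates, key=score_prompt)

if __name__ == "__main__":
    print(optimize(CANDIDATES))
```

The real optimization loop would also have the LLM rewrite the current best prompt to produce new candidates each round, rather than drawing from a fixed list.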

Battle and Gollapudi did not immediately respond to a request for comment.

The paper also pointed to another study, led by a Google DeepMind researcher, Chengrun Yang, who similarly found that an LLM could “outperform human-designed prompts.”

VMware researchers even found that LLMs could be quite creative in producing the best prompts.

One example provided in the study was text written by a machine-learning model that sounded like something out of a “Star Trek” episode.

“Command, we need you to plot a course through this turbulence and locate the source of the anomaly. Use all available data and your expertise to guide us through this challenging situation,” the prompt said, according to the study.

The text was the “highest-scoring optimized prompt” generated by one of the LLMs used in the study.

“They diverge significantly from any prompts we might have devised independently,” the researchers wrote of the prompt. “If presented with these optimized prompts before observing their performance scores, one might have anticipated their inadequacy rather than their consistent outperformance of hand-tailored prompts.”

In some ways, tools including ChatGPT already automatically rewrite a user's prompt behind the scenes to produce what the system determines is the best output.

On a recent episode of The New York Times’ tech podcast “Hard Fork,” the tech journalist Casey Newton talked about how ChatGPT transforms a user’s prompt in the background as it churns out a result. Users then have the ability to see how the LLM reinterpreted their prompt.

“It’s a really interesting product question because speaking on the ChatGPT side, I can tell you that thing is much better at writing prompts than I am,” Newton said. “To me, this totally blew away the concept of prompt engineers, which we’ve talked about on the show.”

Though research has found promising performance from prompt optimizers, some experts say they won’t immediately kill off prompt-engineering jobs.

Tim Cramer, the senior vice president of software engineering at Red Hat, which makes open-source software, told IEEE Spectrum magazine that the generative-AI industry was constantly evolving and would continue to need humans involved in the process.

“I don’t know if we’re going to combine it with another sort of job category or job role,” Cramer told the magazine. “But I don’t think that these things are going to be going away anytime soon. And the landscape is just too crazy right now. Everything’s changing so much. We’re not going to figure it all out in a few months.”
