The future of AI and prompt engineering

GUEST OPINION: Over the last year, artificial intelligence and machine learning have entered mainstream conversation. Discussion is no longer restricted to AI industry experts; these technologies are being talked about and used in innovative ways by organisations and individuals alike.

Generative AI, particularly ChatGPT, has been widely adopted across businesses to help people plan and produce written documents, presentations, and emails. While generative AI isn’t a new tool, its entry into everyday life means it’s often touted as the next technology to revolutionise how humanity lives and works.

But like any tool, AI needs to be used correctly to be effective. In the case of ChatGPT, that means knowing how to phrase prompts and queries to get the most effective and contextually accurate response from the AI.

This has led to the rise of the prompt engineer – something the World Economic Forum has described as one of the AI jobs of the future. The role of a prompt engineer is to ensure that generative AI is given the most effective prompt to achieve a specific outcome accurately and repeatably. Prompt engineers leverage generative AI’s capacity for in-context learning, i.e. its ability to draw on the information in the prompts it has recently been given. Ultimately, the better the prompts, the better the outputs.
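
To make in-context learning concrete, here is a minimal, hypothetical sketch in Python: the prompt itself carries a couple of worked examples that steer the model’s answer. The classification task, template, and helper name are illustrative assumptions, not anything from the article.

```python
# Hypothetical illustration of in-context (few-shot) prompting: the examples
# embedded in the prompt steer the model's output; no model weights change.

FEW_SHOT_PROMPT = """Classify the sentiment of each email subject line.

Subject: "Thanks for the quick turnaround!"
Sentiment: positive

Subject: "Still waiting on last month's invoice"
Sentiment: negative

Subject: "{subject}"
Sentiment:"""


def build_prompt(subject: str) -> str:
    """Fill the template with the new case the model should label."""
    return FEW_SHOT_PROMPT.format(subject=subject)


# The completed prompt would be sent to whichever generative AI service is in use.
print(build_prompt("Great presentation yesterday"))
```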

According to industry reports, the salary for a Midjourney prompt engineer in the US ranges from $120,000 all the way up to $350,000 a year. With prompt engineers commanding that kind of salary, it might seem like a lucrative career in AI, an industry set to grow rapidly in the coming years.

But it might not be a long-term career option. If we need to rely on highly skilled employees who attract very generous remuneration packages to extract value out of generative AI, many businesses may look to cheaper and more effective AI solutions that better suit their individual use cases. For example, in cyber security – where accuracy of output is essential and the workforce is already scant – generative AI is limited in its ability to provide investigative support. In this instance, cyber security is better served by other AI models which utilise statistical probability approaches, like recursive Bayesian estimation, to estimate the likelihood that activity is malicious or unwanted.
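
For illustration only, and not a description of any vendor’s actual product: the sketch below shows the kind of recursive Bayesian update mentioned above, where each new observation about a device refines the estimated probability that its behaviour is malicious. The observation likelihoods are invented for the example.

```python
# Illustrative recursive Bayesian estimation: repeatedly apply Bayes' rule so
# each new observation updates the probability that activity is malicious.

def bayes_update(prior: float, p_obs_given_malicious: float, p_obs_given_benign: float) -> float:
    """Return P(malicious | observation) from the prior and the two likelihoods."""
    numerator = p_obs_given_malicious * prior
    evidence = numerator + p_obs_given_benign * (1.0 - prior)
    return numerator / evidence


# Made-up likelihood pairs (P(obs | malicious), P(obs | benign)) for three observations.
observations = [
    (0.60, 0.20),  # unusual outbound connection
    (0.70, 0.30),  # login at an atypical hour
    (0.90, 0.05),  # large transfer to a rarely seen external host
]

belief = 0.01  # prior probability that the device is compromised
for p_mal, p_ben in observations:
    belief = bayes_update(belief, p_mal, p_ben)
    print(f"updated P(malicious) = {belief:.3f}")
```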

At its core, AI should be embraced as a tool that uplifts humans by automating functions for the benefit of human operators – it should not take their jobs away, but make them better at those jobs, tackling challenges that would otherwise be disproportionately hard. At Darktrace, we have always developed AI with this goal in mind: to augment human productivity by doing the heavy lifting, freeing humans from tasks like trawling through data logs and triaging cyber incidents, which grow in frequency and complexity by the day.

Where prompt engineers are experts in writing in a way the AI can understand and action, we built our Cyber AI Analyst to do the opposite: we trained it on the work of human cyber security analysts, so that it now takes the AI’s understanding and makes it understandable and actionable for humans. This is the direction we think AI should be moving in: AI and humans dividing up the parts of the job where each excels, rather than AI trying to replace whole jobs, which in many instances demonstrates a misunderstanding of how AI fundamentally works.

There will always be a trust journey to navigate with the introduction of these new technologies. We found this when we brought Autonomous Response AI to the cyber security market: human teams adopting the technology often began in “Human Confirmation Mode”, allowing them to confirm the actions the AI wished to take in response to an emerging threat before letting it act. But over time, these users have come to trust the AI’s decisions and switched to “Autonomous Mode”, letting the AI take action to contain threats even when humans aren’t around.

Similarly, we might expect the prompt engineering hype to be a short-lived phenomenon. The human element remains an intrinsic part of generative AI: it is powered by Reinforcement Learning from Human Feedback (RLHF) and needs a constant stream of human creativity to keep itself from becoming an echo chamber of old ideas. Even so, we might see the tables turn as generative AI self-improves, taking what it has learned from prompt engineering and other human guidance and using it to direct itself, or to interview its human counterpart instead.

This will be complicated by factors such as the potential unreliability of Large Language Models (LLMs) and the problem of hallucinations, where an LLM states something as fact when the information it offers is wrong or made up; the limitations of static training data; and the ‘temperature’ setting, which controls how much outputs can vary for the same prompt.
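
As a rough sketch of what the temperature setting does under the hood (simplified, and independent of any particular vendor’s API): candidate-token scores are divided by the temperature before being turned into probabilities, so a low temperature concentrates probability on the top choice and a high temperature spreads it out, making outputs vary more from one run to the next.

```python
import math


def softmax_with_temperature(logits: list[float], temperature: float) -> list[float]:
    """Turn raw scores into probabilities, scaled by the sampling temperature."""
    scaled = [score / temperature for score in logits]
    peak = max(scaled)                          # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]


logits = [2.0, 1.0, 0.5]                        # made-up scores for three candidate tokens
for t in (0.2, 1.0, 2.0):
    probs = [round(p, 3) for p in softmax_with_temperature(logits, t)]
    print(f"temperature={t}: {probs}")
```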

These are factors that will need to be navigated as the generative AI boom continues, but ultimately the promise of AI is to make things easier and more accessible – not to create another white-collar job that is only slightly different to previous ones. If the methods underpinning generative AI do not evolve and it is not optimised to augment human creativity, leaving end users to correct its mistakes, we may see a decline in its use and the generative AI hype bubble may burst.
