Learning how to use AI tools through plots and examples from science fiction movies.
According to a July 2023 survey published by Forbes, 77% of people fear they could lose their jobs to artificial intelligence within the next 12 months. It is a striking demonstration of people's propensity for irrational fears, especially after decades of being frightened by stories about Skynet, TechnoCore, and similar soulless, but invariably bloodthirsty, AIs.
In reality, there are no streams of ex-employees leaving their offices with boxes of personal belongings in their hands. Among our colleagues, acquaintances, and clients, there is not a single person who has fired an employee, or lost their own job, because artificial intelligence replaced them. On the contrary, there is a persistent shortage of qualified personnel, including people who are able to effectively find and use various AI applications in their work.
There is, however, a very real problem that we want to dedicate this article to. While working on our Pitch Avatar, we have more than once encountered complaints about the difficulties of working with AI tools. In short, they boil down to the fact that many users have found their expectations of AI to be unrealistic.
“I thought it would be smarter,” one of our acquaintances remarked irritably, sharing his experience with a popular AI chatbot he had used as a text generator and editor. And he is, to repeat, not alone.
The notorious “hallucinations”, errors, repetitions, and banalities are only part of the problem. Much worse is the fact that people often fail to find a “common language” with AI. An application that purports to be trained to understand natural speech nevertheless struggles to grasp what users want from it, interpreting their requests in its own way time after time.
In turn, the human user is repeatedly frustrated in their attempts to tell the AI what they want… Have you experienced any of the above? If so, welcome aboard. In search of answers and solutions to these problems, we embark on a journey through the plots of sci-fi movies.
The Brute Force Method
To begin with, while seemingly similar, AI-based tools, even those designed to accomplish the same goals, are still different. You shouldn’t conclude that AI can’t handle your tasks based on one or two solutions, even if they all look the same to you. Try to act like Detective Del Spooner from Alex Proyas’ movie I, Robot.
He believed in the existence of a unique robot among serial models, persistently searched for it, and managed to find it. Moreover, in the end, this robot turned out to be the tool that helped the detective accomplish a supremely difficult task: defeating the out-of-control artificial intelligence V.I.K.I. So be persistent. If you must, explore the myriad of possibilities and try dozens of different AI tools in your work. Almost certainly you will find one to your liking.
Exclusion Method
Imagine that you have formulated a task for your AI (for example, to find a particular article or create some content), and it “mined” or generated something that technically meets your request but is not what you wanted. At the same time, for various reasons, perhaps because you lack sufficient information, you can’t clarify or reformulate your query. What should you do?
Skynet and the Terminator in James Cameron’s movie found themselves in exactly this situation. Remember, they didn’t know exactly which Sarah Connor from Los Angeles they needed. So the Terminator decided to “visit” every Sarah Connor listed in the Los Angeles phone book, one after another.
Makes sense, doesn’t it? This is not a bad approach when the AI responding to your query is not giving you the result you want. If you have time, simply be patient and methodically repeat your query, specifying that the previous results were not suitable. Sooner or later, the AI will hit the target.
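For those who query an AI programmatically, here is a minimal sketch of this “exclusion” loop in Python. It assumes a hypothetical ask_ai function standing in for whatever chatbot or API you actually use, and a looks_right check that represents your own human judgement; neither is a real, ready-made library call.

```python
# A minimal sketch of the "exclusion method": keep re-asking, telling the AI
# which earlier answers missed the mark. Both helpers below are placeholders.

def ask_ai(prompt: str) -> str:
    """Placeholder: send the prompt to your AI tool of choice and return its answer."""
    raise NotImplementedError

def looks_right(answer: str) -> bool:
    """Placeholder: your own (human) check of whether the answer fits the task."""
    raise NotImplementedError

def exclusion_method(task: str, max_attempts: int = 10) -> str | None:
    rejected: list[str] = []
    for _ in range(max_attempts):
        prompt = task
        if rejected:
            # Tell the AI explicitly what was already tried and ruled out.
            prompt += "\n\nThe following earlier answers were NOT suitable; give me something different:\n"
            prompt += "\n---\n".join(rejected)
        answer = ask_ai(prompt)
        if looks_right(answer):
            return answer          # the Terminator finally found the right Sarah Connor
        rejected.append(answer)    # rule this one out and try again
    return None                    # ran out of attempts (or patience)
```

The design choice is the same as the Terminator's: instead of guessing the perfect query up front, you methodically rule out what didn't work and feed that back into the next attempt.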
A Method of Checking for Complexity and Contradictions
Anyone who has worked with AI has encountered the proverbial machine hallucinations and delusions: text generators writing incoherent nonsense, AI analytics showing calendar data from a century ago, universal AI assistants creating empty references and inventing quotes from non-existent people, and image generators offering a maze of disfigured people, skewed buildings, and insane landscapes.
However, before you rage at your AI assistants, consider why this is happening. Let’s reveal a terrible secret: artificial intelligence isn’t trying to piss you off. It’s trying to please you by producing the most satisfying result it can. But every AI is limited in resources, time, and skill, which is why it always tries to follow the path of least resistance. And if a task exceeds its capabilities for one reason or another, it starts to “rave”, simplifying the task until it fits into the Procrustean bed of its skills and resources.
Think of the classic example from Stanley Kubrick’s 1968 movie 2001: A Space Odyssey. In it, HAL 9000, a very smart AI installed aboard a spaceship, was given a task that contradicted its basic programming. As a result, it “went mad” and decided to eliminate the contradiction by destroying the crew…
When faced with signs of “madness” in your AI tools, consider whether your task is ambiguous or contradictory. Can it be formulated more clearly, more specifically, and as a result, more simply? Keep in mind that “simpler” does not always mean “shorter”. Sometimes, to make a task unambiguous, you have to spend not fewer words on clarification, but more. Compare the two prompts below.
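As a made-up illustration (both prompts are invented, not taken from any real project), here is a contradictory task next to an unambiguous version of it. The second prompt is longer, yet “simpler” for the AI, because it no longer has to guess which of two conflicting instructions matters more.

```python
# A hypothetical contradictory prompt: "very short" and "every detail" pull in
# opposite directions, so the AI has to quietly drop one of them.
contradictory_prompt = (
    "Write a very short, exhaustive report covering every detail of our Q3 results."
)

# The unambiguous version is longer, but leaves nothing to guess.
unambiguous_prompt = (
    "Write a one-page summary of our Q3 results.\n"
    "Cover only revenue, costs, and headcount.\n"
    "Leave out everything else; a separate full report will follow."
)
```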
A Method for Separating the Major from the Minor
Another common problem is that AI focuses on secondary tasks rather than primary ones. Let’s say you need a text about otters living in a lake, and you instruct the AI writer to generate it. But as a result, it produces a narrative of which 90% is dedicated to the lake itself and only 10% to the otters. And the story about the lake, in its turn, is full of details that have nothing to do with nature at all, such as the human settlements located on its shores and their history.
Something similar is demonstrated by movie characters such as the protocol droid C-3PO from the Star Wars saga or the android Data from the Star Trek: The Next Generation series. They would occasionally start sharing information not directly related to the problem they had been asked to solve, and the other characters were often forced either to clarify the task at hand or simply to interrupt the AI’s stream of consciousness.
From these examples, it is clear that when formulating tasks for AI, one should clearly prioritize them. A good example is the straightforward order given to the android Ash in Ridley Scott’s movie, Alien. Ash was tasked with ensuring the delivery of an alien organism to Earth. It was explicitly specified that the survival of the crew of the ship Nostromo, which included him, could be neglected. Though Ash himself even admitted to sympathizing with the humans he served with, orders were orders and he followed them diligently.
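Here is one hypothetical way to state priorities in the prompt itself, reusing the otters-and-lake example from above. The wording and proportions are our own invention; the point is simply that an explicit ordering leaves the AI far less room to wander off toward the shoreline and its history.

```python
# A hypothetical prompt that states priorities explicitly, so the AI writer
# knows what is major (the otters) and what is minor (the lake).
prioritized_prompt = (
    "Write an 800-word article about otters living in a lake.\n"
    "Primary focus (roughly 90% of the text): the otters themselves, "
    "including their behavior, diet, family life, and hunting habits.\n"
    "Secondary focus (roughly 10%): the lake, but only as the otters' habitat.\n"
    "Do not cover the human settlements on the shore or their history."
)
```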
Human-AI System, or the Inevitable “42”
It is time for the main conclusions of this text. Let’s ask ourselves: why, for all their logic, did Skynet and the Terminator, HAL 9000, Ash, and many other movie-villain AIs fail? It goes without saying that “because the script said so” is not an acceptable answer. The correct answer is that all of these AIs opposed humans instead of cooperating with them. And here lies the core of any artificial intelligence’s basic philosophy: it is not designed to work independently, but rather to interact with humans.
If you will, any artificial intelligence is part of a “Human-AI” system; outside of this system, it is incomplete. For AI to work effectively, it will always need people to give it tasks, as well as to edit, correct, and refine the results of its work. Consequently, as artificial intelligence develops, we will constantly need to develop our own skills in working with it.
AI mistakes are inevitable, because mistakes are inevitable in humans too. It is important to keep in mind that AI is still learning and developing, and, as we know, it is impossible to travel this path without mistakes and failures. That’s why we should always be ready to hear from AI the answer Deep Thought gave in The Hitchhiker’s Guide to the Galaxy: “42”.
The article was created in collaboration with Andriy Tkachenko.
Featured image created with Bing.