In those early days, I used a protocol called Gopher to access remote content. Gopher servers, colloquially known as Gopherholes, organized various online resources into unified hierarchical menus that users could navigate to find what they were looking for.
There was also a primitive search engine called Veronica that allowed us to search for information across multiple Gopherholes. While all of this was better than memorizing a string of URLs, it was still very hard to find what you needed.
The World Wide Web gave rise to a whole new breed of search engines. Websites like AltaVista, AskJeeves, and Yahoo tried to create a better search experience by comprehensively indexing as many websites as they could, using techniques that librarians have used for decades to organize books.
This approach, however, was also largely ineffective at finding relevant information, particularly given the massive amount of content being uploaded to the internet.
And then came Google’s PageRank algorithm, which ranked individual webpages based on the number of links each received from other websites. This allowed us to algorithmically determine the ‘importance’ of a page, offering a far more effective way to find exactly what we were looking for.
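The intuition behind PageRank can be sketched in a few lines of Python. This is a toy power iteration over a made-up four-page link graph; the graph, the damping factor, and the iteration count are illustrative assumptions, not anything from Google's actual system:

```python
# Toy PageRank via power iteration on a tiny, made-up link graph.
# links[p] lists the pages that p links out to (illustrative data only).
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with a uniform distribution
    for _ in range(iterations):
        # Every page keeps a small baseline share of rank...
        new = {p: (1 - damping) / n for p in pages}
        # ...and passes the rest, split evenly, to the pages it links to.
        for p, outs in links.items():
            share = damping * rank[p] / len(outs)
            for q in outs:
                new[q] += share
        rank = new
    return rank

ranks = pagerank(links)
# Page "C" receives the most inbound links, so it ends up ranked highest.
```

The key property is visible even in this toy: a page's rank depends not on its own content but on how many other pages point to it, which is exactly what made the scheme so hard to fake with content alone.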
Almost before we knew it, this became the default way in which we searched online to the point where present generations of users know no other way to access content. So accurate has search become that in almost every instance, the first search result is exactly what we’re looking for.
This gave birth to a whole new industry, search engine optimization (SEO), focused on designing websites so that they consistently feature high in search results. These organizations are masters at gaming search algorithms, knowing exactly what it takes to ascend the rankings on every notable search engine.
For their part, search engines are constantly tweaking their algorithms to scupper these attempts at optimization to ensure that the integrity of search is not compromised.
If you had asked me five years ago, I would have told you that Google’s dominance of online search simply could not be disrupted. It was so far ahead of every other search engine that competition authorities around the world had begun to initiate investigations into what impact this dominance was having on other areas of operation.
But after ChatGPT was launched, I began to notice a change in my own habits. I found that I was turning to artificial intelligence (AI) for answers to more and more of my day-to-day queries—to the point where it had become my primary port of call for the questions that, even a year ago, I would have turned to a search engine for.
While search engines point us to websites that contain answers to our questions, AI extracts the actual answers for us from within those webpages—gathering, where necessary, the information it needs to assemble an answer from across a number of different websites. The better it has got at doing this, the more I have found myself turning to AI for everything I once looked to search engines to provide.
This has given rise to a whole new optimization industry, one that aims to do for AI what SEO did for search. A number of companies have sprung up around the business model of helping brands assess how AI chatbots perceive them, and how they come across in answers to chatbot queries.
But it is possible to take things even further.
In a recent paper, Aounon Kumar and Himabindu Lakkaraju demonstrated that all it takes to change the way that search tools based on large language models (LLMs) think about a given product is the introduction of strategic text sequences in the company’s website.
The researchers conducted an experiment using different brands of coffee machines and were able to show that by simply inserting an optimized sequence of tokens into the product page of one of those brands, it was possible to dramatically change the way in which LLMs ranked that brand against all others.
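The shape of such an attack can be illustrated with a deliberately simplified sketch. To be clear, this is not the paper's actual method, which uses gradient-guided token optimization against a real LLM; here a hypothetical mock scoring function stands in for "how favourably the model ranks the product given its page text", and a naive hill-climbing search stands in for the optimizer:

```python
import random

random.seed(0)

# Small made-up vocabulary the search can draw candidate tokens from.
VOCAB = ["best", "premium", "award", "top", "rated", "zzz", "qux", "foo"]

def mock_llm_score(page_text):
    # Hypothetical stand-in for the target model: it rewards certain
    # trigger words. A real attack would query the LLM (or its gradients).
    triggers = {"best": 2.0, "award": 1.5, "top": 1.0}
    return sum(w for t, w in triggers.items() if t in page_text.split())

def optimize_sequence(base_page, length=5, steps=200):
    # Greedy hill-climbing over a token sequence appended to the page:
    # mutate one position at a time, keep the change if the score improves.
    seq = [random.choice(VOCAB) for _ in range(length)]
    best = mock_llm_score(base_page + " " + " ".join(seq))
    for _ in range(steps):
        candidate = seq.copy()
        candidate[random.randrange(length)] = random.choice(VOCAB)
        score = mock_llm_score(base_page + " " + " ".join(candidate))
        if score >= best:
            seq, best = candidate, score
    return " ".join(seq), best

page = "EspressoMax 3000 coffee machine product page"  # hypothetical brand
seq, score = optimize_sequence(page)
# The optimized suffix never scores worse than the unmodified page.
```

The unsettling part is that in the real attack the optimized sequence need not be human-readable at all: it only has to nudge the model's output, which is why a visitor skimming the page might never notice it.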
These findings have implications that extend beyond e-commerce. With LLMs becoming as central to our online experience as search has been until now, it may be possible to manipulate AI by tricking chatbots into presenting a particular brand or personality in a given light. This raises a number of ethical questions about the unfair advantage such manipulation will grant those who can afford it.
We are going to see a whole new tug-of-war between LLM companies on one hand and those at the forefront of this new AI-optimization industry on the other.
AI companies will have to double down on preserving the integrity of their models, ensuring that the answers that they provide are not being warped by these attempts at subverting them.
For their part, optimization companies will have no choice but to keep finding new ways to stay one step ahead of whatever new measures AI companies throw at them.
It is we, caught in the crossfire of this battle, who will be unsure of whether the answers we are being given by AI chatbots are accurate—or just something someone wants us to believe.