Ever since its genesis with Mathematica in 1988, the Wolfram Language has been a go-to language for solving complex scientific problems. Wolfram Alpha, an answer engine built on this language, uses natural language processing, the world's largest repository of computable knowledge, and a custom-made symbolic programming language to provide answers to factual and mathematical questions.
These questions can range from high-level calculus to the number of calories in a given dish, and Wolfram Alpha will provide an answer while showing the steps to the solution. For all its computational intelligence, the website sometimes struggles to interpret queries phrased in natural language. With recent advances in natural language processing and chatbots, this forerunner of modern AI might change the landscape of NLP-powered problem solving.
Bringing together Wolfram Alpha and ChatGPT
In a recently published blog post, Stephen Wolfram, the founder and CEO of Wolfram Research, explored the idea of combining the capabilities of Wolfram Alpha and ChatGPT. Wolfram demonstrated ChatGPT's tendency to give factually incorrect answers that sound plausible, and highlighted the chatbot's ability to correct itself when prompted. He also showed off the capabilities of Wolfram Alpha and how it can be used to 'inject' data points into ChatGPT, which the bot then accepted as the correct response.
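That injection step is easy to picture as code. Below is a minimal sketch of the idea, assuming a Wolfram|Alpha App ID for the public Short Answers API and the pre-1.0 `openai` Python client; the helper names and the example question are illustrative, not taken from Wolfram's post.

```python
# Sketch: ground a ChatGPT answer in a fact fetched from Wolfram|Alpha first.
# Assumes WOLFRAM_APP_ID and OPENAI_API_KEY are set in the environment.
import os

import openai
import requests

openai.api_key = os.environ["OPENAI_API_KEY"]
WOLFRAM_APP_ID = os.environ["WOLFRAM_APP_ID"]


def wolfram_short_answer(query: str) -> str:
    """Ask the Wolfram|Alpha Short Answers API for a single plain-text result."""
    resp = requests.get(
        "https://api.wolframalpha.com/v1/result",
        params={"appid": WOLFRAM_APP_ID, "i": query},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.text


def ground_with_wolfram(question: str) -> str:
    """Inject Wolfram|Alpha's answer into the prompt before asking ChatGPT."""
    fact = wolfram_short_answer(question)
    prompt = (
        f"Wolfram|Alpha reports: {fact}\n"
        f"Using that result, answer the question: {question}"
    )
    chat = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return chat["choices"][0]["message"]["content"]


print(ground_with_wolfram("How far is Chicago from Tokyo?"))
```

In this pattern the language model never has to compute or recall the distance itself; it only has to phrase a result that Wolfram|Alpha has already worked out.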
To understand why these two vastly different applications could work so well together, we must first delve into the approach each takes to solving a problem. ChatGPT is built on GPT-3.5, a large language model with roughly 175 billion parameters trained on a vast corpus of text, from which it has learned to respond to natural-language prompts with coherent answers. In the process, the chatbot has picked up the patterns of human-like speech, along with the ability to translate a query in a human language into something a machine can work with.
On the other hand, Wolfram Alpha is built on the Wolfram Language, a symbolic programming language focused on expressing complex ideas in computational form. The language was made expressly to solve complex algebraic problems, and its latest iteration can take on higher-level tasks such as solving differential equations and manipulating matrices.
Looking at the approaches their creators have taken, the benefits of bringing the two together are obvious. ChatGPT excels at parsing natural language and making it computer-readable, while Wolfram Alpha is excellent at solving complex mathematical problems by breaking them down into the Wolfram Language's symbolic expressions. This union might even produce the accurate chatbot that the scientific community doesn't yet know it needs.
The chatbot scientists need
The scientific community has largely dismissed chatbots built on large language models, as seen in its negative response to Meta's 'Galactica' model. This short-lived LLM was launched with the grand goal of organising all scientific knowledge and making it accessible through a chatbot. Unfortunately, it had a propensity to hallucinate information, even though it was trained on close to 50 million scientific papers, and its public demo was pulled within days of launch.
While hallucination has yet to be solved even for ChatGPT and the underlying GPT models, OpenAI has done research on reducing the amount of misinformation the bot gives out. Owing to this, the bot declines queries it believes it does not have the information to answer, and blocks responses on sensitive topics such as hate speech and self-harm.
One place where this system falls apart is when ChatGPT is asked objectively factual questions, to which it confidently gives wrong answers. There are two reasons for this. The first is ChatGPT's dataset: GPT-3.5 was trained on information scraped from the internet, which leads to discrepancies on specifics like the distance between cities, population statistics, and similar figures.
In addition to a flawed dataset, the bot also struggles with mathematical calculations, since it is not trained to compute them but simply tries to 'solve' them through patterns in natural language. Both shortcomings can be addressed through Wolfram Alpha: the Wolfram Language's computational prowess handles the mathematics, and Wolfram's comprehensive knowledge base supplies the facts, leaving you with a search killer on your hands. Such a combination would not only be objectively accurate, it would also understand exactly what is being asked, covering the shortcomings of both ChatGPT and Wolfram Alpha. Stephen Wolfram, the creator of the language, said:
“There are all sorts of exciting possibilities, suddenly opened up by the unexpected success of ChatGPT. But for now there’s the immediate opportunity of giving ChatGPT computational knowledge superpowers through Wolfram|Alpha. So it can not just produce “plausible human-like output”, but output that leverages the whole tower of computation and knowledge that’s encapsulated in Wolfram|Alpha and the Wolfram Language.”
This combination might be the chatbot that the scientific community needs. With ChatGPT's safeguards against misinformed responses and Wolfram's mathematical might, we could see a chatbot that provides genuinely accurate information. The Wolfram Language is also a mainstay of the scientific community through 'Mathematica', further proving that these kinds of solutions do have a market. Instead of learning complex software like Mathematica, scientists could ask queries in natural language and get an accurate answer they can trust.
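As a rough sketch of what that workflow could look like, the snippet below (again assuming the Short Answers API and the pre-1.0 `openai` client, with illustrative function names) uses ChatGPT purely as a translator: it rewrites a plain-English question into a query Wolfram|Alpha can compute, so the computation itself never touches the language model.

```python
# Sketch: ChatGPT handles the natural language, Wolfram|Alpha handles the math.
# Assumes WOLFRAM_APP_ID and OPENAI_API_KEY are set in the environment.
import os

import openai
import requests

openai.api_key = os.environ["OPENAI_API_KEY"]
WOLFRAM_APP_ID = os.environ["WOLFRAM_APP_ID"]


def rewrite_for_wolfram(question: str) -> str:
    """Use ChatGPT only for language: compress the question into a computable query."""
    chat = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": "Rewrite the following question as a short Wolfram|Alpha "
                       "query, with no extra words:\n" + question,
        }],
    )
    return chat["choices"][0]["message"]["content"].strip()


def compute_with_wolfram(query: str) -> str:
    """Let Wolfram|Alpha do the actual computation via its Short Answers API."""
    resp = requests.get(
        "https://api.wolframalpha.com/v1/result",
        params={"appid": WOLFRAM_APP_ID, "i": query},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.text


question = "How many kilometres does light travel in one hour?"
print(compute_with_wolfram(rewrite_for_wolfram(question)))
```

The division of labour is exactly the one described above: the language model never does the arithmetic, and Wolfram|Alpha is handed a query it can parse unambiguously.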
The future of access to information will be decided by the success of chatbots like ChatGPT, but the bot we're seeing now is just the first step. There are still decades of iterative improvement waiting in the wings, and AI researchers stand to benefit from integrating existing solutions into their cutting-edge models to set the paradigm for the next generation of problem-solving systems.