Got It AI said it has developed AI to identify and address ChatGPT “hallucinations” for enterprise applications.
ChatGPT has taken the tech world by storm by showing the capabilities of generative AI, which can enable ordinary folks to prompt AI to generate a wide variety of things, from computer programs to original songs.
Some of those creations are remarkable. But ChatGPT's weakness is its error rate. Peter Relan, cofounder of the conversational AI startup Got It AI, said in an interview with VentureBeat that enterprises deploying conversational AI on their knowledge bases cannot afford chatbots that are wrong 15% to 20% of the time. I reproduced that kind of error rate easily with a few simple prompts to ChatGPT.
Relan calls ChatGPT’s wrong answers “hallucinations.” So his own company built a “truth checker” to identify when ChatGPT is hallucinating (generating fabricated answers) while answering questions about a large set of articles or other content in a knowledge base.
He said this innovation makes it possible to deploy ChatGPT-like experiences without the risk of providing factually incorrect responses to users or employees. Enterprises can use the combination of ChatGPT and the truth checker to confidently deploy conversational AIs that leverage extensive knowledge bases such as those used in customer support or for internal knowledge base queries, he said.
The autonomous truth-checking AI, provided with a target domain of content (e.g. a large knowledge base or a collection of articles), uses an advanced large language model (LLM)-based AI system to train itself autonomously, with no human intervention, for a single task: truth checking.
ChatGPT, provided with content from the same domain, can then be used to answer questions in a multi-turn chat dialog, and each response is evaluated for truthfulness before being presented to the user. Whenever a hallucination is detected, the response is withheld; instead, the user is given references to relevant articles that contain the answer, Relan said.
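Got It AI has not published its interface, but the flow Relan describes can be sketched roughly as follows. Everything here is an illustrative assumption: the function names (`generate`, `verify`, `search`) and the gate-then-fallback logic are stand-ins for the company's actual system, not its real API.

```python
from dataclasses import dataclass, field

@dataclass
class CheckedReply:
    text: str                       # what the user ultimately sees
    is_grounded: bool               # True if the truth checker accepted the answer
    references: list = field(default_factory=list)  # fallback article links

def truth_checked_reply(question, dialog_history, knowledge_base,
                        generate, verify, search):
    """Hypothetical sketch of the pipeline described in the article.

    generate: LLM call (e.g. ChatGPT) that drafts an answer from the dialog.
    verify:   domain-trained truth checker that accepts or rejects the draft
              against the knowledge base (assumed interface).
    search:   retrieval over the knowledge base for fallback references.
    """
    draft = generate(question, dialog_history, knowledge_base)
    if verify(draft, knowledge_base):
        # The draft is supported by the knowledge base: pass it through.
        return CheckedReply(draft, True)
    # Hallucination detected: suppress the draft and point at source articles.
    refs = search(question, knowledge_base)
    return CheckedReply("I couldn't verify an answer; see these articles.",
                        False, refs)
```

The key design point is that the generated answer is never shown unchecked: the checker sits between the LLM and the user, and the failure mode is a citation list rather than a confident fabrication.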
“We tested our technology with a dataset of 1,000-plus articles across multiple different knowledge bases using multi-turn conversations with complex linguistic structures such as co-reference, context and topic switches,” said Chandra Khatri, former Alexa Prize team leader and cofounder of Got It AI, in a statement. “ChatGPT LLM produced incorrect responses for about 20% of the queries. The autonomous truth checking AI was able to detect 90% of the inaccurate responses. We also provided the customer with a simple user interface to the truth checking AI, to further optimize it to identify the remaining inaccuracies and eliminate virtually all inaccurate responses.”
I suppose that means the truth checker itself may need a human assist to catch the last few.
“While we fully expect OpenAI, over time, to address the hallucination problem in its base ChatGPT LLM models for ‘open domain’ conversations about any topic on the internet, our technology is a major breakthrough in autonomous conversational AI for ‘known’ domains of content, such as enterprise knowledge bases,” said Amol Kelkar, cofounder of Got It AI, in a statement. “This is not about prompt engineering, fine-tuning or just a UI layer. It is an LLM-based AI system that enables us to deliver scalable, accurate and fluid conversational AI for customers planning to leverage ChatGPT quickly. Truth checking the generated responses, cost-effectively, is the key capability that closes the gap between an R&D system and an enterprise-ready system.”
“There’s a whole repository of all the known mistakes,” Relan said. “Very roughly speaking, the word is it is up to 20%. It’s hallucinating and making up stuff.”
He noted that ChatGPT is open domain: you can talk to it about anything, from Julius Caesar to a math problem to gaming. It has absorbed the internet, but only up to 2021. Got It AI doesn’t try to double-check all of that. Instead, it targets a limited set of content, such as an enterprise knowledge base.
“So we reduce the scope and size of the problem,” Relan said. “That’s the first thing. Now we have a domain that we understand. Second is to build an AI. That is not ChatGPT based.”
That separate AI can then be used to evaluate whether ChatGPT’s answers are wrong. And that’s what Got It AI does.
“We’re not claiming to catch hallucinations for the internet, like everything on the internet that could possibly be” fact checked, he said.
With Got It AI, the chatbot’s answers are first screened by AI.
“We detect that this is a hallucination. And we simply give you an answer,” said Relan. “We believe we can get 90%-plus reduction in the hallucination right out of the box and deliver it.”
Others are trying to fix the accuracy problems too. But Relan said it isn’t easy to get high accuracy numbers, given the scope of the problem. And he said, “We’ll give you a nice user interface so you can check the answer, instead of giving you a bunch of search results.”
Product and private beta
Back in 2017, Relan said that the big search, social network and e-commerce companies were late in grafting AI onto their businesses.
Got It AI’s truth-checking AI is being made available via its Autonomous Articlebot product, which leverages the same OpenAI generative LLMs used by ChatGPT. When pointed at a knowledge base or a set of articles, Articlebot requires no configuration to train itself on the target content; users can start testing it within minutes of signing up for contextual, multi-turn, enterprise-grade conversational AI in customer support, help desk and agent-assist applications.
Got It AI is accepting inquiries into its closed beta at www.got-it.ai.
Relan is a well-known entrepreneur whose YouWeb incubator helped spawn startups such as mobile gaming companies OpenFeint and CrowdStar. He also helped Discord, the popular game chat platform, get off the ground.
Got It AI spun out of another startup that Relan had been incubating for about five years; the new company was unveiled last summer. Got It AI has about 40 people and has raised about $15 million to date, partly from Relan’s own venture fund.