Google Bard may be competing with ChatGPT, but it’s also competing with Google Search. Plenty of people are turning to AI chatbots over traditional search engines, which is why Bing Chat and Google Search Generative Experience both exist as AI tools that blend generative AI with traditional search.
But unfortunately, generative AI gets things wrong — a lot. So Google is reminding people that when in doubt, you should still Google it rather than Bard it.
In an interview with the BBC, Google UK executive Debbie Weinstein says you shouldn’t necessarily trust what Bard puts out, and she has a really good reason why. Weinstein says that Bard simply isn’t designed to be a tool to “search for specific information.”
“We’re [Google] encouraging people to actually use Google as the search engine to actually reference information they found.”
Google UK executive Debbie Weinstein
Instead, Weinstein says users should leverage Bard’s generative AI capabilities for “collaboration” and “problem-solving,” while turning to Google Search to verify anything the chatbot tells them.
If only Google had followed its own advice when it launched Google Bard. In Bard’s initial launch demo, the AI chatbot got a fact wrong about the James Webb Space Telescope, a blunder that wiped more than $100 billion off the company’s market capitalization.
So next time you need to research for a school project or trivia night, leave Bard alone and head to Google Search or another traditional search engine. But if you need to compile your research into a digestible summary, then you can head over to Bard and let AI give you a hand.
Surprisingly honest about generative AI
To be fair, aside from that initial snafu, Google has been relatively consistent on the point that AI shouldn’t be blindly trusted. Weinstein describes Bard as an “experiment,” a label Google also uses on the Bard homepage. In fact, Bard explicitly says up front that “Bard is an experiment and may give inaccurate or inappropriate responses.”
However, people are still getting the hang of AI, and not everyone is aware yet that it frequently makes mistakes. And some of Google Bard’s competitors aren’t quite as upfront about their AI’s shortcomings. Bing Chat does give some caveats, saying “surprises and mistakes are possible” and that you should “Make sure to check the facts,” but it doesn’t feel like quite as blunt a disclaimer as Google Bard’s.
Still, both Google and Microsoft are brutally honest in comparison to OpenAI. When you start using ChatGPT, no disclaimer pops up — just a line of fine print at the bottom of the user interface that says “ChatGPT may produce inaccurate information about people, places, or facts.” Very subtle in comparison to the other two chatbots.
So when in doubt, follow Weinstein’s advice and don’t trust AI chatbots like Bard, Bing Chat or ChatGPT when you just need a simple question answered. Instead, turn to them as a collaborative tool to make your life easier. And if you need more advice on how to use ChatGPT and other chatbots, check out these seven tips to get the most out of generative AI chatbots.