
Google Bard – Review 2023

Table of Contents

  • What Is Google Bard?
  • PaLM2 Is a Work in Progress
  • How Do You Access Google Bard?
  • How Much Does Google Bard Cost?
  • Reviewing and Deleting Past Conversations in Google Bard
  • Exporting to Gmail and Google Apps With Bard
  • Creating Tables and Compiling Data With Google Bard
  • Bard’s Practical, Straightforward Writing Style
  • Deep Dive: Bard, Bing Chat, and ChatGPT Write the Same Essay
  • Bard Cites Its Sources, But They’re Questionable at Best
  • Bard Is More of an Aggregator Than a Creative Thinker
  • Coding With Google Bard
  • Researching, Shopping, and Travel Planning With Bard
  • Is Bard the Future of Google Search?
  • Bard Is an Evolving, Promising Chatbot

Although Google calls Bard an “experiment” that’s still in beta, it’s a fully fledged generative artificial intelligence (AI) chatbot with helpful features that make it a compelling ChatGPT alternative. Bard has a simple interface, which is dominated by a chat function that instantly returns concise, human-like answers to complex questions. It writes with more straightforward and simplistic language than ChatGPT, which feels consistent with its overall focus on productivity.

While Google’s AI model still needs work, in the long run, Bard is well positioned to become the go-to assistant for anyone who uses Google apps thanks to the option to export its responses to Gmail, Docs, Sheets, and Colab. It’s also better for research, shopping, and travel planning than ChatGPT, as it has access to timely information from the web. Bard makes an effort to cite its sources, though without enough consistency to earn it an Editors’ Choice award. Still, Bard offers the most well-rounded capabilities of any generative AI tool we’ve tested.

What Is Google Bard?

Bard is Google’s generative AI tool, which the company released in February 2023, just months after ChatGPT’s November 2022 debut. Google executives issued an internal “code red” over the competitive threat ChatGPT posed to search, Google’s bread and butter, and sped up Bard’s release.

Bard is a large language model (LLM), just like ChatGPT and the AI now included in Microsoft Bing Chat. If you ask Bard a question (“What is the most popular Thanksgiving dish?”) or any other request (“Make me a unique stuffing recipe, incorporating chestnuts”), it combs through a massive body of data to instantly return an answer with complete sentences that mimic human language. 

The AI model behind Bard is called PaLM2, which stands for Pathways Language Model, version 2. It is trained on trillions of tokens (words and word fragments) and uses them to predict the next word in a sentence, mimicking human speech. This is the secret sauce that makes Bard, ChatGPT, Bing Chat, and other LLMs so convincing, as well as dangerous when they accidentally slot falsehoods and half-truths into a polished sentence.
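As a toy illustration of that idea (this is not PaLM2 or any real model, just the core mechanic), a language model scores candidate next tokens and picks among the most likely ones:

```python
# Toy illustration of next-token prediction. A real LLM learns probabilities
# from trillions of tokens; here we hard-code a tiny table of made-up
# probabilities purely for demonstration.
toy_model = {
    "the most popular": {"Thanksgiving": 0.4, "dish": 0.3, "answer": 0.3},
    "most popular Thanksgiving": {"dish": 0.7, "recipe": 0.2, "side": 0.1},
}

def next_token(context: str) -> str:
    """Greedily pick the highest-probability next token for a context."""
    probs = toy_model.get(context, {})
    return max(probs, key=probs.get) if probs else "<unknown>"

print(next_token("the most popular"))          # Thanksgiving
print(next_token("most popular Thanksgiving")) # dish
```

Scale that lookup table up to billions of learned parameters and you get fluent, confident sentences, whether or not the underlying facts are right.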

PaLM2 Is a Work in Progress

PaLM2 is more powerful than its predecessor, but it’s still likely less capable than GPT-4 and GPT-3.5, the models that power ChatGPT. OpenAI, which makes ChatGPT, and Google are both secretive about their models, so the public knows little about them. However, an ongoing effort by the University of California, Berkeley aims to better understand this technology and has polled more than 40,000 people on which model’s output they prefer in a blind test. In that ranking, PaLM2 sits sixth as of this writing, well below ChatGPT’s GPT-4 and GPT-3.5, which hold first and second place, respectively.

Google maintains that Bard is an “experiment.” A tiny disclaimer below the chat function reads, “Bard may display inaccurate or offensive information that doesn’t represent Google’s views.” However, the company is continually investing in it, most recently giving it the ability to execute code on its own to improve its logic and reasoning. At this point, the “experiment” label mostly serves as a general caveat for Google not to be held liable whenever Bard returns something nonsensical, or worse, unethical.

How Do You Access Google Bard?

You access Google Bard at bard.google.com. It requires a Google account (either personal or a Workspace account). This website is distinct from the core google.com search page, so you have to remember to pull it up on its own. Bard does not have a mobile app at this time, although you can access it on a mobile browser. 

A chat function dominates the simple interface. Once you start entering prompts, the screen fills with Bard’s responses, which are sometimes quite lengthy. You can ask follow-up questions. On the top-right side, a limited menu offers Use Dark Theme, Help, and FAQ options, as well as chat-specific features like Reset Chat and See History.

Since Bard works in Chrome, it’s a good choice for a default chatbot if you already use that browser. Microsoft Bing’s new AI search, on the other hand, can only be used in Microsoft Edge, which all but rules it out for Chrome aficionados. You can, however, use Bard on Edge if you so choose.

Bard is available in 180 countries, compared with 195 for ChatGPT. Notably, Bard does not work in Canada due to regulatory concerns, whereas ChatGPT does.

How Much Does Google Bard Cost?

Bard is free, although that depends on how you define “free.” It requires no payment to use, but every prompt you enter helps train Google’s AI model to compete with ChatGPT, Bing Chat, and other LLMs.

Bing Chat and ChatGPT are also free, though ChatGPT offers a more advanced plan for $20 per month.

Reviewing and Deleting Past Conversations in Google Bard

Google offers several settings that limit, though don’t completely prevent, it from storing data from your conversations. A toggle turns off chat history, which means Google will no longer “store your activity…such as the prompts you submit, the responses you receive, and the feedback you provide,” according to the Bard help page. However, even with chat history off, Google says it will save your conversations for up to 72 hours, as “This lets Google provide the service and process any feedback.”

Google also says there are some conversations you cannot prevent it from storing and using to improve the product: specifically, those combed through by “human reviewers,” the AI trainers staffed to monitor and refine the model’s output. Those conversations are not deleted when you delete your Bard activity; they are “kept separately” and “retained for up to three years,” Google says.

The toggle also won’t delete data from previous exchanges, which you need to delete one by one on the chat history page. ChatGPT offers a similar ability to review and delete prior conversations, although its version is far superior to Bard’s because you can see the full conversation you had with ChatGPT on any given date and pick it back up.

Exporting to Gmail and Google Apps With Bard

My favorite feature of Bard is its ability to export to Gmail, Docs, Sheets, and Colab (a tool for running Python code). The best way to use chatbots, as I’ve written about before, is to take what they give you as a starting point to build from. Being able to directly pull the AI content into tools where you can keep working is spot-on. For example, if you ask Bard to draft an email, you can easily port it into Gmail, make some adjustments, and hit send.

In contrast, ChatGPT exists on an island with no supporting ecosystem of established products. It only offers a one-click copy, which Bard and Bing Chat also offer, so you can paste the content wherever you like. Bing Chat’s unique export options include exporting to Microsoft Word, creating a PDF (I do not recommend codifying a chatbot’s output into a static PDF before fact-checking it), and generating a text file.

Creating Tables and Compiling Data With Google Bard

I find the ability to export to Sheets particularly useful for creating tables, as data collection is one area where chatbots shine over a link-based Google search. I asked Bard to generate a list of all the electric vehicles on the market today, the same question I asked ChatGPT and Bing Chat. Like its competitors’ lists, Bard’s was incomplete, and the starting prices were wrong.

But unlike its competitors, Bard went the extra step of adding columns with valuable information like manufacturer, body style, and range. That gives me a welcome head start to export to Sheets, fact-check, and expand with my own research (using the main Google search).

The ability to have a running conversation with Bard helps hone the table you’d like to build before exporting it. Keep in mind, Bard cannot update an existing spreadsheet, which is another reason it’s more of a “helpful starting point” than an ongoing personal assistant.

After seeing its first list of EVs, I asked it to add a column with manufacturing location, which it immediately returned. The ability to instantly modify a table until I have the perfect draft to finish on my own with fact-checking and editing saves hours of work spent sifting through links on my own, copying down data points, and formatting the table.

Again, though, fact-checking is essential. Bard indicated the Kia Niro EV and Kia EV6 are manufactured in Georgia, but they are actually made in South Korea. This is a significant detail, as it is the main reason Kia’s EVs do not qualify for the $7,500 federal tax credit.

Bard’s Practical, Straightforward Writing Style

As an LLM, Bard’s main goal is to serve up helpful information through convincingly human-like sentences. Its writing style is different from ChatGPT’s verbose, intellectual sentences. Instead, it often writes in shorter prose that gets to the point quickly. 

To quantify its writing skills, I asked Bard, ChatGPT, and Bing Chat the same 20 questions and ran the answers through a Flesch-Kincaid calculator to determine their grade level and readability. Bard scored the lowest of the three on grade level, coming in at 9th grade on average, compared with 11th grade for Bing Chat and 13th for ChatGPT. But it scored the highest on readability (59), compared with 49 for Bing Chat and 40 for ChatGPT. Its simpler sentences are easier to read.
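For reference, the Flesch formulas themselves are simple to compute. Below is a minimal Python sketch; the syllable counter is a crude vowel-group heuristic, so its numbers will drift slightly from a dedicated calculator’s.

```python
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count groups of consecutive vowels in the word."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_scores(text: str) -> tuple[float, float]:
    """Return (reading ease, grade level) per the standard Flesch formulas."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0, 0.0
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / sentences   # average words per sentence
    spw = syllables / len(words)   # average syllables per word
    ease = 206.835 - 1.015 * wps - 84.6 * spw
    grade = 0.39 * wps + 11.8 * spw - 15.59
    return ease, grade

ease, grade = flesch_scores("Bard writes short sentences. They are easy to read.")
print(f"Reading ease: {ease:.0f}, grade level: {grade:.1f}")
```

Longer sentences and longer words push the grade level up and the reading-ease score down, which is exactly the split between ChatGPT’s verbose answers and Bard’s terse ones.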

While that sounds appealing, recent studies suggest you may not prefer it. “Research has shown that humans tend to prefer friendlier chatbots even when they tend to hallucinate over something that is more precise but succinct,” says Nazneen Rajani, research lead at AI company Hugging Face. In an AI context, the term “hallucinate” means to make up facts, or give false reasoning.

Even if Bard is more to the point, it can seem less “smart” and more “computer-like,” which makes me slightly less comfortable with it than with ChatGPT on a first read, even when the content ends up quite similar. But perhaps that’s a healthy reminder that I cannot trust it without verifying first.

Deep Dive: Bard, Bing Chat, and ChatGPT Write the Same Essay

I asked Bard, Bing Chat, and ChatGPT-3.5 to write a biography of Nelson Mandela. ChatGPT and Bard had the most similar responses, at 558 words and 452 words, respectively. Bing Chat’s was 129 words. 

What did Bard and ChatGPT say in all those words? Their approaches differed slightly, but ultimately the information was nearly identical. Bard’s opening sentence cuts right to the chase, stating why Mandela was a significant figure. ChatGPT starts from the beginning of Mandela’s story, when he was born, and unfolds the events of his lifetime.

  • Bard: Nelson Mandela was a South African anti-apartheid revolutionary, political leader, and philanthropist who served as President of South Africa from 1994 to 1999. 
  • ChatGPT: Nelson Rolihlahla Mandela, also known as Madiba, was born on July 18, 1918, in the village of Mvezo, Transkei, South Africa. 

On the first read, I found ChatGPT’s answer to be more impressive and human-like in its storytelling. Bard’s felt more like a regurgitation of the Wikipedia article on Mandela, which has a nearly identical first paragraph.

However, both would need tweaks and embellishments to reach a satisfactory human-written answer. Bard made it a bit easier to edit quickly by including a bulleted list of Mandela’s accomplishments, whereas ChatGPT wrote entirely in paragraphs, which take longer to read and manipulate. That said, not all of Bard’s bullets were high-quality examples of accomplishments. One was, “Died on December 5, 2013.” As with all chatbots, Bard’s output is a starting point that requires a human editor.

Bard also offered three drafts of the essay (I reference the first version above), each with slightly different details. For example, only one draft mentioned Mandela’s work combating poverty and HIV/AIDS.

I prefer Bard’s three draft options to ChatGPT’s singular, polished answer. It’s a warranted reminder that the system has to selectively choose what information to present, and not everything makes the cut. It’s always important to continue the research on your own and not limit your knowledge of a subject to a canned, AI-generated answer, no matter how convincing and all-knowing it seems.

Bard Cites Its Sources, But They’re Questionable at Best

Given the serious, present-day concerns over generative AI tools spreading misinformation, Bard scores points for citing its sources. This isn’t something the free version of ChatGPT (3.5) can do because it has no web connection and no ability to create links; it’s trained on a fixed set of source data. Bing Chat also cites its sources and is, in my opinion, the best of the three at it.

For the Mandela answer, Bard listed three sources at the end: Wikipedia, a website that sells posters, and an article on why Mandela went to prison. While having citations is a step up from ChatGPT having none at all, I prefer the way Bing Chat does it. Bing Chat ends every sentence with a footnote that links to sources that are generally of higher quality. For this example, Bing Chat cited History.com, Wikipedia, and Britannica.com, all of which provide a stronger jumping-off point for me than a site that sells posters. Bing Chat also included an ad for two books on Mandela.

However, not all of Bard’s answers come with citations, and even when they do, sometimes they are completely nonsensical. When I asked Bard to write a biography of Martin Luther King, Jr., it generated an essay that seemed of similar quality and tone to the Mandela one, yet claimed to be pulled from a blank URL, a product listing for MLK memorabilia, and a GoDaddy domain that’s up for sale. The citations are disconnected from the content, making me question whether Bard actually uses the sources it cites, whether they are lip service to appear trustworthy, or whether it’s some murky combination.

Bard Is More of an Aggregator Than a Creative Thinker

Bard is pretty solid at aggregating information from its sources but struggles to generate quality, new information.

When I asked it to write a TV scene of a woman rescuing a dog, the same prompt I gave ChatGPT and Bing Chat, it returned a 280-word script with basic, one-character dialogue that would appeal to a toddler at best. Bing Chat’s was similarly simplistic, and painfully cliché. ChatGPT’s version was the best, with double the word count (550), four characters, and three settings. It was the only one to interpret the word “rescue” to mean adopt from a shelter, not save from being stuck outside.

Another thing Bard cannot do: generate AI images, which Bing Chat can do for free thanks to a partnership with OpenAI’s DALL-E software. In general, while Bard might be good for administrative tasks like writing emails, coding small programs, and creating tables, I recommend ChatGPT for creative writing and Bing Chat if you’d like to play with AI-generated images.

Coding With Google Bard

Bard can also help generate code in at least 20 programming languages. At the company’s annual I/O conference, one speaker noted that coding assistance is one of the most popular uses of Bard. Like ChatGPT, it promises the ability to debug code, explain the issues, and generate small programs for you to copy. But it goes a step further than ChatGPT with the ability to export to Google Colab and, as of a mid-July update, Replit for Python.

However, Google warns against using the code without verifying it. “Yes, Bard can help with coding and topics about coding, but Bard is still experimental and you are responsible for your use of code or coding explanations,” an FAQ reads. “So you should use discretion and carefully test and review all code for errors, bugs, and vulnerabilities before relying on it.” But unlike inaccurate text, code can be easier to fact-check. Run it and see if it works.
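For instance, suppose a chatbot hands you a small utility function. The `slugify` example below is a hypothetical stand-in for generated output, not actual Bard code, but a few assertions make a cheap smoke test before you rely on it:

```python
# Hypothetical example of code a chatbot might return: a function that
# turns a title into a URL-friendly slug. Before trusting it, run quick tests.
import re

def slugify(title: str) -> str:
    """Lowercase the title, strip punctuation, and join words with hyphens."""
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

# Cheap smoke tests: if any assertion fails, the generated code is wrong.
assert slugify("Google Bard -- Review 2023") == "google-bard-review-2023"
assert slugify("  Hello,   World!  ") == "hello-world"
assert slugify("") == ""
print("all checks passed")
```

Passing a handful of assertions doesn’t prove the code is bug-free, but it catches the obvious failures far faster than proofreading generated prose.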

Google also recently admitted Bard is subpar at math and logic-based word problems, and says it’s now 30% better. “Thirty percent better than ‘very poor’ is still very poor,” a mathematician reader recently emailed me. He’d been experimenting by asking ChatGPT and Bard complex questions and concluded that “Bard is terrible at math,” but ChatGPT wasn’t much better.

It’s hit or miss. In one example, Bard said a number I entered was not prime because it was divisible by several numbers, which was wrong. ChatGPT, however, correctly said it was prime, though both presented their answers with equal confidence.
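Claims like this are easy to verify yourself. The review doesn’t say which number was tested, so the sketch below checks a well-known prime using standard trial division:

```python
def is_prime(n: int) -> bool:
    """Trial division: test odd divisors up to the square root of n."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

# Verify a chatbot's claim instead of taking it on faith.
print(is_prime(2_147_483_647))  # True: 2^31 - 1, a well-known Mersenne prime
```

A few lines of deterministic code settle a question that two chatbots answered with equal confidence and opposite conclusions.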

Researching, Shopping, and Travel Planning With Bard

When using the web for research, I prefer Bard over Bing Chat and ChatGPT. Bing Chat is only available in Edge (not my preferred browser), and ChatGPT does not cite sources or have information past 2021. So, Bard is the best option for me if I’m going to use a chatbot, though I generally still prefer a traditional Google search.

I asked Bard and ChatGPT for recommendations on what to do in New York City this weekend. Bard listed specific upcoming activities, with photos and links to learn more. ChatGPT, which lacks a web connection, gave generic suggestions like “visit the Statue of Liberty” in a few paragraphs (no images).

Both are helpful, but Bard gets you closer to booking a more customized trip, especially after some back and forth (“What bands are playing in Brooklyn this weekend?”). The same goes for shopping recommendations and restaurant reviews. For more information, Bard’s Google It button pulls up a traditional search results page.

While I generally do not use chatbots for work research because they are difficult to fact-check, on one occasion I did use Bard and found its answer to be impressive and something I could not have produced on my own or with a typical Google search.

I needed to know how many Tesla Supercharger locations there are in the US. The Tesla website contains a long list of locations but lacks a total count. I sent the link to Bard and asked it to count the unique locations on the page. It returned an answer: 3,478. This was too big for me to fact-check on my own without spending a whole day on it, though I was able to corroborate the answer with what other publications had reported, giving me a higher degree of confidence in the information (though not enough to cite it in a published piece).
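If you wanted to corroborate a count like that yourself, a scraping sketch is one way to start. Both the URL path and the idea of counting link texts below are assumptions about the page’s markup, so each would need adjusting after inspecting the actual source:

```python
# Hedged sketch: count unique location entries on a listings page.
# The URL and the selector are assumptions; inspect the real page and
# adjust both before trusting the number.
import requests
from bs4 import BeautifulSoup

URL = "https://www.tesla.com/findus/list/superchargers/United+States"  # assumed path

html = requests.get(URL, timeout=30).text
soup = BeautifulSoup(html, "html.parser")

# Deduplicate by visible link text; swap in a more specific CSS selector
# (one targeting the location list) once you've inspected the markup.
locations = {a.get_text(strip=True) for a in soup.find_all("a") if a.get_text(strip=True)}
print(f"Unique entries found: {len(locations)}")
```

Even a rough script like this gives you an independent number to compare against the chatbot’s, which is the whole point of fact-checking its output.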

While I see a lot of potential in using AI for online search, I still default to a traditional Google search, as I end up there anyway for fact-checking and expanded research. Plus, browsing links and letting my brain pick out the information it needs, rather than letting Bard do it for me, is faster and leaves me feeling more in control.

Is Bard the Future of Google Search?

That question is the million-dollar one for Google, which fields 93% of global search traffic. Google has said it plans to add a Bard-like component to the core Google search results page at some point, and the demo it showed in May sent a chill down the journalism industry’s spine.

I’ve been using the beta version of the new Search Generative Experience (SGE), as it’s called, for more than a month, having signed up via a waitlist in Google Labs. It’s not quite the same as Bard, but it’s similar. Mostly, it’s a modified version of the mini summaries that already appear for some searches, just built out and with more links: an AI-generated summary sits at the top of the page, above the results a typical Google user would see.

I tend to gloss over the AI-generated section most of the time, though sometimes it catches my eye as I scroll to the links, which are harder to get to because SGE takes up so much space. Like Bard’s answers, however, I find its results limiting. They are more of a primer for what I may want to explore on my own in the links below, unless it’s a very simple question.

Some version of SGE is likely to appear on Google in the next year, I’d wager, though the company has not committed and the product is not yet openly available to the public. But if and when Google releases it widely, it’s unclear what will happen to Bard. For that reason, I find it hard to commit much time to getting to know it. If you’re looking for a more permanent product, ChatGPT and Bing Chat seem like better options right now.

Bard Is an Evolving, Promising Chatbot

Given what I believe should be the goal of general-use AI chatbots at this point in their short existence—that is, as a helpful starting point that requires human judgment and editing to be useful—Bard is the best option I’ve tested. It can speed up administrative tasks and research thanks to its ability to export responses to Gmail, Docs, Sheets, and Colab, where you can clean up the results and finish your task in less time.

As for its weaknesses, Bard pales in comparison with ChatGPT for creative writing, and it struggles with accuracy, though all three do. Bing Chat has a leg up thanks to its image generation and superior source citing, which make it easier to fact-check and build upon its research than Bard’s shifty citations allow. At least Bard presents multiple draft options, which mildly increase transparency for an otherwise incredibly opaque system, though some may find it verbose compared with Bing Chat’s restrained answers. Overall, all three have distinct pros and cons, and no one chatbot emerges as a clear Editors’ Choice winner.
