Google Staff Tried to Block Bard Release Over Accuracy Concerns

  • Some Google employees have been raising the alarm about the company’s AI development, per the NYT.
  • Two workers tried to stop the company from releasing its AI chatbot, Bard, the publication reported.
  • The pair were concerned the chatbot generated dangerous or false statements, it added.

Some Google employees have been raising the alarm about the company’s artificial intelligence development, The New York Times reported.

According to the Times report, two Google employees who were tasked with reviewing AI products tried to stop the company from releasing its AI chatbot, Bard. The pair were concerned the chatbot generated dangerous or false statements, per the report.

In March, the two reviewers working under Jen Gennai, the director of Google’s Responsible Innovation group, recommended blocking Bard’s release in a risk evaluation, two people familiar with the process told the publication. The employees felt that despite safeguards, the chatbot was not ready, it added. 

However, The New York Times reported that the people it spoke to said Gennai altered the document to remove the recommendation and downplay the chatbot's risks.

Gennai told Insider she had “added to the list of potential risks from the reviewers and escalated the resulting analysis” to a committee of senior product, research, and business leads, which determined it was appropriate to move forward with a limited experimental launch. She told The Times that reviewers were not supposed to weigh in on whether to proceed.

A representative for Google told Insider: “We’re pleased with the early reception of our experiment with Bard, even as we keep improving it, with commentators and users widely recognizing that it’s been released conservatively, with significant caution and limits.”

In recent months, the tech world has been rushing to deploy generative AI products. The race was seemingly prompted by the release and viral popularity of OpenAI’s ChatGPT, but the speed of development is raising alarm elsewhere. 

In March, several AI heavyweights signed an open letter calling for a six-month pause on advanced AI development. The letter said AI companies were locked in an “out-of-control race” and cited profound risks to society from the advanced technology.

John Burden, one of the letter’s signatories and a research associate at the Centre for the Study of Existential Risk, previously told Insider that the rate of AI development had picked up at an unprecedented pace.

“Things that five years ago would have seemed unrealistic to expect in the next decade have come and gone,” he said. “On a bigger scale, we just aren’t ready for the impact that this technology might have — considering we don’t really know how these models are doing what they are doing.”
