
The internet is now Google Bard’s to consume

Google Bard concept art mockup

Google has updated its privacy policy, a routine and frequent measure that would usually be entirely unremarkable — usually. However, Google’s privacy policy update is unique in the sense that the alteration now affords its AI chatbot, Bard, access to just about everything publicly available online.

If you can find it with a Google Search, it’s fair game. That’s what the new policy implies, at least.

Google’s updated privacy policy: What’s new?

The majority of the search engine giant’s privacy policy remains untouched, with the standard “Google uses collected information to improve our services” spiel as its backbone. The most notable change is snuck into Google’s explanation of how it will make use of public information, with the policy stating:

“We may collect information that’s publicly available online or from other public sources to help train Google’s AI models and build products and features like Google Translate, Bard, and Cloud AI capabilities.”

The policy’s previous wording indicated Google’s intent to use public information to train Google Translate. However, the company’s ambitions have seemingly expanded since then, prompting it to amend the policy to reflect its desire to use public information to train “AI models” and build “products” like Bard and Cloud AI features.

Google’s updated privacy policy: What does it mean?

When a company publishes a privacy policy, it’s usually a way to signal how it plans to use or protect information gained through direct use of one of its services.

Google’s policy does that too, but it then lays claim to the public internet at large, unashamedly treating the world wide web as fair game for harvesting, processing, and force-feeding into AI projects like Bard.

If you’ve ever made a public post online, you were presumably aware that it was visible: a passing user or an indexing search engine could come along, see the post, and cite it in replies or search results.

In the era of Large Language Model (LLM) chatbots, things are quite different. That information can now be consumed en masse, digested, and regurgitated to others under the guise of an intelligent, artificially crafted response.


Frankly, that’s something none of us anticipated when flexing our 14-year-old insights about how Linkin Park and Limp Bizkit are the Beatles and Rolling Stones of our era on some long-forgotten Angelfire blog of yesteryear.

Does Google have the right to do this? Yes. Sort of. Technically, a private entity like Google faces few restrictions on what it can do with information or data collected from publicly available sources.

It’s the basis of how Google Search works, after all: scraping billions of public webpages every day to index them into its megalithic databanks. But just because Google can do this doesn’t mean people will feel any easier about the fact that it intends to.

Outlook

More and more questions are being raised about the ethics and legality of training AI on public information, and while there are no legal roadblocks standing in Google’s way, perhaps it’s time there were.

For everything AI can do, it can’t yet truly create; it can only interpret and imitate. As such, there’s no guarantee of how your words, your images, your videos, or your voice will be used in that process.

I find it fascinating, if not a little disturbing, that a company would be willing to give its chatbot such unrestricted access to people’s information when its own parent company, Alphabet, is already wary of Bard’s loose lips with its own data.
