
How one answer from Google’s AI wiped $100 billion off its value – and why it could get worse


They were 14 words that could have cost Google $100 billion, and prompted innumerable jokes.

Nasa’s recently launched James Webb Space Telescope “took the very first pictures of a planet outside of our own solar system”, Google’s new Bard system claimed in an example conversation posted by the company.

But it didn’t. As Nasa itself has made clear, the first pictures of a world outside of our solar system were taken by the European Southern Observatory (ESO) in 2004.

It led to mockery from people and organisations, including ESO itself.

But the jokes were a reminder of something altogether more serious, and seriously worrying for both Google and the other companies betting their futures on artificial intelligence.

Google’s announcement of its Bard chatbot was intended to show how it is preparing for that AI future. Though the company has long said it is orienting itself around AI, in recent weeks it has faced questions over whether it has allowed itself to be overtaken by competitors such as OpenAI’s viral ChatGPT. In announcing Bard, it appeared to be signalling that it was ready to catch up.

In its announcement, Google said it would work to ensure the system’s answers were accurate, looking to “combine external feedback with our own internal testing to make sure Bard’s responses meet a high bar for quality, safety and groundedness in real-world information”.

But the failure to do exactly that was perhaps the most discussed takeaway from the announcement. The mistake about the exoplanet images, combined with an AI showcase event that offered no real breakthroughs and was largely tepidly received, sent the company’s share price falling.

Bard’s error was so notable in part because Google has suggested that the system is not just interesting in itself but a potential replacement for search. It could synthesise information so that users can be given specific answers to their questions, for instance, rather than having to hunt for that information on a more general web page. But that only works if the system is accurate.


The error may have been minor, but people routinely turn to Google for grave and serious issues, such as health advice. The company has said it is already using AI to spot when people’s searches suggest they are in a mental health crisis, and to offer them resources. That is a reminder of how consequential some Google searches are, and why the company must get the answers right.

Such mistakes may become more common as the world comes to rely on artificial intelligence systems like ChatGPT and Bard. These systems are convincing and confident, but they have little way of knowing whether they are correct.

They are trained on real-world text posted across the internet, and so they will reproduce any mistakes the original authors of that text made: Bard may simply have read another erroneous article about images of exoplanets, or misunderstood a correct one, and repeated the error.

Sometimes, they will make mistakes entirely of their own. ChatGPT has repeatedly been found unable to do even basic maths, for instance, presumably because it does not understand the underlying concepts in the way a human, or even a calculator, can.

All of those mistakes are delivered in the same confident tone Bard used when it erred, which means such systems can feel convincing even when they are wrong. ChatGPT has been characterised as the ultimate “bulls***” artist, in that it tends to answer firmly whether or not it is correct.

Even ChatGPT suggests that users should not trust it too much. Asked by The Independent whether people should rely on it, it encouraged scepticism about its answers.

“As a language model developed by OpenAI, ChatGPT can provide helpful and informative responses to a wide range of questions. However, it is important to keep in mind that ChatGPT is not a substitute for professional advice and that its responses are generated based on patterns in the data it was trained on,” it said.

“In some cases, the information provided by ChatGPT may be inaccurate or out-of-date. Therefore, it’s always a good idea to verify any information obtained from ChatGPT or any AI-powered tool with multiple sources.

“In general, ChatGPT is best used as a tool to assist in finding information and generating new ideas, but not as a sole source of truth.”
