
Elon Musk Takes Jab at ChatGPT as Propaganda Machine: ‘We Need TruthGPT’

Elon Musk is pushing back on ChatGPT’s growing popularity, saying the AI program isn’t safe for mainstream public use because it can spout lies and propaganda.

“What we need is TruthGPT,” he tweeted on Friday.

On Twitter, Musk has been calling out the flaws of ChatGPT, which is being integrated into Microsoft’s Bing search engine to enhance the search experience. “Agreed! It is clearly not safe yet,” Musk wrote in response to a tweet that called on Microsoft to shut down ChatGPT in Bing.


The exchange is perhaps a bit ironic, since the initial tweet came from an online personality who is not exactly known for his truth-telling. Twitter as a platform and Musk himself have also been accused of spreading misinformation.

Still, it’s clear that ChatGPT is a powerful tool: it can write entire essays, summarize complex topics, and even generate computer code from a mere text prompt. But in recent days, social media users have posted numerous examples of the mistakes the AI program can make, including factual errors, emotionally unhinged replies, and refusals to address some politically sensitive topics while answering others.

The flaws prompted Musk to take several jabs at ChatGPT, including deriding the AI program as a propaganda machine that could supplant mainstream media.


In addition, he’s been hurling criticism at ChatGPT’s creator, OpenAI, a San Francisco company that Musk helped found before cutting ties in 2018.

“OpenAI was created as an open source (which is why I named it ‘Open’ AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft,” Musk tweeted today. “Not what I intended at all.”

OpenAI this week acknowledged that its process for “fine-tuning” ChatGPT is “imperfect.”

“Sometimes the fine-tuning process falls short of our intent (producing a safe and useful tool) and the user’s intent (getting a helpful output in response to a given input),” OpenAI says. “Improving our methods for aligning AI systems with human values is a top priority for our company, particularly as AI systems become more capable.”

Musk’s critical stance isn’t a surprise. For years, he has sounded alarm bells about the dangers of AI. Back in 2014, he said: “With artificial intelligence, we are summoning the demon.” That same year, he tweeted that AI could be “more dangerous than nukes.”


Earlier this week, Musk said it’s time for governments to intervene. “I think we need to regulate AI safety, quite frankly,” he said at the World Government Summit in Dubai. “I think we should have a similar regulatory oversight for artificial intelligence because I think it is a bigger risk to society than cars or planes or medicine.”

Musk speaking at the summit in Dubai. (Credit: World Government Summit)

(Speaking of cars, Tesla, where Musk also serves as CEO, this week issued a voluntary recall of 362,758 vehicles over a beta update to its own AI software, which the company says may cause the vehicles to disobey local traffic laws and increase the risk of a crash.)

The calls for regulation may grow as companies roll out AI-powered chatbots to more users across the globe. In the meantime, there are signs that both OpenAI and Microsoft are taking public feedback into account as they tweak ChatGPT.

In OpenAI’s case, the company is preparing an upgrade that could allow users to customize ChatGPT to address its bias on certain sensitive topics. “This will mean allowing system outputs that other people (ourselves included) may strongly disagree with,” the company said.

Microsoft, on the other hand, is considering adding more guardrails to rein in creepy responses from the ChatGPT-powered Bing, according to The New York Times. This includes limiting conversation lengths, since long exchanges can sometimes confuse Bing into giving bizarre responses.
