Sam Altman’s ChatGPT Has a Bias Problem That Could Get It Canceled

  • ChatGPT, like other AI tools, suffers from a bias problem that could impede corporate adoption.
  • OpenAI CEO Sam Altman acknowledged the technology has “shortcomings around bias.”
  • Corporate America won’t implement a tool that risks being accused of racism or sexism.


OpenAI is ready to start capitalizing on ChatGPT’s buzz.

On Wednesday, the firm announced a pilot $20-a-month subscription version of the chatbot called ChatGPT Plus, which gives subscribers priority access during peak times and faster responses. The free version remains available, but it is so popular that it is often at capacity or slow to respond.

In a clear push for commercialization, OpenAI also said it will roll out an API waitlist, different paid tiers, and business plans. OpenAI, it seems, believes enterprises will be willing to pay for its chatbot’s capabilities.

But there’s one big hurdle: Corporate America’s “woke-as-a-business-strategy.”

OpenAI’s CEO, Sam Altman, admitted on Wednesday that ChatGPT has “shortcomings around bias,” though he didn’t go into detail. In practice, that likely means the underlying AI model was trained in a way that sometimes leads it to spit out racist, sexist, or otherwise biased responses. The Intercept asked ChatGPT, for example, which airline passengers might present a bigger security risk. The bot reportedly spat out a formula that calculated an increased risk if a passenger either came from or had simply visited Syria, Iraq, Afghanistan, or North Korea.

Few big businesses with cash to throw around will subscribe to a black-box technology that risks putting them in the middle of a culture war. That is Altman’s biggest challenge in profiting from the tech.

Why ChatGPT needs to be woke 

The right-wing media ecosystem has accused ChatGPT of being too woke, saying the bot takes progressive stances on, for example, LGBT issues.

OpenAI could kowtow to conservatives here to ease the headache of political scrutiny. But that risks hurting its bottom line. The reality is that blue-chip companies remain sensitive to culture-war issues, fearing bad press and lost customers. Evidence suggests they’re right: a 2021 survey of 3,000 Americans found that a majority want CEOs to take a stance on issues such as racism and sexism. Being progressive is good capitalism, and the truly anti-woke crowd is a political minority.

NYU professor and business commentator Scott Galloway explicitly laid out woke-as-a-business-strategy last year, pointing, for example, to a Nike ad featuring Colin Kaepernick that referenced his taking a knee in support of Black Lives Matter.

Unfortunately for OpenAI, several cases of ChatGPT bias have already emerged. Its November release to the public put the technology within reach of an estimated 100 million users in just two months, according to UBS.

(Embedded tweet from Steven T. Piantadosi, @spiantado, December 4, 2022.)

We already have plenty of evidence that big US firms will shy away from anything that risks looking sexist or racist — not least from OpenAI’s own major financial backer, Microsoft.

Microsoft, which has invested an estimated $10 billion in OpenAI, released a chatbot on Twitter named Tay in 2016. It quickly turned xenophobic and spouted racial slurs.

The company shut it down and offered an apology for “the unintended offensive and hurtful tweets from Tay.”

Firms could push OpenAI to be more transparent about training data

Altman said repairing ChatGPT’s biases will be “harder than it sounds and will take us time to get right.”

Professor Michael Wooldridge, director of foundational AI research at the Alan Turing Institute, told Insider that bots like ChatGPT, which are trained on vast amounts of data, suffer from bias for several reasons.

For one, Wooldridge notes that “white, male, college-educated Americans” make up the main demographic of people building AI systems, so any biases they carry may feed into the bot. 

Another problem: all humans are at least somewhat biased.

“I think a lot of researchers would argue that actually, the more general problem is that however you get your training data, you’re absorbing societal biases even if it’s from a wide pool of people,” Wooldridge said.

OpenAI has not given detailed information on what data has been used to train GPT-3.5, the model underpinning ChatGPT, though Wooldridge notes that it’s likely to encompass the entirety of the web.

“That means all of Reddit, all of Twitter, every piece of digital text that they can get their hands on,” he said. “You don’t have to think very hard to realize there’s an enormous quantity of toxic content of absolutely every variety imaginable that’s present in that training data.”

Though OpenAI has found success so far, Wooldridge could see a scenario where the firm is pushed by customers to reveal its training data. It’s unlikely to immediately solve ChatGPT’s bias, but transparency and scrutiny may end up being better for OpenAI’s bottom line.
