
ChatGPT Has the FTC Worried About AI Scams and Hype

  • Tech companies and investors think generative AI will be an economic revolution.
  • The Federal Trade Commission warned companies that overhyping their products could violate the law.
  • The technology can make it easier to defraud voters and consumers.


Where tech companies and investors see gold, regulators see potential for overhyped marketing and fraud.

Since its November 2022 launch, ChatGPT has mesmerized millions and sparked an arms race among big tech companies and venture capital-fueled startups to revolutionize the way humans interact with computers. However, the Federal Trade Commission (FTC) is concerned that companies are exaggerating the AI revolution while handing scam artists more powerful tools to defraud the public.

By mapping patterns in human language from troves of training data, ChatGPT can generate novel responses to short prompts rather than simply retrieving existing information. Such chatbots fall under the umbrella of generative artificial intelligence, which also includes technologies that create images, sounds, and other media.

Investors say generative AI could spare workers enormous amounts of unnecessary labor. Even before ChatGPT came out, Sequoia Capital claimed generative AI could make knowledge workers “at least 10% more efficient and/or creative,” with “the potential to generate trillions of dollars of economic value.”

In recent months, Microsoft made a $10 billion deal with ChatGPT maker OpenAI to revive Bing search with generative AI, Meta revamped efforts to integrate the technology into its products, including Facebook and Instagram, and Google announced its rival chatbot, Bard, along with plans to build generative AI into its flagship search engine.

“AI will fundamentally change every software category, starting with the largest category of all – search,” said Microsoft chairman and CEO Satya Nadella when the company rolled out its new, AI-powered Bing search engine and Edge browser in February.

But the FTC is skeptical of these kinds of marketing claims, on top of the agency’s existing concerns about AI’s potential to introduce online bias, discrimination, and other consumer harms.

“Marketers should know that — for FTC enforcement purposes — false or unsubstantiated claims about a product’s efficacy are our bread and butter,” Michael Atleson, an attorney in the FTC’s Division of Advertising Practices, wrote in a blog post Monday.

Having beefed up its investigative abilities in February with the newly created Office of Technology, the agency said it plans to track exaggerated promises about what an AI product can do, claims of superiority over non-AI competitors, and the extent to which a company or product actually uses AI at all. The agency is also watching companies that fail to foresee and mitigate risks.

The technology has already shown serious limitations, so the hype has real consequences. Bing’s AI chatbot didn’t know what year it was, insulted users, and even claimed to love one Insider reporter. When Google’s AI chatbot made a factual error in an ad, Google’s parent company, Alphabet, saw its stock sink 9%.

While AI products may hold less promise than companies say, the FTC is worried they will make it easier to deceive voters and consumers.

“We’re also concerned with the risk that deepfakes and other AI-based synthetic media, which are becoming easier to create and disseminate, will be used for fraud,” FTC spokesperson Juliana Gruenwald told Insider.

In June 2022, the FTC recommended that Congress pass laws to ensure AI tools do not cause additional harm.

“The FTC has already seen a staggering rise in fraud on social media,” she said. “AI tools that generate authentic-seeming videos, photos, audio, and text could supercharge this trend, allowing fraudsters greater reach and speed,” with schemes ranging from imposter scams and identity theft to payment fraud and fake websites. Chatbots could exacerbate these trends, Gruenwald said.

While the FTC did not provide specific examples of AI-powered fraud, it’s not hard to imagine how people could be fooled. AI-generated audio of Joe Biden and Donald Trump trash-talking while gaming has already gone viral on TikTok. With deepfake video and speech-mimicking chatbots in the mix, bad actors could mislead the public with faked footage of politicians in the lead-up to an election. Similarly, phishing calls and emails could start sounding more like humans than robots, giving scammers new ways to steal from consumers and companies.

Aware of these threats, Gruenwald didn’t mince words: “We’re prepared to hold – and have held – accountable the firms and individuals who engage in these practices.”

If the FTC’s instincts are right, today’s generative AI may pose less economic disruption than deception.
