Non-profit consumer advocacy group Public Citizen demanded in a Tuesday letter that OpenAI withdraw its video-generation software Sora 2 after the application sparked fears about the spread of misinformation and privacy violations.
The letter, addressed to the company and CEO Sam Altman, accused OpenAI of hastily releasing the app so that it could launch ahead of competitors.
That showed a “consistent and dangerous pattern of OpenAI rushing to market with a product that is either inherently unsafe or lacking in needed guardrails,” the watchdog group said.
Sora 2, the letter says, shows a “reckless disregard” for product safety and people’s rights to their own likeness. It also contributes to the broader undermining of the public’s trust in the authenticity of online content, it argued.
The group also sent the letter to the U.S. Congress.
OpenAI didn’t immediately respond to a request for comment Tuesday.
More responsive to complaints about celebrity content
The typical Sora video is designed to be amusing enough for you to click and share on platforms such as TikTok, Instagram, X and Facebook.
It could be the late Queen Elizabeth II rapping or something more ordinary and believable. One popular Sora genre depicts fake doorbell camera footage capturing something slightly uncanny — say, a boa constrictor on the porch or an alligator approaching an unfazed child — and ends with a mildly shocking image, such as a grandma shouting as she beats the animal with a broom.
LISTEN | AI video app Sora 2 is here. Can you tell what’s real?:
The Current | 24:17 | The new AI video app Sora is here: Can you tell what’s real?
Whether it’s your best friend riding a unicorn, Michael Jackson teaching math, or Martin Luther King Jr. dreaming about selling vacation packages — it’s now easier and faster than ever to turn those ideas into realistic videos using the new AI app, Sora. The company behind it, OpenAI, promises guardrails to prevent violence and fraud — but many critics worry that the app could push misinformation into overdrive and pollute society with even more “AI slop.”
Public Citizen joins a growing chorus of advocacy groups, academics and experts raising alarms about the dangers of letting people create AI videos based on just about anything they can type into a prompt, leading to the proliferation of non-consensual images and realistic deepfakes in a sea of less harmful “AI slop.”
OpenAI has cracked down on AI creations of public figures doing outlandish things — among them, Michael Jackson, Martin Luther King Jr. and Mister Rogers — but only after an outcry from family estates and an actors’ union.
“Our biggest concern is the potential threat to democracy,” said Public Citizen tech policy advocate J.B. Branch in an interview.
“I think we’re entering a world in which people can’t really trust what they see. And we’re starting to see strategies in politics where the first image, the first video that gets released, is what people remember.”
Guardrails haven’t stopped harassment
Branch, who penned Tuesday’s letter, also sees broader threats to people’s privacy and says those could disproportionately impact certain groups.
WATCH | How Denmark is trying to stop unauthorized deepfakes:
How Denmark is trying to stop unauthorized deepfakes
AI-generated videos are everywhere online, but what happens when your image or voice is replicated without your permission? CBC’s Ashley Fraser breaks down how Denmark is trying to reshape digital identity protection and how Canada’s laws compare.
OpenAI blocks nudity, but Branch said that “women are seeing themselves being harassed online” in other ways.
Fetishized niche content has made it through the app’s restrictions. The news outlet 404 Media on Friday reported on a flood of Sora-made videos of women being strangled.
OpenAI introduced its new Sora app on iPhones more than a month ago. It launched on Android phones last week in the U.S., Canada and in several Asian countries, including Japan and South Korea.
Much of the strongest pushback against it has come from Hollywood and other entertainment interests, including the Japanese manga industry.
OpenAI announced its first big changes just days after the release, saying “overmoderation is super frustrating” for users but that it’s important to be conservative “while the world is still adjusting to this new technology.”
That was followed by publicly announced agreements with Martin Luther King Jr.’s family on Oct. 16, preventing “disrespectful depictions” of the civil rights leader while the company worked on better safeguards, and another on Oct. 20 with Breaking Bad actor Bryan Cranston, the SAG-AFTRA union and talent agencies.
“That’s all well and good if you’re famous,” Branch said. “It’s sort of just a pattern that OpenAI has where they’re willing to respond to the outrage of a very small population. They’re willing to release something and apologize afterwards. But a lot of these issues are design choices that they can make before releasing.”
WATCH | AI-generated ‘actress’ Tilly Norwood draws backlash:
AI-generated ‘actress’ Tilly Norwood draws Hollywood backlash
European AI production company Particle6 says its AI creation Tilly Norwood has generated a lot of interest, but Hollywood actors including Emily Blunt, Melissa Barrera and Whoopi Goldberg, as well as the SAG-AFTRA union, have come out against the AI character.
Lawsuits against ChatGPT ongoing
OpenAI has faced similar complaints about its flagship product, ChatGPT. Seven new lawsuits filed last week in California courts claim the chatbot drove people to suicide and harmful delusions even when they had no prior mental health issues.
Filed on behalf of six adults and one teenager by the Social Media Victims Law Center and Tech Justice Law Project, the lawsuits claim that OpenAI knowingly released GPT-4o prematurely last year, despite internal warnings that it was dangerously sycophantic and psychologically manipulative. Four of the victims died by suicide.
Public Citizen was not involved in the lawsuits, but Branch said he sees parallels with how Sora was released.
“Much of this seems foreseeable,” he said. “But they’d rather get a product out there, get people downloading it, get people who are addicted to it rather than doing the right thing and stress-testing these things beforehand and worrying about the plight of everyday users.”
OpenAI responds to anime creators, video game makers
OpenAI spent last week responding to complaints about Sora from a Japanese trade association representing famed animators such as Hayao Miyazaki’s Studio Ghibli and video game makers Bandai Namco, Square Enix and others.
OpenAI defended the app’s wide-ranging ability to create fake videos based on popular characters, saying many anime fans want to interact with their favourite characters.
But the company also said it has put guardrails in place to prevent well-known characters from being generated without the consent of the people who own the copyrights.
“We’re engaging directly with studios and rights holders, listening to feedback and learning from how people are using Sora 2, including in Japan, where cultural and creative industries are deeply valued,” OpenAI said in a statement about the trade group’s letter last week.
