YouTube expresses concern over OpenAI’s video training approach for Sora

OpenAI uses a vast amount of internet-sourced video content to train its AI model

What’s the story

Neal Mohan, the CEO of YouTube, has publicly expressed his apprehension regarding OpenAI’s potential use of YouTube videos to improve its AI-powered video creation tool, Sora.

During an interview with Emily Chang, host of Bloomberg Originals, Mohan stated that if OpenAI has indeed used YouTube videos for this purpose, it would be a “clear violation” of YouTube’s terms of use.

OpenAI’s content sourcing raises eyebrows

The controversy over the sources OpenAI uses to train its AI models is not new.

Sora, like other generative AI tools, works by consuming diverse content from across the web.

This vast data then serves as the basis for these tools to produce new content, including photos, videos, narrative text, and more.

There have also been reports suggesting that OpenAI is contemplating training its next-gen Large Language Model (LLM), GPT-5, using transcriptions of public YouTube videos.

Google’s stance on AI training using YouTube content

Mohan clarified Google’s position on using YouTube’s content in training its own powerful AI model, Gemini.

He stated that while a portion of YouTube’s content may be used to train AI models like Gemini, YouTube and Google ensure that such use complies with the terms of service or contracts agreed to by creators.

This statement underscores the importance of adhering to platform rules even when utilizing user-generated content for technological advancements.

OpenAI’s new voice engine raises safety concerns

OpenAI recently unveiled Voice Engine, an AI model that mimics any voice in any language using a brief audio sample.

However, due to safety concerns, it has not been made available for public use.

The company acknowledged the risks associated with such technology and stated it is collaborating with stakeholders to ensure responsible use.

OpenAI’s usage policies forbid impersonating organizations or individuals without legal right or consent, and require partners to clearly disclose that the voices heard are AI-generated.
