
Meta to set up Elections Operations Centre to combat AI content misuse

Meta CEO Mark Zuckerberg. | Image: AP

Meta will remove misinformation and curb the misuse of AI-generated content, cracking down on material that could affect voting or instigate violence, and will use fact-checkers to flag fake, altered or manipulated content.

The Facebook and Instagram parent will set up an India-specific Elections Operations Centre, bringing together experts from across Meta to identify potential threats and roll out real-time mitigations across its apps and technologies.

The social media giant said it is developing tools to mark out AI-generated images from Google, OpenAI, Microsoft, and others posted by users on Facebook, Instagram and Threads.

Meta also reiterated its commitment to election integrity efforts as India approaches the Lok Sabha elections, with voting scheduled to be held in seven phases.

Meta said it strongly believes people should know when photorealistic content has been created using AI, adding that it already labels photorealistic images made using ‘Meta AI’ with visible markers on the images.

The feature also embeds invisible watermarks and metadata within the AI-generated image files.
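Meta has not published the details of its watermarking or metadata scheme. Purely as an illustrative sketch of the metadata half of such an approach, the Python snippet below uses the Pillow library to look for provenance hints inside an image file; the specific fields and values checked (such as the IPTC ‘trainedAlgorithmicMedia’ source type and generator names in the EXIF Software tag) are assumptions for illustration, not Meta's actual markers.

```python
# Hypothetical sketch: inspecting image metadata for AI-generation hints.
# This is NOT Meta's implementation; real generators use varying tags and values.
from PIL import Image


def find_ai_metadata(path: str) -> list[str]:
    """Return metadata entries that hint the image may be AI-generated."""
    hints = []
    with Image.open(path) as img:
        # An embedded XMP packet (if present) can carry provenance information.
        xmp = img.info.get("xmp", b"")
        if isinstance(xmp, bytes):
            xmp = xmp.decode("utf-8", errors="ignore")
        if "trainedAlgorithmicMedia" in xmp:  # IPTC DigitalSourceType value
            hints.append("XMP: DigitalSourceType = trainedAlgorithmicMedia")

        # The EXIF Software tag (0x0131) sometimes names the generating tool.
        software = str(img.getexif().get(0x0131, ""))
        if any(name in software for name in ("DALL-E", "Midjourney", "Stable Diffusion")):
            hints.append(f"EXIF Software tag: {software}")
    return hints


if __name__ == "__main__":
    for hint in find_ai_metadata("example.jpg"):
        print(hint)
```

Invisible watermarks, by contrast, are embedded in the pixel data itself and require the watermarking party's own detector; they are not recoverable with generic metadata tools like the one sketched above.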

“We are also building tools to label AI-generated images from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock that users post to Facebook, Instagram and Threads,” Meta said in a blogpost titled ‘How Meta Is Preparing For Indian General Elections 2024’.

“We are dedicated to the responsible use of new technologies like GenAI and collaborating with industry stakeholders on technical standards for AI detection, as well as combating the spread of deceptive AI content in elections through the Tech Accord,” it said.

Beginning in 2024, Meta said it also requires advertisers worldwide to disclose, in certain cases, when AI or other digital methods are used to create or alter an advertisement on a political or social issue.

The requirement applies to a digitally created ad containing a photorealistic image or video, or realistic-sounding audio, that was created or altered to depict a real person saying or doing something they did not say or do.

It also applies when an ad portrays a realistic-looking person who does not exist, or a realistic-looking event that did not take place, or depicts a real event but is not a true image, video or audio recording of that event.

India, the world's most populous nation, heads into elections in about a month. Against this backdrop, Meta said it will continue its efforts to curb misinformation, remove voter interference, and enhance transparency and accountability on its platforms to support free and fair elections.

Meta said it works closely with the Election Commission of India, having joined the Voluntary Code of Ethics in 2019.

This gives the commission a high-priority channel to report unlawful content to the company.

“We remove the most serious kinds of misinformation from Facebook, Instagram and Threads, such as content that could suppress voting, or contribute to imminent violence or physical harm. During the Indian elections, based on guidance from local partners, this will include false claims about someone from one religion physically harming or harassing another person or group from a different religion,” Meta said in the blogpost.

As elections approach, Meta said it will make it easier for all its fact-checking partners across India to find and rate election-related content.

Meta will use keyword detection to help fact-checkers find and rate misinformation more easily.
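Meta has not described how this keyword detection works. As a minimal, hypothetical sketch of the general idea, the snippet below flags posts whose text matches a small list of election-related terms so they can be queued for fact-checker review; the keyword list and matching rules are placeholders, not Meta's actual system.

```python
import re

# Hypothetical keyword list; Meta's actual terms and matching logic are not public.
ELECTION_KEYWORDS = ["lok sabha", "evm", "voter id", "polling booth", "ballot"]

# One case-insensitive pattern with word boundaries for simple matching.
PATTERN = re.compile(
    r"\b(" + "|".join(map(re.escape, ELECTION_KEYWORDS)) + r")\b",
    re.IGNORECASE,
)


def flag_for_review(posts: list[str]) -> list[tuple[str, list[str]]]:
    """Return (post, matched_keywords) pairs for posts mentioning election terms."""
    flagged = []
    for text in posts:
        hits = sorted({m.group(0).lower() for m in PATTERN.finditer(text)})
        if hits:
            flagged.append((text, hits))
    return flagged


if __name__ == "__main__":
    sample = ["EVM machines were swapped overnight!", "Great weather in Mumbai today."]
    for post, hits in flag_for_review(sample):
        print(hits, "->", post)
```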

“Indian fact-checking partners are the first amongst our global network of fact checkers to have access to Meta Content Library,” it said.

Underlining its efforts to counter the risks arising from GenAI misuse, Meta said it is aware of concerns that AI-generated content could be misused to spread misinformation, and that it is actively monitoring new content trends so it can update its policies.

“When we find content that violates our Community Standards or Community Guidelines, we remove it whether it was created by AI or a person,” Meta said.

AI-generated content, Meta said, is also eligible for review and rating by its network of independent fact-checkers.

The company's fact-checking partners are trained in visual verification techniques such as reverse image searching and analysing image metadata, which can indicate when and where a photo or video was taken.
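As a rough illustration of the metadata-analysis step (not the actual tooling used by Meta or its partners), the sketch below reads a photo's EXIF capture time and GPS fields with Pillow; note that most images re-shared on social platforms have this metadata stripped, so its absence proves nothing.

```python
# Illustrative sketch: reading the capture time and GPS data from a photo's EXIF.
from PIL import Image
from PIL.ExifTags import GPSTAGS


def when_and_where(path: str) -> dict:
    """Return the EXIF capture timestamp and GPS fields of a photo, if present."""
    result = {}
    with Image.open(path) as img:
        exif = img.getexif()

        # 0x8769 = Exif sub-IFD; 0x9003 = DateTimeOriginal (when the photo was taken).
        taken_at = exif.get_ifd(0x8769).get(0x9003)
        if taken_at:
            result["taken_at"] = taken_at

        # 0x8825 = GPS IFD (where the photo was taken).
        gps_ifd = exif.get_ifd(0x8825)
        if gps_ifd:
            result["gps"] = {GPSTAGS.get(tag, tag): value for tag, value in gps_ifd.items()}
    return result


if __name__ == "__main__":
    print(when_and_where("photo.jpg"))
```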

“They can rate a piece of content as ‘Altered’, which includes ‘faked, manipulated or transformed audio, video, or photos’. Once a piece of content is rated as ‘altered’, or we detect it as near identical, it appears lower in Feed on Facebook,” the blogpost said.

Meta said that on Instagram, altered content is filtered out of the Explore tab and shown less prominently in Feed and Stories.

(With PTI Inputs)

