
AI Content Regulation Survey: Consumers Support Government-Mandated Labeling

A new HarrisX survey shows two stark, parallel trends: U.S. consumers’ unease with the increasing sophistication of AI-generated content and their belief that the federal government must create guardrails.

A majority of respondents, surveyed in early March, want regulation enacted that would require video, photos, text and other formats to be labeled to identify what is AI-generated.

“Fully AI-created video” was the AI content that commanded the strongest response, with 74% of U.S. adults saying the government should require it to be labeled. The same survey also included a test in which respondents were shown eight unlabeled videos and asked to guess whether they were generated by AI or humans. They often struggled to answer correctly.

Even for music, captions and sounds, over 6 in 10 respondents felt the content should be labeled if it was generated by AI. These results didn’t vary dramatically across age groups.

Beyond demands for content-labeling regulation, Americans were also concerned about generative AI’s impact on employment, with 76% believing the government should enact regulation giving Americans job protections. Just 24% took the opposing view that such regulation would hurt innovation and new job creation.

When respondents were asked to consider and rank a wide range of potential regulations the government could enforce, accountability rules that would hold developers responsible for AI outputs topped the list.

The HarrisX survey questions about consumer preferences for regulation reflect a current state of affairs in which such rules don’t yet exist in the U.S., though they could soon be forthcoming. President Biden’s October 2023 executive order on AI urged measures to protect Americans from AI-enabled fraud and deception by establishing standard methods to detect synthetic content and authenticate “official” (non-AI-generated) content.

The order suggested that detection be enabled by watermarking (adding cryptographic tags or metadata to the outputs of AI systems). Shortly after the order, members of Congress introduced the AI Labeling Act, which would require “clear and conspicuous” disclosure of AI-generated content across all media types.
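To make the watermarking idea concrete, here is a minimal sketch in Python of metadata-based provenance tagging. It is only an illustration: the field names, the signing key, and the HMAC scheme are invented for this example and do not follow C2PA Content Credentials, SynthID, or any other standard discussed here.

```python
# Illustrative sketch of metadata-based provenance tagging. NOT the C2PA
# Content Credentials format or any official standard; the "provenance" and
# "provenance-hmac" fields and the signing scheme are invented for this demo.
import hashlib
import hmac
import json

from PIL import Image
from PIL.PngImagePlugin import PngInfo

SECRET_KEY = b"demo-signing-key"  # a real system would use managed keys


def tag_image(in_path: str, out_path: str) -> None:
    """Attach a signed 'AI-generated' provenance record to a PNG's metadata."""
    img = Image.open(in_path)
    record = json.dumps({"generator": "example-model", "ai_generated": True})
    # Sign the record so later edits to the label are detectable.
    sig = hmac.new(SECRET_KEY, record.encode(), hashlib.sha256).hexdigest()
    meta = PngInfo()
    meta.add_text("provenance", record)
    meta.add_text("provenance-hmac", sig)
    img.save(out_path, "PNG", pnginfo=meta)


def verify_image(path: str) -> bool:
    """Return True if a provenance record is present and untampered."""
    img = Image.open(path)
    record = img.text.get("provenance")
    sig = img.text.get("provenance-hmac")
    if record is None or sig is None:
        return False  # no label at all
    expected = hmac.new(SECRET_KEY, record.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```

The example also makes a known limitation visible: because the label lives in file metadata, simply re-encoding the image (re-saving it in another format, for instance) strips the tag entirely, which is one reason watermarking alone has proven insufficient, as noted below.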

Leading AI tech providers have voluntarily committed to watermarking and labeling AI-generated content, with notable rollouts including the Content Credentials standard developed and put forth by the Coalition for Content Provenance and Authenticity (C2PA) and Google DeepMind’s SynthID for AI-generated images.

Still, watermarking alone has repeatedly proven insufficient as a technical defense against proliferating deepfake content, particularly in isolation from strong detection systems.

Debate has circled the question of whether the liability shield of Section 230 should extend to generative AI systems, as it currently does for online platforms (such as social media) that host user-generated or third-party content. 

Yet even if regulation held generative AI services accountable for their outputs, enforcing such rules would be no less significant or challenging than the effort of AI content detection itself.

As AI-generated and AI-modified content proliferates online, it’s clear Americans believe regulation is critical to creating guardrails around the technology. 

