
AI literacy is vital to combat disinformation and preserve trust in democracy, experts say

Artificial intelligence (AI) may erode public trust in democratic processes, leading experts said during the DemocracAI panel hosted by the Science, Technology and Society (STS) program on Democracy Day.

The panel featured STS director Paul Edwards, Center for Ethics in Society fellow Wanheng Hu, visiting professor Florence G’Sell from Sciences Po in France and visiting professor Iris Eisenberger from the University of Vienna in Austria.

“Democracy is not just about people choosing their rulers through elections,” said Jacob Hellman, an STS lecturer who kicked off the discussion.

Rather, he said, democracy depends on “discussion about how we want to live together. Controlling new forms of technology [is] part of how we choose to live together.”

The panelists considered how the latest innovations in AI, including generative models that can produce realistic text and images, could harm democratic norms and reshape politics.

Focusing on disinformation, Edwards noted that the rise of generative AI could allow malign actors to erode trust in democratic institutions. Misleading political content is far easier to produce with generative AI than with traditional tools like Photoshop. While AI-generated content is not flawless, "errors are becoming more subtle and harder to detect," especially in fake posts on social media, Edwards said.

Hu highlighted how the credibility of online content has declined over time. Certain populations, such as older adults and people with less digital experience, may place too much trust in fake content online, Hu said.

Before the age of social media, digital content carried credibility because only established organizations could publish information online. Panelists stressed that people may not recognize that, today, anybody can post online. Realistic but fake content can exploit that misunderstanding, Hu said.

G’Sell and Eisenberger emphasized the need for balanced AI regulation, acknowledging the role AI could play in combating disinformation. G’Sell said that private platforms have successfully used newer AI models to identify and combat AI-generated content. Edwards agreed that platforms can use these technologies to block far more fake content than slips through the cracks.

Eisenberger cautioned, however, that platforms could still use AI to strengthen targeted messaging and advertising, which could worsen filter bubbles and the public’s susceptibility to polarizing disinformation.

All the panelists agreed, ultimately, that combating these issues requires increased AI literacy. While the public does not need a technical understanding of how AI models work, Hellman said, teaching people to spot disinformation, from odd details in an AI-generated image to the telltale characteristics of a bot account, can protect trust in democratic institutions.
