Generative AI will create a ‘tsunami of disinformation’ during the 2024 election

With the 2024 US presidential election just under a year away, AI experts are sounding a warning about the potential for a massive upswing in disinformation and misinformation campaigns using newly available generative AI technologies.

“There’s going to be a tsunami of disinformation in the upcoming election,” Darrell West, senior fellow at the Center for Technology Innovation at the Brookings Institution, told Yahoo Finance.

“Basically, anybody can use AI to create fake videos and audio tapes. And it’s going to be almost impossible to distinguish the real from the fake,” he said.

Generative AI technologies exploded in popularity with the launch of ChatGPT in November 2022. Since then, major tech companies including Microsoft (MSFT), which invested billions in ChatGPT creator OpenAI, Amazon (AMZN), Google (GOOG, GOOGL), Meta (META), and Adobe (ADBE) have debuted or announced they’re working on their own AI platforms.

And while those companies have strict rules around how their generative AI apps can be used, experts fear that other generative AI services designed to create text, images, and video could give malicious actors or state-sponsored entities an easy means to sow discord among potential voters. It could also lead some Americans to stop trusting news sources entirely.

What’s more, it will put more pressure on social media companies to take the kind of stand against disinformation they took around the 2020 election, when they worked to remove fake content related to the attack on the Capitol and the outcome of the election itself.

US Vice President Kamala Harris delivers a policy speech on the Biden-Harris Administration’s vision for the future of Artificial Intelligence at the US Embassy in London, Nov. 1, 2023. (Kin Cheung/AP Photo)

“It absolutely is going to be wild,” said Jen Golbeck, a professor of information studies at the University of Maryland. “They’re going to see this glut of shared-in-bad-faith content from people on social media and just kind of throw their hands up and be like, ‘It’s all a mess. I don’t even know.’ And then entirely disengage from the really important issues that are going on.”

Generative AI is already being used to create fake political images

Disinformation and misinformation aren’t new. There’s so much online already that double-checking anything you see on social media, or that weird Uncle Bill tells you, is a must. But generative AI will allow bad actors to create new phony content at a much faster pace than before.

The idea of generative AI being used in a political context isn’t some unfounded fear or far-off eventuality. It’s happening already. In the days leading up to former President Trump’s arrest in New York in April, a slew of AI-generated images depicting him running from police and being gang tackled splashed across social media sites around the world.

Then in May, an image of an explosion outside the Pentagon sent a shock through the stock market before it was determined to be a fake, likely generated using AI. Generative AI images and video have also been used to manipulate public opinion in the wars in both Ukraine and Gaza.

“There has been a lot in the past few weeks of disinformation spread on social media about the conflict happening in Palestine and that is something that I think can be completely accelerated by AI generated text and images,” Derek Leben, associate teaching professor of ethics at Carnegie Mellon University’s Tepper School of Business, told Yahoo Finance.

“So yes, I think that it is justified to be very concerned about this. Unfortunately, there are not a lot of easy solutions to this problem. Even if the platforms are trying to crack down on it, even if the regulators are trying to crack down on it, it is by its very nature very difficult to detect and therefore very difficult to enforce any kind of restrictions on,” Leben added.
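
To see why enforcement is so hard, consider how few cheap signals a platform even has. Below is a minimal sketch in Python, assuming only the Pillow imaging library (the file name is a placeholder), that checks an image’s EXIF “Software” tag, one of the only provenance hints an uploaded file might carry, and shows how little it proves either way.

    # Minimal sketch: EXIF metadata is one of the few cheap provenance
    # signals, and it is routinely missing or stripped. Assumes the
    # Pillow library; the file path is a placeholder.
    from PIL import Image

    def provenance_hint(path: str) -> str:
        exif = Image.open(path).getexif()
        software = exif.get(0x0131)  # EXIF tag 305: "Software"
        if software:
            return f"Tagged as produced by: {software}"
        # Absence proves nothing: real photos and AI fakes alike often
        # carry no metadata at all once reposted to social media.
        return "No provenance metadata; authenticity cannot be inferred"

    print(provenance_hint("suspect_image.jpg"))

Most social platforms strip metadata from uploads anyway, so even this weak signal usually vanishes by the time an image spreads, which is part of why restrictions are so difficult to enforce.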

Confident that this picture claiming to show an “explosion near the pentagon” is AI generated.

Check out the frontage of the building, and the way the fence melds into the crowd barriers. There’s also no other images, videos or people posting as first hand witnesses. pic.twitter.com/t1YKQabuNL

— Nick Waters (@N_Waters89) May 22, 2023

Some political campaigns have already begun using AI, too. According to ABC News, a super PAC supporting Florida Gov. Ron DeSantis released an ad that used an AI-generated version of former President Trump’s voice to make it sound as though he were reading aloud posts he had written on social media.

While it might seem easy to tell whether an image or video is fake, the sheer amount of disinformation online can, over time, erode trust in legitimate sources of information, especially among voters who aren’t partisan or constantly glued to their preferred news sites.

“The really concerning part for me is this middle space of people who aren’t super news wonks, that aren’t deeply invested in politics, that just want to get their information in an easy way, and are on social media and are seeing all of this stuff, some of which they’ll go like, ‘Well, that’s obviously fake.’ Or, ‘Man, I read that article, and it just sounds kind of crazy and it doesn’t make any sense,’ ” Golbeck explained.

Companies are taking action, but need to do more

Try using OpenAI’s DALL-E to create an image of Biden or Trump, and you’ll get a notification telling you the app can’t make such images. You’ll get the same response from Adobe’s Firefly and other mainstream generative AI platforms. As companies roll out new technologies, they’re increasingly putting up roadblocks to keep users from generating potentially dangerous content.
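
For a sense of what those roadblocks look like to a developer, here is a minimal sketch using OpenAI’s Python SDK. The prompt and error handling are illustrative assumptions rather than guaranteed behavior in every case; the point is that a request like this gets rejected by the platform’s content policy instead of producing an image.

    # Minimal sketch: asking a mainstream image API for a political
    # deepfake and handling the refusal. Assumes the OpenAI Python SDK
    # (v1.x) and an OPENAI_API_KEY in the environment.
    from openai import OpenAI, BadRequestError

    client = OpenAI()

    try:
        client.images.generate(
            model="dall-e-3",
            prompt="Photorealistic image of a named political candidate being arrested",
            n=1,
            size="1024x1024",
        )
    except BadRequestError as err:
        # Disallowed prompts are refused up front; no image is returned.
        print(f"Refused by content policy: {err}")

Because the refusal happens server-side, it can’t be bypassed by editing the client code.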

But there are plenty of services on the internet that will let you make those images. And once they’re online, they’re there forever.

That’s where social media companies come in. If there’s any way to help keep generative AI-based disinformation and misinformation offline, social media platforms like Facebook, Instagram, and X (formerly Twitter) will need to take steps to keep users from spreading such content.

Sen. Mike Rounds (R-S.D.), Senate Majority Leader Chuck Schumer, and Sen. Todd Young (R-Ind.) speak after a bipartisan Senate forum on artificial intelligence on Capitol Hill, Nov. 8, 2023, in Washington. (Alex Brandon/AP Photo)

Meta, so far, has said that advertisers will need to disclose when they use generative AI to alter political ads. The company has also barred political advertisers from using Meta’s own generative AI ad tools entirely.

But the company and rival X have also cut back on their election integrity teams, the groups meant to help counter disinformation.

“They should take content moderation very seriously and take down content that is seriously inaccurate,” West said.

“The problem is that in the last year or so, the tech companies have moved in the opposite direction. The guardrails that actually worked … over the last few years are being dismantled and that exposes the public to tremendous risk,” he added.

How the platforms address the risk generative AI poses to the 2024 election could be just as impactful as their earlier work tamping down election denial and COVID-19 misinformation. Regardless of what the companies do, the election is coming, and so is the disinformation.

Daniel Howley is the tech editor at Yahoo Finance. He’s been covering the tech industry since 2011. You can follow him on Twitter @DanielHowley.
