
AI deepfakes are more dangerous than you think

With elections taking place around the world, most notably in the US in November, concerns are growing about how ubiquitous AI deepfakes are becoming online.

Ever since the race to build the most advanced artificial intelligence models began with ChatGPT’s debut, people have cycled through a plethora of emotions – from initial curiosity and awe at how far the technology has come, to laughter at how silly it can be, to very real anxiety about how it could replace many jobs in the future.

But alongside these emotions, there have also been growing concerns about how advancements in the technology, particularly generative AI, are setting the stage for a not-so-distant future that could leave even the writers of Black Mirror wanting.

Earlier this year, OpenAI unveiled Sora, its text-to-video model that can create vivid, almost dream-like visuals from a user’s text prompts. While Sora is not yet available to the public, similar AI technologies are already being used to create another, more worrying form of video: deepfakes.

Centre stage

Tim Callan, chief compliance officer at Sectigo, spoke to SiliconRepublic.com earlier this year about the rising sophistication of AI deepfakes, which may have been easier to spot in the past but are now increasingly difficult to distinguish from real content.

Callan said there are many ways this type of AI-generated content could be used against politicians or political parties in a way that “changes the impression that the average voter is going to have” of them. “Usually, this is something defamatory,” he said. “They want to make that politician look bad. But it could be the opposite. There was a rather famous deepfake here in the US during the New Hampshire primary, where a deepfake voice of Joe Biden was being used.”

Just last week, AI deepfakes found themselves centre stage in public discussion yet again when former US president and presidential candidate Donald Trump shared multiple fake images on Truth Social – his alternative to X – including one showing musician Taylor Swift with the caption “Taylor wants you to vote for Donald Trump”. Swift has not publicly endorsed any candidate in the upcoming US elections in November.

Trump even shared a fake image of fellow presidential candidate Kamala Harris speaking to a crowd at the Democratic National Convention in Chicago, with a large Soviet Union flag among the crowd, all of whom are wearing similar military uniforms. Last month, Elon Musk – now a major supporter of Trump – also shared a fake campaign video of Harris that used an AI-generated version of her voice.

“It’s alarming to see the rise of deepfake technology now being used to mimic news anchors and politicians to spread misinformation,” Callan said.

“People don’t realise how far AI deepfake technology has come and how democratised the technology is. Unfortunately, anything about your physical appearance can be replicated, ie eyes, face, voice. This is no longer something that only exists in films, as more people are now capable of creating convincing deepfakes.”

No Fakes

A few weeks ago, a group of US senators introduced the No Fakes Act, which would make it illegal to create voice and visual likenesses of people, such as AI deepfakes, without their consent.

The bill, endorsed by OpenAI, IBM, Disney and the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA), aims to hold individuals or companies liable for damages for producing, hosting or sharing AI deepfakes of people in audiovisual content that they “never actually appeared in or otherwise approved”.

This means that a person or group responsible for a deepfake would have to take it down after receiving notice from the victim. The bill excludes documentaries and biographical works, as well as uses for criticism and parody, in line with US First Amendment protections.

Callan thinks that AI deepfakes will become a “mainstream component” of phishing and social engineering attacks, to the extent that they will make global headlines.

“Our ability to trust the genuine nature of any apparent recording of reality, such as an image, video or audio file, will be completely destroyed,” he said. “Unfortunately, the public’s understanding of this complete loss of reliability in previously trusted media types will lag behind reality, and many people will become victims of scams as a result.”

According to Callan, as the technology landscape “dramatically” changes, so must people’s mindsets when consuming media. “[We] must now exercise more caution than ever in what [we] watch and reconsider the validity of the source and its trustworthiness.”

