Since the Oct. 7 attack by Hamas militants against Israel, it’s become near-impossible to avoid images of the carnage and devastation that have befallen Israel and the Palestinians living in the besieged Gaza Strip. Online, it’s already difficult to sift through the barrage of misinformation, recycled footage from past conflicts, and contradictory narratives to ascertain what is really happening on the ground — and the emerging technology of artificial intelligence is adding a whole new layer of complexity to the problem.
AI-generated images and video related to the ongoing conflict are running rampant on social media. Among the flood are ham-handed attempts at agitative propaganda, hate-fueled memes targeting Jewish people, and intentionally crafted efforts to deceive the public.
“To the extent that people continue to trust information from unvetted sources, [AI] massively exacerbates the existing problem,” Nathaniel Sharadin, a professor at Hong Kong University and fellow at the Center for AI Safety, tells Rolling Stone. “Without a doubt the lowered bar for producing very credible but fake content will just generate more, and higher quality, image, audio, and video fakes, and that people will encounter much, much more of it. But we don’t know what the result of that will be,” he adds, calling the rise of virtually unregulated artificial intelligence tools an “experiment on ourselves.”
We are already seeing that experiment play out in real time. One of the most widely disseminated instances of misinformation spread via AI-generated media was a claim that President Joe Biden would be opening the U.S. Selective Service draft to women. The claim resurfaced amid the Israel-Hamas conflict via a deepfake video created in February, which depicts President Biden announcing he will revive the draft. “Remember,” fake-Biden says, “you are not sending your sons and daughters to war. You are sending them to freedom.”
The video, and other baseless rumors that women would be drafted into the military, went so viral that they generated a satirical TikTok trend in which women imagined what it would be like to serve. Thousands of videos, many of them racking up millions of views, were attached to hashtags like #WomenDraft.
Another AI deepfake video, falsely depicting climate activist Greta Thunberg advocating the use of sustainable military technology and “biodegradable missiles,” garnered millions of impressions on Twitter after being shared by figures like Pizzagate conspiracy theorist Jack Posobiec. Despite a small watermark hinting that the digitally altered video was “satire,” slews of users in the comments reacted credulously to the clip, and others struggled to discern if it was real.
Hananya Naftali, an Israeli influencer who formerly handled social media as a member of Israeli Prime Minister Benjamin Netanyahu’s communications team, caused similar confusion when he posted what he claimed were photos of Hamas leaders “living luxurious lives.” Because the images had the kind of blurring sometimes produced by AI models — and Naftali himself has a history of bending the truth, as when he falsely claimed earlier this month that he had been called up for active IDF duty — many accused him of faking the content.
But, as Forbes reported, the photos were authentic, although Naftali had filtered them through an AI “upscaler” tool in an attempt to improve their resolution. The resulting images, which have been viewed more than 20 million times on X (formerly Twitter), look like they could have been created with a text prompt in a program like DALL-E or Stable Diffusion. And that’s exactly what Naftali’s critics assumed.
Dear Palestinians,
While the leaders of Hamas are living luxurious lives enjoying good lives, they ask you to sacrifice yourselves and your children.
Hamas doesn’t care for the Palestinians. Hamas is the enemy of the Palestinian people.
To the Palestinians,
At a time when the leaders… pic.twitter.com/l30I0CDLcw
— Hananya Naftali (@HananyaNaftali) October 20, 2023
Other deceptive AI illustrations are being used to garner sympathy for either Gazans or Israelis, or suggest the resilience and solidarity of either group. AI-spawned images have conjured up fake crowds of Israelis marching through the street, waving flags and cheering from the windows of buildings in huge numbers, presumably in support of their government. One image apparently shared by a “Wartime Media” Telegram channel shows a nonexistent refugee camp for Israelis.
And though many authentic, heart-rending photos and videos have come out of the Gaza Strip as Israel continues to rain bombs on the region, various pictures making the rounds are actually AI creations that play on the emotions of viewers by placing children in dire circumstances. Some of it is labeled as AI art — an illustration of a Palestinian girl holding a teddy bear as a fire rages behind her, for example. But the posts that gain traction, like this AI-generated image of a toddler watching an apartment explode around him, are typically presented as real.
Making matters worse, misleading images are occasionally amplified by sources that seem trustworthy. Tunisian journalist Muhammad al-Hachimi al-Hamidi recently shared an image of smiling Palestinian children covered in ash, the ruins of a neighborhood behind them. The uncanny picture appears in no actual media outlet — only on various social accounts.
The plague of AI-derived war content goes beyond misinformation and includes more directly hateful kinds of propaganda. Because squadrons of Hamas fighters attacked Israel via paragliders, neo-Nazis have adopted the vehicle as a symbol glorifying the murder of Jews, using AI art models to paint Hamas gliders getting the drop on antisemitic caricatures. One paraglider meme has even made it onto a T-shirt design in a Nazi e-commerce shop.
Palestinian supporters were also outraged by an Instagram user who posted concept art for a massive, gleaming new theme park on the Gaza Strip. “I present to you the new tourist and vacation city in the south that is going to be built soon in Israel: Nova,” he wrote, adding an Israeli flag emoji. Replying to a commenter who asked, “When can you buy real estate there?” he answered, “Right now they are clearing the ground.”
Around the world, tensions have likewise been stoked with bogus imagery purporting to show how other nations have signaled their alignment with either Israel or Palestine. The city of Paris was said to have lit the Eiffel Tower with the colors of the Israeli flag on Oct. 8 — but these were doctored photos, and when the structure was lit with Israel’s colors the following night, it looked considerably different. A manipulated video led to false claims that the Israeli flag could be seen on the exterior of the Las Vegas Sphere, leading representatives for the concert venue to deny that they had arranged such a display. Another phony picture showed fans of the Spanish soccer team Atletico Madrid holding up a gigantic Palestinian flag; fact-checkers debunked it, noting the image was likely AI-generated.
Between meager hopes for an imminent ceasefire in Gaza and the ease with which people can create and spread AI art that furthers their geopolitical agenda, this element of the misinformation scourge isn’t going anywhere. Paradoxically, heightened awareness of it can lead internet users to not only catch counterfeit images but disbelieve real ones, further muddling the contested narratives of the ongoing war. Even AI image detectors may wrongly discredit a genuine photo as fake.
“There’s nothing contrary to the [terms of service of] many distribution platforms about posting fake images of this sort — even ones that are unlabeled,” Sharadin says. “The decision to flag, remove, and otherwise restrict access to this content is down to large platform providers such as Google, Meta, X, etc.”
Sharadin adds that social media platforms’ ability to combat AI-related misinformation and abuse correlates pretty directly with the two factors that most heavily influence their general enforcement policies: “resources” and “will.”
While some improvements have been made, there are still few effective end-user tools to help individuals quickly spot AI-generated content. “Techniques for spotting the bot, such as digital fingerprinting, or watermarking, etc., only work (and many times they simply don’t work) to enable large model developers to credibly say that a piece of content was not generated using their model,” Sharadin tells Rolling Stone. “They do not enable end-users to be able to tell whether some piece of content was in fact generated by a model. But the fact that a developer can say ‘not my model’ doesn’t help the public at all: what we want is a way to say whether something is generated by a model or by a human!”
Meanwhile, social media moderation teams, inundated with sensitive content that may need to be flagged or removed, have virtually no recourse in managing the problem. Apart from X’s Community Notes, which allows users to label an image as AI-made, you’d be hard-pressed to find any platform adding that context — and even that system is failing to keep pace with known misinformation and propaganda. By the time anyone has confirmation of an AI picture, it’s often far too late, as thousands could have seen and reposted it. The manipulator behind it, of course, is already on to their next project.