Deepfakes Get Weaponized in the Gaza War

The war on information is growing. The battle between Israel and Hamas has pushed it into a new realm.

Samuel Greengard, Contributing Reporter

November 10, 2023

[Image: Israel and Hamas flags on a cracked wall. Credit: Ruma Aktar via Alamy Stock]

At a Glance

  • Groups promoting one ideology or another have tapped AI to generate fake images and videos that circulate online.
  • The ability to generate false content is escalating, but flagging and preventing its spread is time-consuming and difficult.
  • Forensics experts say it’s next to impossible to monitor sites and services and keep up with the assault of fake images.

Wars have always been fought with the underlying idea of capturing the hearts and minds of the public. Propaganda has long been a weapon in convincing people that a country or group’s actions -- and sometimes atrocities -- are acceptable and justified.

Yet, the war in Gaza has pushed the concept to a new level. Groups promoting one ideology or another have tapped artificial intelligence to generate fake images and videos that circulate online. “We have seen a large number of AI-generated images,” says Hany Farid, a professor at the UC Berkeley School of Information and a leading expert on digital forensics and AI.

Indeed, one widely shared image showed soccer fans in a stadium in Madrid, Spain, holding a giant Palestinian flag. It was immediately deemed fake by digital forensics experts. Another video depicted a so-called “crisis actor” pretending to be seriously injured in a hospital bed one day and fully recovered the next. However, the footage showed two different people.

On the other hand, a graphic image of a charred body -- presumably a child -- was dismissed by many online as an AI-generated image, though experts such as Farid say it was real. Although it isn’t entirely clear whether these false and manipulated images are swaying public opinion, it is apparent that the shockwaves are rippling through society.


“If anything can be fake, then nothing has to be real,” Farid explains. “Users are growing suspicious of everything they read and see online. This makes it difficult to reason about a fast moving and complex world.”

Calling the Shots

A constant bombardment of false news, fake images, and deepfake videos is taking a toll. Trust in news media, social media sites, and other institutions is in decline. Overall, 68% of US adults say made-up news and information greatly impacts their confidence in institutions, and 63% report that fake or altered videos and images create a great deal of confusion around current issues and events.

“Even a small number of fake content tends to pollute the entire information ecosystem, resulting in real content being flagged as potentially fake,” states Siwei Lyu, a professor in the Department of Computer Science and Engineering at SUNY University at Buffalo and Director of the UB Media Forensic Lab. In addition, “Anyone can claim real media as fakes if they are not consistent with their opinions.”

AI generators that produce fake images and videos include applications such as Midjourney, Stable Diffusion, DeepSwap, and OpenAI’s DALL-E. Voice clones used for fake audio come from the likes of ElevenLabs, while video-based tools used to generate deepfakes derive from a variety of open-source AI tools, Farid says.


While the ability to generate convincing false content is escalating, flagging that content and preventing its spread is time-consuming and difficult. Forensics experts typically examine images and videos for telltale artifacts -- unnatural shadows, unusual shapes and lines, and other inconsistencies.

Farid, for example, has developed a specialized suite of tools that automate parts of the analysis process, though many images and videos require manual inspection. “No tools are perfect but when they are combined, they are fairly effective at spotting fake and altered images,” Farid says.

Image Is Everything

Of course, the Israel-Hamas war is simply the latest ideological battleground in an ongoing information war. Fake imagery has already been used to sway public opinion in the war in Ukraine, and deepfakes are now common in politics, business, and beyond. In fact, Forrester has warned enterprises that deepfake scams aimed at fraud, stock price manipulation, brand and reputational damage, and employee well-being are now realistic threats.

Lax moderation policies and often-incorrect labeling at social media sites like X (formerly known as Twitter) haven’t made things any easier. Digital forensics experts say it’s next to impossible to monitor sites and services and keep up with the constant assault of fake images -- which are intentionally or inadvertently spread by media pundits, social media influencers, and culture warriors looking to manipulate thinking.


Publicly available tools that promise to identify fake images and videos produce mixed results. For instance, AI or Not, a free AI image detector from a company called Optic, labeled the image of a burnt corpse from the war as AI-generated, but Farid’s analysis found that it shows no signs of being fake. He says that widely available deepfake detection tools often fail -- and they usually don’t provide any insight into how they arrive at their decisions.

Getting to more advanced detection methods is difficult, Lyu says. One detection approach that he and other researchers are studying uses machine learning to identify a suspicious subset of images from a group of videos, images, and audio. He says the approach currently achieves about 90% accuracy -- though the post-processing and compression used by social media sites cause that figure to drop.
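To make the approach concrete, here is a minimal, hypothetical sketch of that kind of machine-learning triage, written in Python with scikit-learn. It assumes the hard part -- extracting forensic feature vectors from media files -- has already been done and substitutes random data for those features; the classifier, feature dimensions, and thresholds are stand-ins for illustration, not Lyu's actual method.

```python
# Hypothetical sketch: train a binary classifier on feature vectors from known
# real and known fake media, then surface the most suspicious unlabeled items
# for human review. Random data stands in for real forensic features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in feature vectors: 500 labeled examples, 64 features each.
X = rng.normal(size=(500, 64))
y = rng.integers(0, 2, size=500)  # 1 = known fake, 0 = known real

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Score new, unlabeled media and flag only the highest-probability fakes
# so that human analysts can inspect them first.
new_media = rng.normal(size=(20, 64))
fake_prob = clf.predict_proba(new_media)[:, 1]
suspicious = np.argsort(fake_prob)[::-1][:5]
print("Flag for review:", suspicious, fake_prob[suspicious].round(2))
```

The point of a triage step like this is not a final verdict but a ranked queue, so that scarce human review time goes to the items most likely to be fake.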

Digital watermarking technology could also help identify fake imagery and mitigate its spread, though watermarks can be intentionally manipulated, damaged, or removed, Lyu says.

Still another idea, and one that could prove useful for established media outlets, is to extract statistical features -- unique signatures embedded in the underlying data -- from photos and videos. This data can be stored in a blockchain in the cloud. It’s then possible to authenticate a photo or video as it appears at different sites.
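Below is a minimal sketch of that publish-then-verify idea, with two loudly labeled simplifications: a plain SHA-256 hash stands in for the richer statistical signature described above, and an in-memory dictionary stands in for the blockchain ledger.

```python
# Sketch of publish-then-verify authentication: an outlet records a signature
# for a photo at publication time; anyone can later recompute the signature of
# a copy found elsewhere and check it against the ledger. The hash and the
# dict are simplifying assumptions, not a production design.
import hashlib

ledger: dict[str, str] = {}  # signature -> publisher/metadata


def signature(media_bytes: bytes) -> str:
    """Compute a signature from a photo or video file's raw bytes."""
    return hashlib.sha256(media_bytes).hexdigest()


def register(media_bytes: bytes, publisher: str) -> str:
    """Record the signature in the ledger at publication time."""
    sig = signature(media_bytes)
    ledger[sig] = publisher
    return sig


def authenticate(media_bytes: bytes) -> str | None:
    """Return the registered publisher for a copy, or None if unverifiable."""
    return ledger.get(signature(media_bytes))


original = b"...raw image bytes..."
register(original, "Example News Photo Desk")

print(authenticate(original))                      # "Example News Photo Desk"
print(authenticate(b"...altered image bytes..."))  # None -> cannot be verified
```

An exact hash like this breaks as soon as a platform re-compresses or resizes a file -- the same post-processing that erodes detector accuracy -- which is why researchers look for statistical signatures that survive such transformations.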

On the Firing Line

Meanwhile, the Israel-Hamas war drags on, and new deepfakes are increasingly intermingled with real videos -- though the real footage sometimes comes from a different event or war. Experts often have to decipher events through reverse image searches and other methods.

Ultimately, Lyu is concerned that before society can arrive at reliable methods for identifying deepfakes, we may hit a tipping point where the volume of fake content outpaces society’s ability to detect and defuse it, and the line between reality and fiction becomes entirely blurred. For example, AI company Accrete, in conjunction with Bloomberg, recently reported that five accounts aligned with Hamas attempted to discredit actual videos of the war.

Concludes Lyu: “In the not-too-distant future, ordinary users may have access to more easy-to-use and ready-made tools to manipulate media as they use Photoshop to edit images today. Attacks might become more granular, specific, and potentially more effective.”

About the Author

Samuel Greengard

Contributing Reporter

Samuel Greengard writes about business, technology, and cybersecurity for numerous magazines and websites. He is author of the books "The Internet of Things" and "Virtual Reality" (MIT Press).
