Extremist groups are turning to artificial intelligence, particularly generative AI, to amplify their propaganda. Experts warn that generative AI could undermine the strides Big Tech has made in content moderation. Tech Against Terrorism's Adam Hadley puts the concern bluntly:
“If terrorists start using gen AI to manipulate imagery at scale, this could well destroy hash-sharing as a solution.”
For years, Big Tech has relied on shared hashing databases, collections of digital fingerprints of known extremist material, to remove that content swiftly. Tech Against Terrorism, however, is now identifying around 5,000 pieces of AI-generated content each week, including material from groups associated with Hezbollah and Hamas aimed at shaping the narrative around the Israel-Hamas conflict.
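To see why manipulated imagery threatens this approach, consider what exact hash matching actually does. The sketch below (illustrative only; platforms in practice use perceptual hashes such as Microsoft's PhotoDNA, which tolerate small edits) uses a cryptographic SHA-256 digest as a stand-in fingerprint and shows that changing a single bit of the underlying bytes, the equivalent of nudging one pixel, produces an entirely different hash, so the altered copy no longer matches the database entry:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Exact (cryptographic) hash: any change yields a different digest."""
    return hashlib.sha256(data).hexdigest()

# Stand-in for the raw bytes of a known extremist image (illustrative data).
original = bytes(range(256)) * 64

# An AI-manipulated copy: here, just one bit flipped.
altered = bytearray(original)
altered[0] ^= 0x01

h_original = fingerprint(original)
h_altered = fingerprint(bytes(altered))

print(h_original == h_altered)  # False: the edited copy evades exact matching
```

Perceptual hashing narrows this gap by fingerprinting visual features rather than raw bytes, but sufficiently large AI-driven transformations can still push an image outside the match threshold, which is the scenario Hadley describes.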
Hadley predicts this will soon threaten automated moderation, saying,
“Give it six months or so, the possibility that [they] are manipulating imagery to break hashing is really concerning.”
Recent findings by Tech Against Terrorism reveal extremist use of AI extends to neo-Nazi messaging channels, far-right guides on memetic warfare, and the Islamic State providing tech support for generative AI tools.
The risks aren’t limited to image manipulation. Tech Against Terrorism’s report also highlights auto-translation tools that convert propaganda into multiple languages and the creation of personalized messages for online recruitment. Hadley believes AI can also be a proactive tool against extremists.
In a bid to counter the emerging threat, Tech Against Terrorism is partnering with Microsoft to develop a gen AI detection system. Hadley states, “We’re confident that gen AI can be used to defend against hostile uses of gen AI.” This initiative coincides with the Christchurch Call Leaders’ Summit, a movement aimed at eradicating online terrorism.
The collaboration aims to enhance the capabilities of smaller platforms lacking AI research centers. Hadley stresses the importance for these platforms, saying, “Even now, with the hashing databases, smaller platforms can just become overwhelmed by this content.”
AI-generated content reaches beyond extremist groups. The Internet Watch Foundation reports a surge in AI-generated child sexual abuse material on the dark web, identifying more than 20,000 such images in a single month, many of them indistinguishable from real imagery. The threat posed by generative AI tools demands a concerted effort to safeguard online spaces.