Synthetic Threats: The Potential Misuse of Artificial Intelligence for Extremist Propaganda in Southeast Asia
Recent examples highlight the potential ways extremists could leverage artificial intelligence to create and distribute harmful content, underscoring the serious governance and security challenges facing regional governments in the digital age. As technology continues to outpace the development of laws and policies in many areas, managing these threats becomes increasingly difficult.
The rapid advancement of artificial intelligence (AI) technology has ushered in a new era of digital capabilities, transforming modern life. However, this technological revolution has also created new avenues for extremist groups to spread propaganda and recruit followers, including in Southeast Asia. This emerging trend presents significant challenges for security and online safety in a region already grappling with complex political and religious dynamics.
Extremist organisations, including the Islamic State (IS) and its affiliates, are increasingly leveraging AI tools to enhance their online presence and messaging capabilities. One notable example is the creation of AI-generated videos featuring synthetic spokespersons delivering extremist content. These videos can be produced quickly and at relatively high quality, allowing for increased output of propaganda materials. As AI technology continues to advance, the growing ability to create convincing digital avatars and deepfakes of real individuals adds a new layer of complexity, making it increasingly difficult for viewers to distinguish genuine content from fake.
In Southeast Asia, where internet penetration is high and social media usage is widespread, the potential for AI-enhanced extremist content to reach and influence audiences is particularly concerning. Countries like Indonesia, Malaysia, and the Philippines are already dealing with various extremist movements, and the introduction of AI tools could amplify these existing challenges.
AI-generated videos have appeared on social media depicting Jemaah Islamiyah (JI) leaders and the Bali bombers detailing their involvement in terror attacks. One TikTok video features an AI-generated portrayal of the late Dr. Azahari Husin, the Malaysian bomb-maker behind the Bali (2002, 2005) and Jakarta (2003, 2004) bombings, explaining his role in jihad alongside JI, which had ideological and financial ties to al-Qaeda. The video, apparently posted by an Indonesian content creator in 2023, received over 3.8 million views, 120,000 likes, and more than 2,000 shares on TikTok. Another TikTok video shows an AI-generated depiction of Noordin Mohamad Top, a Malaysian JI leader responsible for the 2009 Jakarta hotel bombings, giving an account of his actions. This video, also posted in 2023, received 3.8 million views and 90,500 likes. While these videos are clearly AI-generated, they can still captivate viewers and elicit psychological responses, potentially increasing the risk of radicalisation and the spread of extremist ideologies.
Figure 1. Screengrab from an AI-generated video featuring key JI members posted on TikTok

Source: TikTok @1001.kisah
In a study published by the Combating Terrorism Center at West Point, researchers highlighted the significant risks posed by the potential misuse of AI technology, especially generative AI, for extremist purposes. The study found that popular AI models, including OpenAI’s ChatGPT-4 and ChatGPT-3.5, Google’s Bard, Nova, and Perplexity, could be manipulated by extremists who discover the “right prompts”, including those known as “jailbreaks”, to generate persuasive content promoting extremist ideologies. Among these models, Perplexity was the most susceptible to such manipulation, showing the highest responsiveness to jailbroken prompts. (In this study, “jailbreak” refers to written phrases designed to bypass an AI model’s ethical safeguards and extract prohibited information.)
The potential misuse of AI technology for extremist purposes goes far beyond generating simple content; it extends into more sophisticated areas such as personalised interaction and engagement. A particularly concerning possibility is the creation of AI-driven chatbots designed to mimic deceased or incarcerated militants. Generative AI has already been used to “reincarnate” dead extremists in TikTok videos, and the next step could involve creating personalised chatbots. Such tools could engage vulnerable individuals on encrypted platforms like WhatsApp or Telegram, gradually building relationships and subtly steering them toward extremist ideologies or even inciting acts of violence. The use of messaging apps for extremism is not a new phenomenon in Southeast Asia. During the rise of Daesh (or ISIS) in the region, Indonesia went as far as temporarily blocking Telegram in 2017 to curb the spread of harmful propaganda and prevent the further dissemination of extremist ideas within society.
The rapid development and open-source nature of many AI models have outpaced regulatory efforts, raising concerns about the responsibility of technology companies and about governments’ ability to effectively monitor and control the use of these technologies for malicious purposes. In Southeast Asia, where regulatory frameworks for emerging technologies are still taking shape, this presents a significant challenge. Malaysia, Indonesia, and the Philippines have no specific AI laws in place, only guidelines that aim to govern the technology. While these guidelines are a promising start, their legally non-binding nature leaves room for misuse. Moreover, the varying levels of technological infrastructure and expertise across Southeast Asian countries create disparities in their ability to address AI-enhanced extremism. While more developed states like Singapore may have the resources to invest in advanced monitoring and countermeasures, others in the region may struggle to keep pace with these rapidly evolving threats.
The unique cultural, linguistic, and political landscape of Southeast Asia adds complexity to the issue of potential AI-enhanced extremism. The region’s diversity means that extremist content could be tailored to specific ethnic or religious groups, potentially exacerbating existing tensions. Additionally, the varying levels of political stability and governance across the region create different environments in which extremist ideologies could take root and spread.
As AI technology continues to evolve, its intersection with extremism and online propaganda presents a complex and dynamic challenge. The region’s rapid digital transformation and diverse socio-political landscape make it particularly vulnerable to AI’s misuse for extremist purposes. Understanding and addressing this issue requires careful consideration of the technological, social, and cultural factors at play.
Nuurrianti Jalli is a Visiting Fellow at the Media, Technology and Society Programme at ISEAS – Yusof Ishak Institute. She is also a Research Affiliate at the Data and Democracy Research Hub at Monash University, Indonesia, and an Assistant Professor at the School of Media and Strategic Communications at Oklahoma State University.
Irma Garnesia is a Jakarta-based researcher affiliated with Project Multatuli. She previously worked for Tirto.id and holds an MA in Media and Communication from TU Ilmenau, Germany.