Election Integrity in the Age of Artificial Intelligence: Lessons from Indonesia
Indonesia’s 2024 election showed the ways Artificial Intelligence can corrode the integrity of future elections. Countries should regulate AI-generated political content across the entire information supply chain and devise strategies for combating disinformation and manipulation.
The 2024 Indonesian elections have highlighted the growing influence of Generative Artificial Intelligence (GenAI) on the democratic process, raising legitimate concerns about its impact on the integrity of future elections. From the circulation of AI-generated content to the use of micro-targeted advertising and personalised messaging, the impact of AI on campaign strategies and voter perceptions and behaviours has become increasingly apparent.
The use of AI in electoral campaigns has evolved significantly since its early applications in the mid-2010s, when AI-driven algorithms were employed by platforms like Facebook for user community-building and targeted political advertising. Indonesia's February 2024 election, however, showcased the potential game-changing impact of second-generation GenAI-driven tools, including AI-generated cartoons and deepfakes.
Two deepfakes of presidential candidates that went viral during the election campaign highlight the potential of AI-generated content to influence political narratives, although their actual impact on votes remains uncertain. The first was a video of Prabowo Subianto delivering a speech in Arabic to boost his credibility with Muslim voters; the second was an audio clip of Anies Baswedan being reprimanded by Surya Paloh, the influential chairman of the Nasdem Party, intended to deceive voters and undermine Anies. Our research was conducted through a face-to-face survey by the ISEAS – Yusof Ishak Institute and Lembaga Survei Indonesia (LSI); all respondents were shown the deepfake content by the surveyors. For the Prabowo video, 19 per cent of respondents had seen it before the study, and 28 per cent of all respondents believed the speech occurred. For the Surya Paloh audio, 23 per cent of respondents had heard it before the study, and 18 per cent of all respondents who listened to it believed the conversation took place. These results highlight the potential for deepfakes to spread disinformation, as a not insignificant portion of those exposed to the content, both before and during the survey, believed it to be genuine.
Our study also found evidence of selective exposure and selective belief: the likelihood of respondents encountering a disinformation campaign with deepfake content, and the likelihood of their believing it, depends on their partisan inclination. For example, respondents who were more inclined to vote for Anies were less likely to believe the deepfake audio of Surya Paloh reprimanding Anies, after accounting for the effects of respondents' gender, age, education, income and religion. Disinformation campaigns with deepfakes have the potential to further polarise society, as people are more likely to encounter and believe deepfakes that favour their preferred candidates and disfavour their opponents.
Another deepfake video that gained significant attention depicted former President Suharto, who had already passed away, delivering a speech encouraging citizens to support the Golkar Party. Even though the video was likely intended as a messaging tool rather than to deceive, a significant share of respondents — 14.5 per cent — believed the speech to be authentic.
In addition to deepfakes, GenAI played a notable role in Indonesia’s election through text-to-image cartoons aimed at rehabilitating the tainted image of Prabowo, who won by a landslide. AI-engineered chubby-cheeked cartoon images of Prabowo transformed the imposing and intimidating image of the once infamous ex-military general associated with human rights abuses into a cute and cuddly Gen-Z icon.
The campaign also introduced an AI-powered website and media app by Prabowo’s Digital Team (PRIDE), allowing supporters to generate images of themselves virtually posing with the candidates, creating a sense of closeness and shared connection. However, to access these features, users were required to provide personal information, raising concerns about the potential misuse of such data for “micro-targeting”.
Unlike Indonesia’s previous elections, the 2024 election was perhaps less about blatant use of identity politics, hate speech or slander, and more about sophisticated and subtle information and psychological manipulation — enabled by GenAI and intended to either deceive voters or to be used as satire or political communication tools. The ISEAS-LSI survey found that 60-75 per cent of total respondents could identify deepfake videos/audios, showing that at this point, AI-generated content might still be rather rudimentary, although a considerable minority could still be deceived. However, the effects of sophisticated strategies that less visibly and more subtly affect voters’ psychology, such as the use of cartoon images, are more difficult to judge.
Looking to the future, GenAI will evidently play a transformative role in shaping the electoral landscape, raising concerns about election integrity and social polarisation. The rapid advancements in GenAI technologies are already revolutionising the way political campaigns engage with voters in Indonesia.
Drawing insights from the recent Indonesian election, we can anticipate hyper-personalised AI-generated political ads leveraging cutting-edge GenAI technologies. This includes AI-generated deepfakes of candidates delivering meticulously crafted messages tailored to individual voters’ data profiles, preferences, and psychological traits. By leveraging vast troves of voter data from social media, online behaviour, and surveys, campaigns will harness the power of AI to micro-target individuals with bespoke content designed to resonate deeply and sway opinions.
Moreover, AI-powered chatbots with human-like intelligence are likely to become ubiquitous, deployed en masse to engage in one-on-one conversations with voters. These AI “representatives” will adapt their personalities and talking points to optimise effectiveness for each voter, fostering a sense of personal connection while gathering invaluable data and feedback.
A multi-faceted approach is essential to safeguard a fair and representative election.
First and foremost, transparency must be enforced through mandatory labelling of AI-generated political content and ads, empowering voters to make informed decisions. Furthermore, social media companies must be held accountable for their role in the spread of AI-generated political content.
Second, regulations must address the entire information supply chain, from the creation (by seeders) and dissemination (by spreaders) of content to its impact on voters. We need to focus more on addressing disinformation and other manipulated information upstream, including tighter guidelines on deploying GenAI technologies that could turbocharge the spread of such content.
Third, researchers need increased access to private-sector data and algorithms tightly controlled by social media companies to better understand how disinformation and manipulated information affect elections and develop strategies to combat them.
As we stand on the precipice of an AI-driven future, it is imperative that we become ever-vigilant and proactive in protecting the sanctity of our elections. By implementing transparent regulations, promoting ethical AI practices, and empowering voters with critical media literacy skills, we can harness the potential of AI to strengthen democratic engagement while mitigating its risks.
2024/188
Nuurrianti Jalli is a Visiting Fellow at the Media, Technology and Society Programme at ISEAS – Yusof Ishak Institute. She is also a Research Affiliate at the Data and Democracy Research Hub at Monash University, Indonesia, and an Assistant Professor at the School of Media and Strategic Communications at Oklahoma State University.
Maria Monica Wihardja is a Visiting Fellow and Co-coordinator of the Media, Technology and Society Programme at ISEAS – Yusof Ishak Institute, and also Adjunct Assistant Professor at the National University of Singapore.