This photo taken on 21 November 2024 in Manila shows a Facebook 'military interest' page that misrepresented old photos and videos of army operations to falsely claim that the US was helping the Philippines prepare for war. (Photo by JAM STA ROSA / AFP)

Artificial Intelligence is Intensifying South China Sea Disputes in the Philippines


AI-generated propaganda is being used to distort reality and mislead the public, even garnering support for military escalation in already troubled waters.

The South China Sea, known for its rich fisheries, key shipping lanes, and potential energy reserves, remains a critical geopolitical flashpoint. The territorial dispute between the Philippines and China persists despite diplomatic agreements, with both countries engaging in military standoffs and public warnings. Increasingly sophisticated propaganda driven by artificial intelligence (AI) further exacerbates tensions.

AI has revolutionised political propaganda, enabling states to manipulate public perception on an unprecedented scale. A Freedom House report highlights AI-driven disinformation campaigns in 16 countries that were used to “sow doubt, smear opponents, or influence public debate”. In the Philippines, AI-generated media is regularly misused for scams and disinformation.

In July 2024, a deepfake video falsely depicting Philippine President Ferdinand Marcos Jr ordering an attack on China went viral. The AI-generated audio closely mimicked Marcos’ voice, causing panic. The Presidential Communications Office (PCO) swiftly identified it as fake. A subsequent investigation revealed foreign actors as the culprits, prompting officials to warn against AI-driven disinformation.

While this incident was high-profile, it was not an isolated case. YouTube channels like PH TV leverage AI and traditional video and audio manipulation to spread false narratives, such as the depiction of US military action in the disputed waters. The channel includes disclaimers labelling this type of content as “entertainment”. However, the authors’ yet-to-be-published research found that many viewers accepted the misinformation as fact. This was reflected in the comments section, where discussions were highly polarised with strong anti-China sentiment and unwavering support for US intervention. The intensity and volume of such responses indicate that many viewers did not question the videos’ claims but amplified them as truth.

While the actual operators of PH TV remain unknown, the channel appears to be either Chinese-sponsored or an opportunistic entity exploiting contentious issues for engagement. Multiple reports by PressOne.PH, a Filipino media outlet, revealed that Chinese state media has disseminated AI-generated videos as part of its ongoing narrative battle with the Philippines.

Beyond video manipulation, China also employs “cognitive warfare” utilising AI-assisted personas to shape public perception. For example, journalists Meng Zhe and Xu-Pan Yiru of China Daily have acknowledged using AI to adjust their speech, claiming it helps make their accents more intelligible to audiences. Observers, however, remain sceptical, viewing these AI-driven enhancements as part of a broader strategy to refine propaganda and strengthen China’s influence on international discourse.

A Graphika report on “Operation Naval Gazing” uncovered a network of fake accounts, some with AI-generated profile pictures, promoting pro-China rhetoric and amplifying favourable narratives about former Philippine president Rodrigo Duterte (particularly those supporting his arguments for stronger Chinese regional influence).

The stakes are particularly high in the China-Philippines maritime dispute, where local disinformation campaigns sometimes undermine the country’s position for financial gain. Agence France-Presse uncovered a coordinated network of Facebook pages and YouTube channels masquerading as legitimate news sources while generating ad revenue through AI-powered propaganda. Military-focused pages were found to have manipulated old footage of joint exercises to suggest the US is actively preparing for war in the region.
Further investigation linked these efforts to a central content manager, revealing that each misleading article earns between US$20 and US$70, with the network collectively amassing over 10 million followers. Analysts such as Kenton Thibaut, senior fellow at Washington’s Digital Forensic Research Lab, and Albert Zhang of the Australian Strategic Policy Institute indicate that while the network’s direct ties to state actors remain uncertain, its content frequently aligns with China’s stance on the dispute.

This steady stream of AI-generated disinformation distorts reality and could fuel public confusion and fear. A study by the PCO found that 51 per cent of Filipinos struggle to identify fake news, with nine in 10 encountering difficulties navigating digital information. AI-generated propaganda could exploit these vulnerabilities, deepening societal divisions and, in some cases, even rallying public support for military escalation despite the Philippines being unprepared for war.

To combat AI-driven disinformation, members of the Philippine Congress have filed House bills addressing AI’s role in media manipulation, particularly ahead of the 2025 elections. The proposed legislation aims to establish an overarching regulatory framework and impose legal consequences for deepfake-related activities.

While China lacks AI-specific disinformation laws, its regulations mandate clear labelling of AI-generated content and adherence to state-sanctioned narratives. Lacking enforceable AI policies, the Philippines has resorted to diplomatic protests against China. Within President Marcos’ first six months in office, the Department of Foreign Affairs filed over 130 protests, to little effect.

In response to growing tensions, journalists have also begun joining Philippine missions to disputed waters as part of the government’s “transparency initiative”, which provides real-time accounts of events to counter disinformation. While this initiative aims to promote transparency, it also raises concerns about journalistic independence. By relying on government access to disputed areas, journalists may face implicit pressure to align with national narratives, potentially compromising their objectivity.

AI-driven propaganda in the South China Sea dispute is still emerging, but rapid technological advancements and escalating regional tensions suggest it will only grow in influence. Mitigating its impact requires a coordinated effort. Policymakers must establish stronger regulations to hold malicious actors accountable, while tech companies should invest in AI-driven detection tools and enhance transparency in algorithmic decision-making. Civil society and media literacy advocates should equip the public with critical thinking skills through targeted education and accessible verification tools. Given the cross-border nature of digital misinformation, international cooperation will be key to maintaining information integrity.

However, these efforts face mounting challenges as major social media platforms scale back on fact-checking. Meta’s withdrawal from fact-checking in the region and its recent discontinuation of third-party fact-checking in the US remove a key safeguard against disinformation. X (formerly Twitter) has also replaced professional fact-checking with its community-driven “Community Notes”, which has been criticised for inconsistencies and delays. With these platforms shifting to decentralised moderation, misinformation risks growing unchecked. To counter this, regional actors must urgently invest in independent fact-checking networks and strengthen local verification initiatives to prevent AI-fuelled disinformation from destabilising the region.

2025/66

Nuurrianti Jalli is a Visiting Fellow at the Media, Technology and Society Programme at ISEAS – Yusof Ishak Institute. She is also a Research Affiliate at the Data and Democracy Research Hub at Monash University, Indonesia, and an Assistant Professor at the School of Media and Strategic Communications at Oklahoma State University.


Angel Martinez is a Manila-based culture writer, consumer researcher, and content strategist. Her work on the internet, identity, and their intersections has been published in VICE, Vox, Dazed, Business Insider, and Rest of World.