A sign is posted in front of Meta headquarters on 28 April 2022 in Menlo Park, California. (Photo: Justin Sullivan / Getty Images via AFP)

Content Moderation of Social Media in Southeast Asia: Contestations and Control


The growing need for content moderation on social media is putting pressure on regional governments and tech companies to reach new understandings about how online content should be mediated to serve the political and social good.

Content moderation of social media involves screening user-generated content (UGC) to determine its appropriateness. It has become indispensable: without it, social networks would be inundated with harmful and objectionable content. Despite the continued attractiveness of Open Internet ideals, the online world has changed drastically from the early freewheeling days of maximum free speech and minimum limitations. Today, technology giants like Meta and Twitter are content arbiters, deciding what stays online and what goes, and suspending, blocking, or removing social media accounts. Tech companies operating in Southeast Asia walk a difficult tightrope, especially when dealing with political out-of-bounds (OB) markers set by regional governments, which are enacting legislation to pressure these companies into censoring public speech deemed “harmful to society”.

The challenge social networks face is determining which content is objectionable enough to be removed while protecting users’ expectations of being free to express themselves and engage with others. As political discussion increasingly takes place in the digital public sphere, where public opinion shapes political fortunes, it is not surprising that various parties battle to control content moderation.

For example, the Indonesian Minister of Communication and Information Technology recently introduced Ministerial Regulation 5 (MR5) on Private Electronic System Operators, which requires all companies providing online services, businesses, and platforms in the country to comply with content removal orders within 24 hours. The regulation aims to protect the public from unlawful “prohibited content”. In urgent cases involving potential terrorism, child sexual abuse, or content that may cause “unrest in society or disturbs public order”, the timeframe is only four hours. Tech companies face fines if they fail to comply with government requests; repeated non-compliance may result in access to their services being blocked, and even criminal sanctions. Google has agreed to comply, as have other platforms, including TikTok.

Vietnam is planning to introduce similar laws that would require social media networks to remove “illegal content and services” within 24 hours and take down active “illegal live streams” within three hours. Content that harms national security interests must be blocked immediately. Non-compliance could result in online platforms being banned. The proposed changes reportedly stem from the government’s unhappiness with current levels of compliance with its content removal requests. Data from Vietnam’s communications ministry showed that in the first quarter of 2022, Facebook’s compliance rate was 90%, Google’s 93%, and TikTok’s 73%. Hanoi, however, appears to want 100% compliance and has set even stricter standards for social media companies.

In Thailand, social networks and users must grapple with strict lèse-majesté laws in the criminal code and the Computer Crime Act (CCA). In 2020, Facebook said it had been “compelled” to accede to the government’s request to block access to the anti-establishment “Royalist Marketplace” group. Nevertheless, it planned to legally challenge the demand, stating that such requests “contravene international human rights law and have a chilling effect on people’s ability to express themselves”. This has not deterred the Thai government from subsequently acting against Facebook (now Meta) and Twitter for failing to comply with court orders and Thai law requiring them to block illegal content and suspend accounts. This was the first time the government had targeted social media companies; previously, it had focused on websites, account owners, and users.


On the other hand, tech giants in the Philippines and Myanmar face pressure from a different set of stakeholders – civil society groups, journalists, and academics – demanding tighter content moderation policies to curb the circulation of harmful content such as fake news and propaganda. For example, a consortium of researchers analysed YouTube, Facebook, and Twitter and found five indicators of “networked political manipulation” in relation to the 2022 Philippine presidential election. Meanwhile, President Ferdinand “Bongbong” Marcos Jr. has been accused of approaching Cambridge Analytica to “rebrand” his family’s image on social media, fuelling criticisms of “historical revisionism”.

Rohingya refugees have filed a landmark US$150 billion class-action lawsuit in California and London against Meta Platforms Inc., arguing that Facebook’s design and its failure to sufficiently moderate UGC on the platform contributed to hate speech and violence against their community. This comes in the wake of a 2018 United Nations investigation which found that the spread of hate speech on Facebook had played a decisive role in the possible genocide of the Rohingya population.

Tech giants therefore face competing demands from stakeholders in different markets over their control of the online public sphere. As corporate citizens, they ought to obey the laws of the jurisdictions in which they operate. However, this may conflict with their corporate values on issues such as human rights and free speech, which include protecting users’ fundamental right to express themselves on these platforms. They must also contend with those who deliberately create and circulate harmful content and propaganda that sow confusion, social discord, and violence in order to manipulate democratic processes.

Expectations that a universal set of community guidelines on content moderation could exist are probably utopian; in reality, there will always be contestations over values and ideologies. To address such issues, digital platforms can form a consortium with industry partners, government agencies, civil society, and online users to proactively develop policies, rules, and regulatory frameworks. Establishing an independent third-party oversight board or international arbitration council in each jurisdiction can be useful for discussing and evaluating content moderation dilemmas and adjudicating complaints or grievances from users or governments. Such governance measures can provide a benchmark of accountability and transparency for the content moderation policies and enforcement decisions of social media companies.

2022/221

Pauline Leong is Associate Professor, Department of Communication, School of Arts, Sunway University.