Malaysia’s AI Flag Fiascos: The Need for Digital Governance
Recent incidents involving AI-generated depictions of the Malaysian national flag underscore the need for capacity building to instil AI literacy.
In April, Malaysia faced a string of controversies involving AI-generated misrepresentations of its national flag, the Jalur Gemilang, each error triggering national outrage, official apologies, and government probes. These incidents, though seemingly technical, strike at the intersection of national identity, political legitimacy, and digital governance. They expose structural gaps in how Malaysia is adapting to the realities of generative artificial intelligence (AI).
The most high-profile incident involved Sin Chew Daily, which printed a front-page image omitting the Islamic crescent from the Jalur Gemilang during Chinese President Xi Jinping’s state visit. The omission was striking, given the crescent’s symbolism as Islam’s representation in the Malaysian federation. Although Sin Chew quickly apologised and suspended two editors, the damage was done. Within days, Kwong Wah Yit Poh, another Chinese-language daily, repeated the error on its front page; a Singaporean company released a promotional video featuring an incomplete flag; and, most concerning of all, the Ministry of Education issued an official report containing a flawed version of the national emblem.
What ties these incidents together is clear: AI-generated content escaped human scrutiny, with multiple entities making the same mistake almost simultaneously. In each case, whether in a newsroom, a private vendor, or a government ministry, AI tools were employed to expedite or enhance visual production. But these systems, shaped by data and design choices rooted in dominant cultural contexts, often reflect a kind of cultural ignorance. This rendered them unable to reliably recognise or reproduce national symbols like the crescent in the Jalur Gemilang, particularly when such emblems are underrepresented in training datasets. More critically, it was the failure of human screening to catch these issues before public release that led to the high-stakes blunders.
The public and government responses were swift and forceful. Thirteen police reports were lodged against Sin Chew. Malaysian King Sultan Ibrahim Sultan Iskandar condemned the mistake, while the Home Ministry and Malaysian Communications and Multimedia Commission launched formal investigations under laws protecting national symbols. The Prime Minister’s Office stressed that all parties, whether public or private, would be held equally accountable. In short, the state would treat all flag-related errors, whether AI-generated or not, with the same severity as potential acts of sedition.
This heavy-handed reaction, however, goes beyond defending the flag. In Malaysia’s multiracial, multi-religious context, the flag represents not just sovereignty but also the political balance enshrined in the Constitution. The omission of the crescent, whether accidental or AI-induced, easily becomes politicised, particularly when committed by a Chinese-language newspaper. Critics framed it as an act of subversion, even “treason”. In a fragile political climate, such mistakes are quickly weaponised. This explains the zero-tolerance stance of Prime Minister Anwar Ibrahim’s unity government.
Yet the incidents also reveal a broader concern. Malaysia’s institutions are still in the early stages of adapting to the demands of AI governance. While AI adoption is accelerating across government, media, and business, frameworks for oversight, verification, and training remain patchy or non-existent. The Ministry of Education’s error is especially telling. How did an official document containing a clearly flawed AI-generated flag pass through so many layers of bureaucracy? The likely explanation lies in both a digital literacy gap and systemic weaknesses in content auditing. Just as crucial is the challenge of visual plausibility: AI-generated images, especially those resembling photographs, can escape notice, even by human editors.
Recognising these challenges, the authorities have doubled down on the need for “human judgment” in reviewing AI content, even as recent incidents reveal how such judgment can fail. This renewed emphasis highlights not only a lack of trust in automated systems, but also a deeper institutional lag. Malaysia still lacks a cohesive, cross-sectoral policy guiding AI use in public communication. Most agencies remain reactive: when problems arise, blame is assigned and apologies follow. Proactive safeguards, such as AI content vetting protocols, red-flag detection tools, and systematic cross-checking mechanisms, remain rare. The recent establishment of the National AI Office and the launch of the National Guidelines on AI Governance and Ethics mark promising steps forward. However, it remains to be seen whether these initiatives will translate into stronger governance and improved digital literacy across ministries and media institutions to prevent similar incidents.
Adding to the complexity is a governance dilemma. Calls for strict enforcement risk creating a chilling effect on media and innovation. Press watchdog GERAMM, for instance, has warned against “extreme” penalties that could stifle creativity or reinforce political intimidation, especially in cases like Sin Chew’s. A punitive approach alone will not solve the underlying problems. Malaysia should defend national dignity without criminalising technological error. This means shifting from enforcement towards capacity building: training civil servants and media personnel in AI literacy, establishing internal review units, and developing clear ethical standards for AI-generated content.
Ultimately, the flag fiascos are a stress test for Malaysia’s digital maturity. They highlight not only the cultural sensitivity required in deploying AI but also the urgent need for institutional reforms to govern new technologies responsibly. The government must act not only to protect national symbols but also to strengthen its systems and ensure that future “AI accidents” do not undermine public trust.
These incidents are not merely footnotes in the country’s AI journey. They are early warning signals. If Malaysia is to harness AI while preserving its democratic and multicultural values, it must move swiftly from reactive outrage to systemic readiness.
2025/168
Nuurrianti Jalli is a Visiting Fellow at the Media, Technology and Society Programme at ISEAS – Yusof Ishak Institute. She is also a Research Affiliate at the Data and Democracy Research Hub at Monash University, Indonesia, and an Assistant Professor at the School of Media and Strategic Communications at Oklahoma State University.