The Théâtre D'opéra Spatial – an AI-generated image that won first place in the Digital Art category at the Colorado State Fair Fine Arts Competition in August 2022. (Image from Art Incarnate)

Rethinking GenAI Visuals in Government Communications


Governments are turning to generative AI to assist in public communications, sparking backlash from the public over the quality and intent of such visuals. A better approach would be to pair GenAI with human expertise.

Government agencies in Singapore and Malaysia have begun integrating generative AI (GenAI) into public communications. These attempts reflect a push from governments in Southeast Asia to adopt new technologies for more efficient citizen engagement. However, such methods must be used with more nuance, given the popular perception that GenAI content is low in quality, inconsistent, and lacking the human touch.

Malaysia’s Ministry of Communications used AI-generated posters to promote Madani Malaysia. The Ministry of Science, Technology, and Innovation also used AI-generated images to commemorate International Nurses Day. Similarly, Singapore’s Ministry of Finance used AI-generated visuals to advertise Budget 2024.

The use of AI-generated images by individuals, companies, and now, government agencies, has been controversial. Proponents view GenAI as a way to streamline creative work and improve efficiency. GenAI programs can quickly generate visuals from text prompts and uploaded images, producing multiple variations of output much faster than traditional manual revisions by a human artist. These programs are also user-friendly and require little training to use effectively.

However, many online users view AI-generated images as “AI slop” — output that is unpolished and of low quality. Singaporean netizens criticised the Ministry of Finance’s AI-generated Budget 2024 visuals as resembling creepy scam advertisements. Likewise, Malaysian netizens mocked their government’s use of AI-generated visuals on social media, calling them ill-conceived and a blind chasing of technology trends that cuts corners by not paying a real artist.

This dislike is not limited to the public sector. Malaysians criticised Sin Chew Daily for publishing an AI-generated national flag without the Islamic crescent, deeming it offensive and lazy. Similarly, Singaporeans slammed local filmmakers for using AI-generated graphics.

But why is there so much hostility towards AI-generated images? Is it due to their poor quality, or concerns surrounding ethics and authenticity?

Ever since tools like Midjourney became widely available in 2022, netizens have been both impressed and bemused. While such GenAI programs can produce visually striking images, they also sometimes generate strange results with distorted hands or uncanny faces. Even as consistency eventually improved, this cemented the popular perception of AI-generated images as weird and uneven in quality.

AI-generated images have also garnered a negative reputation online as the preferred tool for hustlers chasing quick profits. Because it is cheaper and easier than hiring a human artist, AI-generated images have been widely used to promote cryptocurrency and non-fungible token (NFT) scams. Content farms also utilise them to bait engagement on social media. Thus, AI-generated images have become closely associated with scams and low-effort content.  

Beyond these negative perceptions, many also object to AI-generated images on ethical grounds. GenAI programs like Midjourney and DALL-E are trained to create images in various styles via vast image databases. This training data includes copyrighted works scraped from the internet from various artists and studios, without permission or compensation. Many GenAI developers also charge subscriptions to access their programs, effectively profiting from stolen content.

This has sparked lawsuits from corporations like Disney and individual artists against GenAI developers. However, it remains unclear whether using online content to train GenAI programs qualifies as fair use, due to the lack of legal precedent. Thus, GenAI developers like Midjourney continue to operate and profit, with projected revenues reaching US$500 million in 2025.

As a result, many artists have been vocal in their opposition to AI-generated images, urging netizens and fellow creators to boycott GenAI programs. Some have even used tools like Nightshade to subtly alter their artwork before uploading it, “poisoning” future GenAI training data. While some GenAI developers now claim to license artwork ethically, many critics remain sceptical. They view GenAI’s development as irreconcilably rooted in theft and condemn its existence as unethical.

Lastly, AI-generated images are often viewed as inauthentic compared to human-made artwork. GenAI programs can produce images at an industrial scale. However, critics argue that AI-generated images lack the human touch that gives art its meaning, emotion, and perspective. Because GenAI programs lack human intent and creative control, the phrase “AI art is not real art” is often used online to dismiss AI-generated images as inferior to human-made artwork.

These concerns over quality, ethics and authenticity have fostered a strong distaste for AI-generated images and GenAI programs. Rather than being seen as a tool to support artists, GenAI is viewed as a shortcut for the lazy and dishonest. In Singapore and Malaysia, poor quality control in governments’ use of AI-generated content reinforces this bias. Sloppy visuals are quickly recognised as AI-generated and reflect poorly on public agencies, potentially eroding the government’s credibility.

So, what can governments do?

Given the strong cultural resistance towards AI-generated content online, changing public perception will be difficult. Instead, governments should focus on exercising rigorous quality control. GenAI programs can support labour-intensive tasks like sketches, mock-ups, composition, and colouring, but each stage requires strict human oversight to rectify errors and ensure quality. Public agencies should not use GenAI to replace illustrators. Instead, they should use AI as another tool to support human artists, the same way Photoshop enhances their work.

Even the award-winning AI-generated image, the Théâtre D’opéra Spatial, required extensive human refinement to reach its final form. This shows that the best results come from combining GenAI with human expertise. To integrate AI-generated visuals into public communications effectively, governments should therefore adopt this approach, producing high-quality images efficiently while avoiding public backlash.

2025/234

Mr Brandon Tan Jun Wen is a Research Officer with the Media, Technology and Society Programme at ISEAS – Yusof Ishak Institute.