Attendees await the start of a seminar on how to identify and fight fake news, organised by the Thai Media Fund at Major Cineplex Ratchayothin in Bangkok, Thailand, on March 9, 2021. (Photo: Chanat Katanyu / Bangkok Post via AFP)

The Postings of My Father: Tradeoff Between Privacy and Misinformation


End-to-end encrypted messaging apps in Thailand present a dilemma to policymakers: encryption raises the level of security for users, but at the same time exposes them to information maladies.

In August, the Atlantic Council’s Digital Forensic Research Lab (DFRLab) released a report shedding light on the use of end-to-end encrypted (E2EE) messaging apps in the United States, such as WhatsApp, Telegram, and WeChat. While these apps safeguard users’ privacy, they also provide a breeding ground for information maladies such as misinformation, disinformation, and hate speech. In Thailand, DataReportal ranked Line, another well-known E2EE application, as the country’s second-most used platform, with a 90.7 per cent adoption rate among Thai internet users.

I previously raised concerns about E2EE messaging apps in an op-ed published amidst the May 2023 elections, speculating that while the Election Commission of Thailand (ECT) was active in combating dis/misinformation during the campaigning season, false information was very likely still circulating in the form of encrypted messages. The encrypted nature of these messages presents challenges for tracking and prevention. Studies of elections in the United States — one published in 2019 and another in 2020 — have highlighted the susceptibility of older adults to dis/misinformation. The same dynamics are believed to apply in Thailand, but there are no detailed studies of how dis/misinformation operates in the country’s online domain. To gain anecdotal insights into how information maladies operate on E2EE apps, I monitored my father’s Line chats for a week in November. My father is a retired civil servant in his 60s. While not a representative sample, he and his friends belong to an elderly, more conservative demographic known to be more susceptible to online misinformation (that said, I am not presupposing that only older adults are susceptible to information maladies).

In the monitored week, my father received around 350 messages, nearly 40 per cent of which contained some form of dis/misinformation. As shown below, health-related misinformation was prevalent, surpassing political misinformation. Particularly alarming was the rise of religion-based narratives, especially Buddhist supremacist themes and superstitions. This trend echoes concerns raised by scholars about the fragility of interfaith relations between the Buddhist and Muslim communities in Thailand. If left unchecked, such narratives — particularly those relating to religion and race — can have a significant impact on society and even heighten the chance of communal violence and violent extremism.

Note: Figure compiled by the author based on data collected from my father’s Line chats

A significant portion of the dis/misinformation messages in his chats originated from social media platforms, primarily Facebook and TikTok. This cross-sharing highlights how content, even harmful content, spreads through the digital ecosystem, spilling over from public spaces like social media into personal spaces like messaging apps. Moreover, as widely posited in media studies, digital KPIs such as reach and engagement metrics can serve as heuristic cues that boost the perceived credibility and veracity of content, thus increasing the likelihood of it being shared. The images below are screenshots of problematic content from my father’s chats, showing how it was shared from TikTok and Facebook to a messaging app like Line.

Note: The first and second images show anti-Muslim, pro-Buddhist, and superstitious messages that originated from Facebook posts. The third image is from a TikTok account called “What’s the Pheu Thai Party acting/pretending to do today?”, which frequently posts political dis/misinformation and conspiracies against the party.

Lastly, despite the potentially harmful consequences, the capacity for content monitoring on E2EE apps is incredibly low, even from the platform’s vantage point. This is due to the oft-cited trade-off between privacy and information maladies that underlies these applications. While Line does have content moderation guidelines that prohibit the sharing of obscene, offensive, or false content, they apply only to public channels known as OpenChat and VOOM. This means that private chats are largely unmonitored and serve as breeding grounds for dis/misinformation. This has significant implications for the dynamics of information maladies in a country like Thailand, where users like to create private group chats with multiple friends and family members. For example, my father is a member of a private chat in which he exchanges messages and daily greetings with 150 school friends. The group exemplifies the “Sawaddee Chao Wan Jan” (“Good Monday morning”) online culture widely popular among elderly netizens.


According to Line’s Q&A page, a group chat can have as many as 500 members, making it a goldmine for practitioners of dis/misinformation if left unattended. That said, Line reported that it had deleted over 20 million posts for violating its guidelines in FY2022. Line also has a mechanism that allows users to report harmful content; however, the effectiveness of such mechanisms is debatable. Reporting requires extra legwork on users’ part, and it is unlikely that one would report content that aligns with one’s existing beliefs or is shared by family or peers. This is demonstrated in the work of Tandoc, Lim, and Ling, who studied the reactions of 2,501 Singaporean social media users to fake news. They found that people are unlikely to correct fake news they come across if it comes from their family and friends.

The anecdotal evidence presented underscores the dual nature of E2EE technology. While it secures data privacy, it also increases the likelihood of exposure to harmful content. Moreover, unlike on social media, there is barely any possibility for third-party intervention. This raises the need for comprehensive studies on the dynamics of dis/misinformation within messaging apps to fully comprehend their societal impact. Such studies are all the more pertinent now that Meta has rolled out end-to-end encryption as a default setting on Facebook Messenger and Instagram.

2023/300

Surachanee Sriyai is a Visiting Fellow at ISEAS - Yusof Ishak Institute, and a Lecturer and Digital Governance Track Lead at the School of Public Policy, Chiang Mai University.