Singapore Unveils Measures to Require Social-media Firms to Moderate Content

SINGAPORE — Social-media companies such as Facebook, TikTok, and Twitter would face increased accountability to Singaporean users as the country unveils a series of online safety measures, such as content filters and user-reporting tools, to restrict the availability of harmful material to its residents.

Under Singapore’s new set of proposed internet rules, these platforms would be required to enforce content-moderation processes and safety standards. The rules aim, among other things, to reduce users’ exposure to harmful online content such as terrorist propaganda and anti-religious vitriol.

In short, platforms would have to limit users’ exposure to egregious content, or disable access to it, once users report it. Singapore’s Ministry of Communications and Information (MCI) reassured industry groups that the country’s approach will be outcome-based.

“Designated social media services will be given some flexibility to develop and implement the most appropriate solutions to tackle harmful online content on their services, taking into account their unique operating models,” the MCI said in a statement.

The MCI added that it will require platforms to provide user-friendly and effective reporting mechanisms, and that appropriate action against any particular form of harmful content must be taken “in a timely and diligent manner.” The ministry did not, however, define a specific takedown timeframe.

Social-media platforms will also have to publish yearly accountability reports on the efficacy of their safety measures. These reports must include metrics on the prevalence of harmful content on their platforms and on the user reports they received and acted upon, and must describe their processes for dealing with harmful content.

Tools that enable parents and guardians to protect their children from unsolicited and risky connections on social media, along with filters that limit what can be viewed online, would have to be switched on by default for local users under 18. The MCI proposed that these tools conceal young users’ profiles and posts from public view and restrict who can interact with their accounts. Platforms would also have to warn young users and their parents of the potential risks should they choose to weaken these settings.

Such measures are meant to limit children’s exposure to unsuitable content and to spare them experiences such as harassment and online stalking, while reducing users’ risk of exposure to child pornography and other forms of sexual content and abuse. Any content that promotes sexual violence, child sexual exploitation and abuse, or terrorism would have to be removed.

Notably, the new codes take into account Singapore’s unique context pertaining to sensitive issues such as race and religion. Provocative content or incidents that could incite racial or religious hostilities would be banned.

Social-media platforms would have to promote helpline numbers and counseling services to users who search for high-risk content, such as topics associated with self-harm and suicide. The MCI also proposed tools that let users manage their own exposure to unwanted content and interactions, for instance by hiding unwanted comments on their feeds and restricting who can interact with them.

Also, Singapore’s Infocomm Media Development Authority (IMDA) will be granted the mandate to instruct any social-media service to censor controversial accounts or block access to harmful content for users in Singapore. Content that threatens public health or security, promotes self-harm, or endangers racial or religious harmony in Singapore falls under the banner of harmful content.

These measures, which fall under the Code of Practice for Online Safety and the Content Code for Social Media Services, garnered support among the Singaporean public following a month-long consultation that ended in August 2022.

The consultation drew more than 600 participants, including parents, youth, academics, and community and industry groups.

“Overall, respondents generally supported the proposals,” said Minister for Communications and Information Josephine Teo in a Facebook post.

Teo’s comments were echoed by the MCI, which said the public generally concurred that social-media platforms such as Facebook, Instagram, and TikTok should enforce safety standards for six types of content: sexual content, violent content, self-harm content, cyberbullying content, content that threatens public health, and content that incites vice and organized crime.

Among the various topics broached during the public consultation was the issue of social-media platforms’ perceived lackluster response to user reports on harmful content.

Some respondents complained that social-media platforms failed to take down objectionable content they reported. Others felt that the platforms should be more accountable and called on them to update users on the decisions made on their reports. Respondents also asked to be allowed to appeal dismissed reports.

Additionally, respondents called for social-media platforms to verify users’ ages, to better protect children from harmful content, and to enforce compulsory safety tutorials.

In response to these suggestions, the MCI said, “We will continue to work with the industry to study the feasibility of these suggestions as we apply an outcome-based approach to improving the safety of young users on these services.”

During the same public consultation, industry groups petitioned for flexibility on the timelines for social-media firms to ban flagged content. To this, MCI replied, “The timeline requirements for social media services to comply with the directions will take into account the need to mitigate users’ exposure to the spread of egregious content circulating on the services.”

Teo added, “Many also highlighted that regulations need to be complemented by public and user education to guide users in dealing with harmful online content, and engaging with others online in a safe and respectful manner.”

Following a debate in Parliament, the codes are slated to take effect as early as 2023. The push comes amid incidents in recent years in which a slew of offensive online content rankled certain segments of Singaporean society.

For example, in 2020 a Facebook page called NUS Atheist Society posted an image of the Bible and the Quran with a caption that read, “For use during toilet paper shortages.”

In response to such egregious content, Singapore’s Ministry of Home Affairs (MHA) said, “Online hate speech on race and religion has no place in Singapore.”