The purported basis was that these groups were brazen enough to “steal” political power from the majority population. While some of this content was eventually removed, that happened only after widespread circulation and public outcry.
GAPS IN UNDERSTANDING OF CONTENT MODERATION
Relatedly, social media platforms do not disclose what resources they allocate to individual markets, especially smaller ones perceived as unproblematic, like Malaysia.
Resources here include how well AI models are trained to detect issues specific to Malaysia, the number of human moderators dedicated to the country, and whether those models and moderators are proficient enough in local languages to account for hyperlocal colloquialisms and slang.
For example, none of the existing resources could flag, moderate, or remove the videos calling for a repeat of the May 13 racial riots of 1969, which involved sectarian violence between Malays and Chinese in Malaysia. This is because the date alone, detached from its historical significance and context, would not suggest that it constitutes hate speech or incitement to violence in the present day.
Understandably, content removal requests by the government have also raised concerns. The fear is that such requests could lead to censorship of political speech, particularly speech critical of the current administration.
Of greater concern is that removal requests can be made on vague grounds, such as alleged infringement of broadly worded legislation like Section 505(b) of the Penal Code and Section 233(1) of the Communications and Multimedia Act 1998. The former criminalises statements conducing to public mischief, while the latter prohibits the improper use of network facilities.