Following the Twitter takeover, Elon Musk announced the formation of a content-moderation council with representatives from diverse backgrounds, which would be responsible for major content decisions and account reinstatements on the platform.
This has sparked a debate over whether content moderation is actually possible, and how to strike a fair trade-off between curbing harmful content and allowing free speech. Take, for example, a tweet by Ben Lang, co-founder and executive editor of Road to VR, expressing disdain for any form of content moderation on social media platforms:
Many abhor the corporate imposition of moderation on social media platforms. Unfortunately, there is no clear consensus on what high road, if any, is open to these corporations.
Let’s look at content moderation on Indian social media platforms, and how it differs from moderation on the major global platforms.
Content screening in Indian social media apps
Homegrown social media platforms like Koo, Chingari, Mauj and Tiki have attracted millions of users in recent times. Just recently, Koo, India’s “Twitter alternative”, crossed 50 million app downloads. The growing user base on these platforms means they must be able to moderate the content in their apps.
Sumit Ghosh, CEO & co-founder of Chingari, told AIM, “While political intervention for content moderation can be difficult for social media platforms, it is a practice used in many countries to mitigate the risks of social media.” Chingari uses a HuMachine formula, where the content is first screened by the AI/ML tool, and later reviewed by a team of moderators.
Unlike global platforms like Facebook, Twitter and Instagram, content moderation in Indian social media apps is tailor-made to the law of the land and the cultural context of the country. Rajneesh Jaswal, head of policy at Koo, said, “Here, we have an India-centric approach to content moderation.” He highlighted nuances such as training algorithms to censor cuss words in different Indian languages, or, on the visual front, avoiding flagging images of Indian gods and goddesses as nudity.
Therefore, the conversation around content moderation on social media sites in India has mostly centred on the government bodies that define the law, whereas on the “universal” platforms the questions are posed to the corporate moguls who run them, or to the “independent” bodies they set up.
So, how does being “universal” affect large social media corporations? And specifically, what challenges will Musk face now that he has branded himself a harbinger of free speech on social media?
Content moderation—a political problem?
Nicholas Thompson, CEO of The Atlantic, joins the conversation with insights on why content moderation is difficult — especially for Elon Musk, precisely because so much of his wealth is tied to Tesla. Thompson places the debate in a global context and shows how attempts at content restriction on Twitter could affect Tesla’s business across international borders.
By banning accounts, or flagging certain kinds of content as “misinformation”, he says, Musk may be barred from doing business in countries where Twitter’s policies do not suit those in power. Thompson cites the example of Apple, which had to navigate compromising terrain because of its business in China.
Similarly, Nilay Patel, editor-in-chief of The Verge, writes that Musk has put himself in a deeply compromised position with his takeover of Twitter, one that will ultimately damage both his reputation and his other companies. He adds that the problems at Twitter are not engineering problems but political ones.
Let’s take the case of India. The country has stuck to its decision to form a grievance committee to which users can appeal if their content is moderated by social media platforms, without having to take legal recourse. The committee will have the power to reverse rulings made by social media firms. This tussle between social media companies and the government over the power to moderate content also recently led Twitter to take the Indian government to court, where Twitter challenged the government’s content-blocking orders on the grounds that they were “too broad and arbitrary”.
Hence, historically this has been tumultuous terrain to walk, and it will not be any easier for Musk.
In explaining the troubles the Twitter acquisition will cause Musk, Thompson goes on to say that a possible way out would be for Musk to step away from being the “global face of content moderation”. By outsourcing all regulatory decisions to a council, or by distancing himself from the decision-making body at Twitter, Musk could escape the trepidation that comes with the role.
Edward Snowden, formerly of the National Security Agency, comes to a similar conclusion in his defence of individual choice when he says, “It’s crazy to me that people think content moderation is a binary between ‘corporate gods must decide for us who is permitted to speak’ or ‘my timeline will be filled with racism and torture videos’. There are other, better alternatives.”
He elaborated on these alternatives in a follow-up tweet:
Therefore, it is safe to say that moderation is ill-defined territory, and must pass through layers of political discourse before any judgement on “fair” moderation can be made.