Gonzalez v. Google and Twitter v. Taamneh are two lawsuits that came up for hearing in the Supreme Court of the United States last week. In essence, the lawsuits accuse Google and Twitter of assisting Islamic State attacks. The judgments could prove to be landmarks in deciding whether web platforms can be held responsible for hosting unlawful posts, particularly when they amplify them through algorithmic recommendations.
However, social media companies in the US are protected from such lawsuits under Section 230, which shields internet service providers and social media platforms such as YouTube, Facebook, and Twitter from liability for third-party (user) content, like a hate speech video, that may be in violation of the law.
As per Section 230, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
However, many feel that this rule needs to be revised, because the safeguards outlined in Section 230 of the 1996 Communications Decency Act (CDA) were created more than 25 years ago, in an era of ‘naive technological optimism’ and limited technological capabilities.
For instance, Mark Zuckerberg once told the United States Congress that Facebook would benefit from clearer supervision from elected officials and that it could make sense for there to be accountability for some of the information. In a similar vein, US President Joe Biden told The New York Times during his election campaign that Section 230 should be “revoked, immediately”.
However, Google believes that undercutting the provision would create more problems than it solves. As per the statement given by Google to The Verge, “Through the years, YouTube has invested in technology, teams, and policies to identify and remove extremist content. We regularly work with law enforcement, other platforms, and civil society to share intelligence and best practices.”
As per Google, “Undercutting Section 230 would make it harder to combat harmful content — making the internet less safe and helpful for all of us.”
Where does India stand?
India, with respect to the ongoing Section 230 tussle between social media giants and the US government, stands well prepared. India has Section 79 of the Indian IT Act, 2000 as its equivalent of the US's Section 230. As per Section 79, a social media intermediary will not be subject to legal action for any third-party information, data, or communication link made available or hosted by it.
In essence, this means that a social media platform won’t be held legally liable if it merely serves as a conduit transmitting messages from one person to another without any kind of interference.
However, the Act also states that once a platform is made aware of illegal activity occurring on it, through a notification from the government or a government agency, failure to promptly remove or disable access to the unlawful material, without tainting the evidence in any way, can be grounds for legal prosecution.
According to IT minister Rajeev Chandrasekhar, social media platforms will be required to remove, within 72 hours of it being flagged, any “misinformation” or illegal content, as well as content that encourages animosity between groups on the basis of religion or caste with the intention of inciting violence.
Should recommendation engines be regulated?
None of these regulations, however, deals with the problem of recommendation systems. In the recent Gonzalez v. Google lawsuit, the plaintiffs assert that Google knowingly promoted Islamic State propaganda that allegedly inspired a 2015 attack in Paris, thereby providing material assistance to a designated terrorist organisation.
According to recent events, it’s possible that the court may order social media platforms like YouTube to regulate their recommendation systems. But can a regulatory body impose culpability on the recommendation systems used by social media platforms?
China, for example, has introduced regulations on recommendation engines under which “algorithm operators will have to update their technology to comply with new technical requirements, from auditing keywords to enabling users access to and control over their personal data profiles.”
Additionally, operators are forbidden from employing algorithms for a variety of illegal actions, such as anti-competitive behaviour and price discrimination, and will have to modify the direction of their recommendation algorithms in order to adhere to “mainstream norms.”
However, the tech ecosystem in China is very distinct from that in the rest of the world, and what is feasible there cannot simply be transplanted to nations like India or the US. Governments and tech corporations would need to collaborate on a plan that allows platforms to identify hazardous content before recommendation algorithms surface it.