As news of COVID-19 reached the shores of the West, tech companies were quick to take precautions by asking their employees to work from home. And since the content moderators who flag tweets or images for objectionable content have also been asked to stay home, all major social media companies have announced increased AI-based filtering of content.
Right now, we might be witnessing a pivotal moment in the relationship between man and machine. Algorithms will now predominantly decide what can and cannot be said. Though the absence of a human moderator can curb bias in these decisions, accidental deletions of non-malicious content could put the whole notion of freedom of speech at risk.
Social media platforms have become the megaphones of human thought. Twitter, especially, has become the go-to platform for seamless information sharing across the globe. From presidents to army generals to medical practitioners and armchair critics, Twitter has managed to connect the world in an unprecedented way. So, policy updates are critical. This time, however, the policy change has been forced by an unforeseen event: COVID-19.
Twitter’s policy team, led by Vijaya Gadde, has explained the reasons behind the policy updates in a post.
The team has announced that it will be increasing the use of machine learning and automation to take a wide range of actions on potentially abusive and manipulative content.
However, the team has assured users that it will not permanently suspend any accounts based solely on its automated enforcement systems. Instead, it promises to look for opportunities to build in human review checks where they will be most impactful.
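To make the shape of such an arrangement concrete, here is a minimal sketch in Python of an automated triage step that limits itself to reversible actions and routes high-scoring posts into a human review queue. Every name, score, and threshold here is hypothetical; Twitter has not published the details of its system.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical threshold, not Twitter's actual value.
ABUSE_SCORE_THRESHOLD = 0.9

@dataclass
class Post:
    post_id: str
    text: str
    abuse_score: float  # assumed to come from an upstream ML model

@dataclass
class ModerationQueue:
    human_review: List[Post] = field(default_factory=list)

    def triage(self, post: Post) -> str:
        """Automated triage limited to reversible actions.

        High-scoring posts are hidden pending review; permanent
        account suspension is never decided here, mirroring the
        stated policy that automation alone cannot suspend anyone.
        """
        if post.abuse_score >= ABUSE_SCORE_THRESHOLD:
            self.human_review.append(post)  # a person makes the final call
            return "hidden_pending_review"
        return "no_action"

queue = ModerationQueue()
print(queue.triage(Post("42", "some flagged text", 0.95)))  # hidden_pending_review
print(queue.triage(Post("43", "benign text", 0.10)))        # no_action
```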
Others Follow Suit
Earlier this year, YouTube had already made its intentions loud and clear when it halved the number of conspiracy theory videos, relying more heavily on AI to moderate. Now, with remote work enforced, it has opted to use AI for video review in place of humans.
This means automated systems will start removing some content without human review: videos may be taken down from the site purely because AI flagged them for a potential policy violation, where they would normally be routed to a human reviewer to confirm that they should be taken down.
YouTube assures creators that they can still appeal if a video is taken down, but warns that this process will also be delayed because of the reduced number of human moderators.
Meanwhile, Facebook’s CEO, Mark Zuckerberg, answered various questions about how Facebook is adjusting to the recent changes in a conference call held on March 18th. When asked about the impact of making content moderators work from home, he explained why AI would play a key role in flagging content, citing the emotional health of the remotely working content moderators as one reason.
The content that usually gets flagged spans topics from suicide to terrorism to fake news, and dealing with it can be emotionally draining for the moderators. But machine learning algorithms can falter too: they can remove something benign simply because it matched patterns the model associates with violations.
And the companies are willing to trade off the accidental takedown of good posts for the swift removal of obviously malicious content.
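This trade-off is essentially the classic precision versus recall tension in any classifier: lowering the decision threshold catches more truly malicious posts but also sweeps up benign ones. A toy illustration in Python, with made-up posts and scores rather than any platform's real system:

```python
# Illustrative only: toy posts with hypothetical model scores, where
# "score" is the model's estimated probability that a post is malicious.
posts = [
    {"text": "obvious scam link",      "score": 0.97, "malicious": True},
    {"text": "sourced health advice",  "score": 0.62, "malicious": False},
    {"text": "borderline spam",        "score": 0.55, "malicious": True},
    {"text": "harmless joke",          "score": 0.08, "malicious": False},
]

def takedown_stats(threshold: float):
    """Count correct and accidental takedowns at a given threshold."""
    removed = [p for p in posts if p["score"] >= threshold]
    caught = sum(p["malicious"] for p in removed)          # true positives
    collateral = sum(not p["malicious"] for p in removed)  # benign posts removed
    return caught, collateral

# A strict threshold misses real abuse; a loose one removes good posts.
for threshold in (0.9, 0.5):
    caught, collateral = takedown_stats(threshold)
    print(f"threshold={threshold}: caught {caught} malicious, "
          f"removed {collateral} benign post(s) by accident")
```

At the loose threshold, the sketch removes both genuinely malicious posts but also takes down the benign health post by accident, which is exactly the failure mode Facebook ran into.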
We’ve restored all the posts that were incorrectly removed, which included posts on all topics – not just those related to COVID-19. This was an issue with an automated system that removes links to abusive websites, but incorrectly removed a lot of other posts too.
— Guy Rosen (@guyro) March 18, 2020
However, the Facebook team is working round the clock to restore content that has been pulled down inappropriately.
Some consequences of COVID-19 may be irreversible. Markets can recover and companies can be bailed out, but what about policies?
All of these companies aim to deploy machine learning to improve user experience while pursuing the elusive goal of fairness in their models. So far, moderation has been largely manual. This sudden turn of events has forced the organisations to take drastic measures, putting AI in the loop sooner than planned, even though that was always the end goal.
The goalposts have moved and the timelines have accelerated. So, if AI shows promising results in judging content for policy violations, will we still need human moderators? And if AI is given full control over our speech, are we willing to stake freedom of speech on algorithmic fairness?