AI In Charge Of Your Tweets As COVID-19 Forces Remote Work

As news of COVID-19 reached the West, tech companies were quick to take precautions by asking their employees to work from home. And since the content moderators who flag tweets and images for objectionable material have been asked to stay home, all major social media companies have announced increased AI-based filtering of content.

Right now, we might be witnessing a pivotal moment in the relationship between humans and machines. Algorithms will predominantly decide what can be said and what cannot. Though the absence of a human moderator can curb bias in these decisions, there may be accidental deletions of non-malicious content that put the whole notion of freedom of speech at risk.

Social media platforms have become the megaphones of human thought. Twitter, especially, has become the go-to platform for seamless information sharing across the globe. From presidents and army generals to medical practitioners and armchair critics, Twitter has connected the world in an unprecedented way, which makes its policy updates critical. This time, however, the policy change has been forced by an unforeseen event: COVID-19.

Twitter’s policy team, led by Vijaya Gadde, has explained the reasons behind the policy updates in a post.

The team announced that it will increase the use of machine learning and automation to take a wide range of actions on potentially abusive and manipulative content.

However, the team has assured users that it will not permanently suspend any accounts based solely on its automated enforcement systems. Instead, it promises to look for opportunities to build in human review checks where they will be most impactful.

Others Follow Suit

YouTube had already made its intentions clear earlier this year, when it halved the number of conspiracy-theory videos by relying more on AI to moderate. Now, with remote work enforced, it has opted to use AI for video review in place of humans.

This means automated systems will start removing some content without human review: videos may be taken down from the site purely because AI flagged them for potential policy violations, whereas they would normally be routed to a human reviewer to confirm that they should be taken down.

YouTube assures creators that they can still appeal a takedown, but warns that this process, too, will be delayed because of the reduction in human moderators.

Meanwhile, Facebook’s CEO, Mark Zuckerberg, answered various questions about how Facebook is adjusting to the recent changes in a conference call held on March 18. Asked about the impact of making content moderators work from home, Zuckerberg explained why AI would play a key role in flagging content, citing the emotional health of remotely working content moderators as one reason.

The content that usually gets flagged ranges from suicide and terrorism to fake news. Dealing with such material can be emotionally draining for moderators. But machine learning algorithms can falter too: they can ban something benign simply because the content matched certain patterns the algorithm checks for.
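A toy sketch can illustrate how this kind of false positive arises. The blocklist and function below are hypothetical, not any platform's real system; production moderation uses trained classifiers, but the failure mode is the same: a surface-level match on otherwise benign content.

```python
# Toy illustration (not any platform's actual system): a naive
# keyword-based flagger that shows how automated moderation can
# catch benign posts along with harmful ones.

FLAGGED_TERMS = {"attack", "bomb", "kill"}  # hypothetical blocklist

def is_flagged(post: str) -> bool:
    """Flag a post if it contains any term on the blocklist."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return bool(words & FLAGGED_TERMS)

# A genuinely threatening post is caught...
print(is_flagged("We will attack at dawn"))
# ...but so is a harmless remark about a movie, a false positive
# that a human reviewer would normally rescue.
print(is_flagged("That plot twist was a bomb, loved it!"))
```

Both calls return `True`, even though only the first post is malicious, which is exactly the trade-off the companies describe.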

And the companies are willing to trade off accidental takedowns of good posts for the swift removal of obviously malicious content.

However, the Facebook team says it is working round the clock to restore content that has been taken down in error.

COVID-19 has had some irreversible consequences. Markets can recover and companies can be bailed out, but what about policies?

All of these companies aim to deploy machine learning to improve user experience while pursuing the elusive goal of fairness in their models. Until now, moderation has been largely manual. This sudden turn of events has forced the organisations into drastic measures, and AI is now in the loop, which was the end goal all along.

The goalposts have moved, and timelines have accelerated. So, if AI shows promising results in judging content for policy violations, will we still need human moderators? And if AI is given full control of our speech, should we trade algorithmic fairness for freedom of speech?


Ram Sagar
I have a master's degree in Robotics and I write about machine learning advancements.
