
AI In Charge Of Your Tweets As COVID-19 Forces Remote Work


As news of COVID-19 hit the shores of the West, tech companies were quick to take precautions by asking their employees to work from home. And since the content moderators who flag tweets and images have also been asked to stay home, all the major social media companies have announced increased AI-based filtering of content.

Right now, we might be witnessing a pivotal moment in the relationship between humans and machines. Algorithms will predominantly decide what can and cannot be said. Though the absence of a human moderator can curb bias in these decisions, there may be accidental deletions of non-malicious content that put the whole notion of freedom of speech at risk.

Social media platforms have become the megaphones of human thought. Twitter, especially, has become the go-to platform for seamless information sharing across the globe. From presidents and army generals to medical practitioners and armchair critics, Twitter has managed to connect the world in an unprecedented way. So, policy updates are critical. This time, however, the policy change has been forced by an unforeseen event: COVID-19.

Twitter’s policy team, led by Vijaya Gadde, has explained the reasons behind the policy updates in a post.

The team announced that it would be increasing the use of machine learning and automation to take a wide range of actions on potentially abusive and manipulative content.

However, the team has assured users that no account will be permanently suspended based solely on its automated enforcement systems. Instead, it promises to look for opportunities to build in human review checks where they will be most impactful.

Others Follow Suit

YouTube had already made its intentions clear earlier this year, when it halved the number of conspiracy-theory videos on the platform by relying more heavily on AI for moderation. Now, with remote work enforced, it has opted to use AI in place of humans for video review as well.

This means videos may be taken down from the site purely because AI flags them for a potential policy violation, where they would normally be routed to a human reviewer to confirm the takedown. In other words, automated systems will start removing some content without human review.

However, YouTube assures creators that they can still appeal if a video is taken down, but warns that this process will also be delayed because of the reduction in human moderators.

Facebook’s CEO, Mark Zuckerberg, in a conference call held on March 18, answered various questions about how Facebook is adjusting to the recent changes. Asked about the impact of making content moderators work from home, he explained why AI would be playing a key role in flagging content, citing the emotional health of remotely working moderators as one reason.

The content that usually gets flagged ranges from suicide to terrorism to fake news, and dealing with it can be emotionally draining for moderators. But machine learning algorithms, too, can falter: they can take down something benign simply because it matches patterns the model associates with harmful content.
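The false-positive problem can be illustrated with a toy keyword filter. This is a purely hypothetical sketch for illustration; real platforms use far more sophisticated classifiers, and the term list and function below are invented for this example.

```python
# Toy illustration of how naive automated flagging can misfire.
# Hypothetical sketch only -- not any platform's actual system.

BLOCKED_TERMS = {"attack", "bomb", "kill"}  # invented rule list

def flag_post(text: str) -> bool:
    """Flag a post if it contains any blocked term (naive keyword match)."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & BLOCKED_TERMS)

# A genuinely threatening post is caught...
print(flag_post("We will attack them tonight"))            # True

# ...but so is a benign post about a video game: a false
# positive that a human reviewer would likely have cleared.
print(flag_post("That boss fight was a bomb, loved it!"))  # True
```

The second post is exactly the kind of benign content that an automated system removes and a human reviewer would restore on appeal.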

And the companies are willing to accept the accidental takedown of good posts in exchange for the swift removal of obviously malicious content.

However, the Facebook team is working round the clock to restore content that has been pulled down inappropriately.

The impact of COVID-19 has some irreversible consequences. Markets can recover, companies can be bailed out, but what about policies?

The goal of all these companies has been to deploy machine learning to improve user experience while pursuing the elusive goal of fairness in models. So far, moderation has been largely manual. This sudden turn of events has forced organisations to take drastic measures, and now we have AI in the loop, which was arguably the end goal all along.

The goalposts have moved, and the timelines have been accelerated. So, if AI shows promising results in judging content for policy violations, will we still need human moderators? And if AI is given full control over our speech, should we risk algorithmic fairness for freedom of speech?


Ram Sagar

I have a master's degree in Robotics and I write about machine learning advancements.