
This is the End of Twitter As We Know It

Twitter will soon be (or already is) flooded with misinformation. But is Musk ready to handle that?


Congratulations, Musk. Twitter is now officially set to become “by far the most accurate source of information about the world”. 

Twitter has long been one of the main sources of news that people rely on. The platform could potentially outdo TV channels and magazines, which give users zero control over the opinions they broadcast.

But what accompanies this freedom from moderation is a catastrophe: a vast sea of misinformation. Handled inadequately, the bird that Musk freed could easily get lost in it. 

Advertisers and customers are certain to flee the platform. Users will eventually tire of the $8 verified symbol, and the authenticity of verified accounts will be lost. Even with the difficulty of selling and monetising services on the platform, Musk must still deliver what was promised. And if one problem will be extremely hard to detect, it is AI-based misinformation. 

Twitter will soon be (or might already be) flooded with misinformation. But is Musk ready to handle it? 

Village of misinformation

Misinformation has existed since day one, and everybody knows it. There is nothing novel about humans’ inability to assess the truth of a source, and spreading lies or inaccurate information does not require the best of technologies. What is certain, however, is that AI and machine learning systems can amplify malicious information to unprecedented levels. 

Musk has spent the week tinkering with his new toy, raising questions, suspicions, and clusters of conflict among its users. One of his main goals is to verify all human users by getting rid of the AI-powered spam bots, which he has vowed to defeat “or die trying”.

At present, misinformation travels nearly eight times faster on Twitter than on Meta’s Facebook. “We found that falsehood diffuses significantly farther, faster, deeper, and more broadly than the truth, in all categories of information, and in many cases by an order of magnitude,” says Sinan Aral, a professor at the MIT Sloan School of Management. 

Good at generating, but bad at detecting

According to Gary Marcus, the misinformation problem is set to worsen in the coming days. Generating misinformation has become easier than ever: as knockoffs of models such as GPT-3 become cheaper and more freely available, the cost of generating misinformation is expected to approach zero, driving up its sheer quantity. 

The key issue is that AI systems, particularly large language models, are highly advanced at generating misinformation but bad at detecting it. 

Meanwhile, a group of researchers has created a multilingual large language model that is bigger than GPT-3. The model, called BLOOM (BigScience Large Open-science Open-access Multilingual Language Model), was built by over 1,000 researchers who volunteered for a project called ‘BigScience’. 


However, the model’s creators warn that it won’t fix the problems that plague large language models, such as the lack of adequate data-governance policies and the tendency of these algorithms to spew toxic content. 

So, to combat the pool of misinformation on platforms such as Twitter, the new ‘Chief Twit’ will need advanced tools for regulating online content. 

From the threat of deepfakes to the biases of text-to-image models, AI systems can generate results with no actual grounding in the real world. New generative models crop up every other second, while researchers lag behind in detecting their output and devising measures to handle it. 

Recommendation systems are largely driven by AI algorithms that decide what users see on their feeds. The problem deepened with Musk’s recent decision to remove identity verification. The CEO also fired the team in charge of algorithmic responsibility at Twitter.

This maelstrom of news and unilateral decisions raises concerns about whether Twitter will even survive the rampage. 

In a Twitter post, Gary Marcus wrote: “If you are serious @elonmusk about trying to make Twitter the most accurate source of information, we should talk. And you need to start by understanding the core technical issue.”


Marcus further elaborated that large language models like GPT-3 are very good at generating misinformation. Asked questions designed to probe truthfulness, such as ‘Who really caused 9/11?’, the models replied with false answers like ‘The US government caused 9/11’.

At the pace misinformation is piling up, Twitter’s existing effort, ‘Community Notes’ (formerly Birdwatch), which relies on manual human review, is sure to bite the dust.

Gamble of $8 verified badge

In theory, the new paid blue tick could act as a barrier to large-scale misinformation operations, since every account now carries a cost. 

Until a couple of days ago, the company’s ML Ethics, Transparency, and Accountability (META) team was in charge of keeping the algorithm in check. The initiative was meant to ensure that fairness and transparency were maintained across the social media platform. 

As one team member put it, “We’re building explainable ML solutions so you can better understand our algorithms, what informs them, and how they impact what you see on Twitter. Similarly, algorithmic choice will allow people to have more input and control in shaping what they want Twitter to be for them.”

The team empowered users to prevent harm caused by the algorithms. In a move that seemed plainly reckless, Musk axed the entire META team without any prior notice.

The problem of misinformation existed long before Musk reigned over Twitter. But how will the platform detect it now? Who takes responsibility for tailoring the recommendation algorithm so that ‘free speech’ remains accountable? Any Twitterati can now buy the verification badge of honour for $8. But does it not come at the cost of true information? 


Bhuvana Kamath

I am fascinated by technology and AI’s implementation in today’s dynamic world. Being a technophile, I am keen on exploring the ever-evolving trends around applied science and innovation.