Advertisement

This is the End of Twitter As We Know It

Twitter will soon be (or already is) flooded with misinformation. But is Musk ready to handle that?
This is the End of Twitter As We Know It
Listen to this story

Congratulations, Musk. By all means, Twitter is officially set to be “by far the most accurate source of information about the world”. 

Twitter has been one of the main sources of news for people to rely on. The platform could potentially ace TV channels or magazines that give users zero control over the opinions that they share.

But, what comes along as a catastrophe to this moderation of content is the vast sea of misinformation. If handled inadequately, the bird that Musk freed would probably get lost in this sea. 

It’s certain that advertisers and customers will flee from the platform. Users will eventually get bored of the $8 verified symbol. The authenticity of verified accounts will be lost. With difficulties around selling and monetising services on the platform, Musk must carry on to deliver what was promised. And if there’s one other problem that would be extremely hard to detect, it’d be the AI-based misinformation. 

Twitter will soon (might already) be flooded with misinformation. But is Musk ready to handle that? 

Village of misinformation

Misinformation has existed since day one and everybody knows that. There is no sense of novelty about the inability of humans to assess the source of truth. We don’t even need the best of technologies to spread lies or inaccurate information. However, it’s certain that AI and machine learning systems can enhance the capabilities of malicious information to unprecedented levels. 

Musk has spent this week twisting his new toy; raising questions, suspicions, and clusters of conflicts by its users. One of his main goals was the need to verify all human users by getting rid of the AI-powered “spam bots or die trying”.

At present, misinformation travels faster on Twitter—nearly eight times faster than Meta’s Facebook. “We found that falsehood diffuses significantly farther, faster, deeper, and more broadly than the truth, in all categories of information, and in many cases by an order of magnitude,” says Sinan Aral, Professor of MIT Sloan School of Management. 

Good at generating, but bad at detecting

According to Gary Marcus, the misinformation problem is set to worsen in the coming days. Generating misinformation has become easier than ever. As knockoffs of models such as GPT-3 are getting cheaper and freer to use, the cost of generating misinformation is expected to eventually be zero, thereby leading to the rise in the quantity of misinformation. 

The key issue lies in AI systems—particularly, large language models—that are well-advanced in generating misinformation but are bad at detecting it. 

Furthermore, a group of researchers have created a multilingual large language model that is bigger than GPT-3. The new model called BLOOM (BigScience Large Open-science Open-access Multilingual Language Model) was created by over 1,000 researchers who volunteered in a project called ‘BigScience’. 

Also read, GPT-3 Is Quietly Damaging Google Search

However, the model’s creators warn that it won’t be able to fix the problems around large language models—such as lack of adequate policies on data governance and the tendencies of algorithms to spew toxic content. 

So, in order to combat the pool of misinformation in platforms such as Twitter, the new Chief Twit will need advanced tools for regulation of online content. 

From facing threats of deepfakes to fighting biases on text-to-image models, AIs can generate results without having an actual ground in the real world. New generative models are crippled every other second but the researchers are lagging behind on detecting and finding measures to handle them. 

Recommendation systems are majorly driven by AI algorithms deciding what users expect to see on their feeds. The problem arose with Musk’s recent decision to remove identity verification. The CEO also fired the team that was in charge of algorithmic responsibility at Twitter.

The maelstrom of news and one-sided decisions bring concerns about Twitter and if it will even survive this rampage. 

Gary Marcus’ twitter post said, “If you are serious @elonmusk about trying to make Twitter the most accurate source of information, we should talk. And you need to start by understanding the core technical issue.”

Source: Twitter

Marcus further elaborated that language learning models like GPT-3 are very good at generating misinformation. Truthful questions such as ‘Who really caused 9/11?’ were asked, to which the models replied with a false answer—‘The US government caused 9/11’.

With the pace at which misinformation is getting picked at, Twitter’s existing effort ‘Community Notes’ (Birdwatch)—executed manually by humans—is sure to bite the dust.

Gamble of $8 verified badge

Furthermore, the new blue tick theoretically becomes an effective barrier to large-scale operations on misinformation. 

Until a couple of days ago, the company’s ML, Ethics, Transparency, and Accountability (META) team was in charge of keeping the algorithm under control. It was an initiative to ensure that  fairness and transparency are maintained throughout the social media platform. 

One of the team members says, “We’re building explainable ML solutions so you can better understand our algorithms, what informs them, and how they impact what you see on Twitter. Similarly, algorithmic choice will allow people to have more input and control in shaping what they want Twitter to be for them.”

Meanwhile, a twitter employee tweeted:

The team empowered users to prevent any harm caused by the algorithms. What seemed like reckless behaviour is when Musk axed the entire META team—without any prior notice.

The problem of misinformation existed long before Musk reigned over Twitter. But how will the platform detect this misinformation? Who is taking the responsibility to tailor the recommendation algorithm to ensure that the ‘free speech’ remains accountable? A Twitterati can get the verification badge of honour for $8. But is it not over the cost of true information? 

Download our Mobile App

Bhuvana Kamath
I am fascinated by technology and AI’s implementation in today’s dynamic world. Being a technophile, I am keen on exploring the ever-evolving trends around applied science and innovation.

Subscribe to our newsletter

Join our editors every weekday evening as they steer you through the most significant news of the day.
Your newsletter subscriptions are subject to AIM Privacy Policy and Terms and Conditions.

Our Upcoming Events

15th June | Online

Building LLM powered applications using LangChain

17th June | Online

Mastering LangChain: A Hands-on Workshop for Building Generative AI Applications

Jun 23, 2023 | Bangalore

MachineCon 2023 India

26th June | Online

Accelerating inference for every workload with TensorRT

MachineCon 2023 USA

Jul 21, 2023 | New York

Cypher 2023

Oct 11-13, 2023 | Bangalore

3 Ways to Join our Community

Telegram group

Discover special offers, top stories, upcoming events, and more.

Discord Server

Stay Connected with a larger ecosystem of data science and ML Professionals

Subscribe to our Daily newsletter

Get our daily awesome stories & videos in your inbox
MOST POPULAR

Is Sam Altman a Hypocrite? 

While on the one hand, Altman is advocating for the international community to build strong AI regulations, he is also worried when someone finally decides to regulate it