Facebook has reportedly developed an AI tool for news summarisation called TL;DR that converts long news articles into short snippets.
The name TL;DR stands for "Too Long; Didn't Read", an abbreviation commonly used on the internet to comment on articles considered too long to read. The aim of the tool is to make such articles easier to consume.
Several organisations have come up with formats for fast consumption of news, such as bullet points, infographics, and short videos. Until now, however, these summaries have all been written or curated by humans.
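To make the idea concrete, automatic summarisation is often done extractively: each sentence is scored and only the top few are kept. Below is a minimal frequency-based sketch, purely illustrative; Facebook has not disclosed how TL;DR actually works, and modern systems use trained neural models rather than word counts.

```python
import re
from collections import Counter

def summarise(text, num_sentences=2):
    """Extractive summary: keep the sentences containing the most frequent words."""
    # Naive sentence split on punctuation; real systems use trained segmenters.
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'\w+', text.lower()))

    def score(sentence):
        # Average corpus frequency of the sentence's words.
        toks = re.findall(r'\w+', sentence.lower())
        return sum(freq[t] for t in toks) / max(len(toks), 1)

    ranked = sorted(sentences, key=score, reverse=True)[:num_sentences]
    # Preserve the original article order in the output.
    return ' '.join(s for s in sentences if s in ranked)
```

Even this toy version shows the core risk the article raises: whatever sentences the scorer drops are simply gone, context included.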
Thus, Facebook's use of AI to summarise news has raised several concerns. These are especially relevant given that Facebook has previously been unable to stop the spread of misinformation on its platforms.
Previous efforts by Facebook
Earlier this year, Sophie Zhang, a whistleblower who previously worked at Facebook, published a 6,600-word memo explaining how elections in foreign countries were being manipulated using inauthentic accounts and scripted activity on the platform, and how a lack of resources at Facebook allowed this to go unchecked.
During the 2016 US presidential election, the platform was blamed for influencing the outcome through the spread of misinformation; its news feed could not detect fake news effectively.
Even this year, amid the pandemic, Facebook failed to control the spread of medical misinformation on the platform. In fact, a report from the US-based non-profit activism group, Avaaz found that the spread of medical misinformation on Facebook far outstrips that of information from trustworthy sources.
A huge amount of information is shared on Facebook's platforms; however, previous efforts to flag, demote, or remove false information have not been implemented successfully. Facebook has thus not proven itself ready to host authentic news.
What could go wrong
Much of the news individuals consume today is chosen to validate their own biases. This problem is compounded by internet filter bubbles: states of intellectual isolation in which personalised searches and recommendations influence what a person sees. People who consume news to validate their biases are also quick to share it with others.
Firstly, if TL;DR cannot detect fake news or vet the sources from which its summaries are generated, summary snippets of fake stories could spread even faster among readers, especially within internet bubbles with shared interests or a common political leaning.
Secondly, AI models are trained to summarise news using past articles. However, language evolves very fast, and new slang and terms are introduced all the time, so the chances of these new terms being accounted for are low. This can change the meaning of a news summary, and even quotes could be taken out of context.
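The out-of-vocabulary problem can be seen with a toy example: a model whose vocabulary was frozen at training time maps any newer term to an unknown token, discarding its meaning. The vocabulary and the slang words below are hypothetical, chosen only for illustration.

```python
# Toy tokeniser with a vocabulary frozen at training time (hypothetical words).
VOCAB = {"the", "vaccine", "is", "effective", "new", "study", "says"}
UNK = "<unk>"

def encode(sentence):
    """Map each word to itself if known, otherwise to the unknown token."""
    return [w if w in VOCAB else UNK for w in sentence.lower().split()]

print(encode("The vaccine is effective"))  # every word is in the vocabulary
print(encode("The vaxx is sus"))           # newer slang collapses to <unk>
```

A summariser working from the second encoding has lost exactly the words that carried the sentence's tone, which is how quotes and claims end up misrepresented.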
Facebook also appears to be planning to add an AI-powered assistant to TL;DR that will answer questions and help clear up anything the reader is uncertain about. There are two concerns here: first, biased algorithms could provide biased answers to these questions; second, the assistant may not give correct answers when questions are poorly worded or ambiguous.
AI researchers have previously been able to 'fool' models that detect toxic comments. If publishers or content writers pushing propaganda manage the same with TL;DR, they could ensure their articles are picked up more often or made more visible in searches, despite being fake.
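The kind of evasion described above is easiest to see against a toy keyword filter: a single character substitution slips past exact-match detection. This is a deliberately simplified illustration, not how any production classifier works, but neural models have been shown to be vulnerable to analogous small perturbations.

```python
# Hypothetical blocklist standing in for a much more complex classifier.
BLOCKLIST = {"fake", "hoax"}

def is_flagged(text):
    """Flag the text if any blocklisted word appears verbatim."""
    return any(word in BLOCKLIST for word in text.lower().split())

print(is_flagged("this story is fake"))  # caught by exact match
print(is_flagged("this story is f4ke"))  # a one-character swap evades it
```

Adversarial attacks on real models work on the same principle: find an input change small enough to preserve meaning for humans but large enough to flip the model's decision.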
Every AI model has a margin of error. But an error on a critical topic like COVID-19 that leads to mass spread of misinformation can have unprecedented consequences. This also raises the question of which news sources Facebook should summarise from.
Lastly, the deployed system should be able to explain the decisions it makes while summarising articles. A lack of explainability will create even more problems.
Short summaries of articles can skip information or change context. Then again, readers of long articles tend to skim, missing much of the important information anyway, so news summarisation does have its advantages. But if AI is to be used for it, it is imperative to strike a balance between brevity and retaining enough content that important information is not left out.
Research on the use of AI in the news industry has always emphasised identifying tasks that are inherently human and cannot be automated. Hence, even if the task of summarisation is automated using AI, the tasks of verification and fact-checking should always remain with humans.