Last week, Microsoft’s news service MSN laid off contract-based news curators and replaced them with AI, sparking a debate on the fate of jobs in the face of automation. However, the decision to use AI backfired when one of the AI-curated articles misrepresented an interviewee.
As first reported by The Guardian, an early rollout of the software resulted in a story about the singer Jade Thirlwall’s personal reflections on racism being illustrated with a picture of her fellow band member Leigh-Anne Pinnock.
“@MSN If you’re going to copy and paste articles from other accurate media outlets, you might want to make sure you’re using an image of the correct mixed-race member of the group,” wrote Thirlwall on social media.
The AI curator even reported on its own misfiring, setting a new precedent for self-criticism in news circles. Adding to the irony, the article was then removed from the feed manually.
Can AI Be Made Accountable?
For the past couple of years, we have been hearing about the increased role of automated moderation on Twitter and Facebook. These large companies employ contract-based human moderators alongside algorithmic moderators to curb misinformation. The intent is to safeguard conversations; in other words, these platforms claim to play the role of gatekeepers of truth and of the right to speak.
So far, content sharing and moderation have been manual, and those who generate and curate content have faced public ire for misinformation and for silencing individuals to cater to the demands of other groups. Throughout this ordeal, the perpetrators and facilitators were named and shamed. There is no escape on social media: all our opinions are archived safely in the vaults of our adversaries. So people carry a subconscious burden of accountability for what they say and write. The incentive here is societal shame, which has guided civilisations so far.
Now, if news outlets start bringing AI into the mix and the algorithm misfires, as in the case of Thirlwall, it defeats the purpose of having AI in the first place: to offer an unbiased alternative to human reporters. Moreover, who would be held accountable when AI goes berserk? Is it the researcher who created the model, or the organisation that deploys it, or will AI get a free pass for being in a beta phase? When the United States Patent Office was approached to grant a patent to an AI for an invention, it rejected the claim, saying that a patent belongs to the inventor. A machine learning model is only as good as its architecture and the data it is fed (or trained on) by its human creator. So, should we single out the innovator, who probably had no real idea of where the model would be used? The role of accountability and integrity in the murky waters of journalism has always been questioned, and the role of AI should be to mitigate these challenges rather than facilitate them.
Why Human Vs AI Is A Flawed Narrative
Humans have historically been mass producers and consumers of propaganda. We are biased; every message comes with a subtle agenda, even this article. But what we, as humans, have is the ability to practise civility in the face of sensitivity. We understand cultural nuances and know where to draw the line. However, those sensibilities took a hit as the majority of conversations moved online. Now everyone with a Twitter account is their own reporter, a mini media house in operation. They can tweet anonymously from the safe havens of their couches without facing the consequences of their actions.
The advent of AI into the realm of free speech, too, has been a consequence of our lack of self-regulation on the internet. AI is here. We have to learn to live with it.
There are machine learning models, such as OpenAI’s GPT-3, that can write human-like news articles. So it won’t be long before we see AI replacing writers! But the idea of replacement and the fear of getting replaced are both blown out of proportion.
And for those who fear an AI takeover, there is nothing wrong with a tinge of paranoia. But AI can also accelerate the way we do our traditional jobs. For example, Microsoft, which has been criticised for its AI curation, has introduced the Content Insights & Discovery Accelerator, an AI tool for journalists and publishers from Microsoft News.
For a working journalist, MSN stated, researching a story could require mining decades of content across different kinds of media to gather historical context and insight that can be applied to today’s problems, or combing through a “data dump” for pressing, time-sensitive information. Exploring these data sets is painstakingly manual, time-consuming, resource-dependent and subject to human error or bias. AI is the right candidate in such scenarios. This is one of many examples of creating new opportunities by using AI as an assistant.
AI as we know it today is far from human-level cognition; right now it can only play the role of an assistant to a human creator. So a completely automated information curator cannot be justified, unless an organisation wants to hide behind the veil of AI by shifting the blame to a non-human entity.