With each passing day, deepfake technology is becoming more sophisticated. Using artificial intelligence, one can alter familiar faces and voices to create convincing – and false – video and audio. Imagine a presidential candidate speaking on the campaign trail: a malicious actor could alter the footage so that the candidate appears to deliver a dialogue that is entirely fabricated.
Misinformation made possible through deepfakes is not always evident to an average user. This is why social media companies are looking for ways to curb deepfakes, and in the race to stop the menace, Facebook has made the first move.
In a milestone move, Facebook recently stated that malicious deepfakes are being banned on the platform due to their potential to mislead users. The announcement came in the same week that Facebook’s vice president of global policy management, Monika Bickert, was due to appear before the US House Committee on Energy and Commerce.
Facebook isn’t banning all deepfake videos: it will permit deepfakes used for parody or satire, and will also allow clips edited only to cut out or change the order of words. These exceptions could open up a grey area in which fact-checkers must decide which content is allowed and which is deleted, and there is still widespread concern that the policy will not prevent the kinds of abuse seen in the past.
Facebook’s New Strategy For Deepfakes
Facebook has addressed how it will tackle both deepfakes and other kinds of manipulated media content. The social media giant stated that its strategy has several components, from investigating AI-generated content and malicious behaviours such as fake accounts that spread misinformation, to partnering with academic organisations, governments and the tech industry to expose the entities behind these efforts. Facebook has also announced that it is engaging more than 50 global experts with technical, policy, media, legal, civic and academic backgrounds to improve the science of detecting manipulated media.
From now onward, Facebook will delete misleading manipulated content if it meets two specific criteria:
1) The video or audio content has been edited or synthesised beyond adjustments for clarity or quality, in a manner that is not apparent to an average person and would likely mislead people into believing that a subject of the video said something they did not actually say.
2) It is the result of artificial intelligence/machine learning that combines, replaces or superimposes content on a video to make it seem genuine when, in fact, it is not.
There are exceptions to these criteria: they do not cover content intended as parody or satire, or video modified only to omit or change the order of words that were actually said. However, video content that does not meet the standards for removal can still be reviewed by Facebook’s independent third-party fact-checkers — more than 50 fact-checking partner organisations across the globe working in over 40 languages.
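Taken together, the two criteria and their exceptions amount to a simple decision rule: both criteria must hold for removal, and either exception overrides them. As a hypothetical illustration only — the field names below are ours, not Facebook’s, and the real review process involves human fact-checkers — the logic could be sketched as:

```python
from dataclasses import dataclass


@dataclass
class Video:
    """Illustrative flags a reviewer might record about a clip."""
    edited_beyond_clarity: bool          # criterion 1: edits go beyond clarity/quality
    likely_misleads: bool                # criterion 1: viewer would be misled
    ai_generated: bool                   # criterion 2: product of AI/ML manipulation
    parody_or_satire: bool = False       # exception
    word_order_edit_only: bool = False   # exception


def should_remove(video: Video) -> bool:
    # Exceptions: parody/satire and pure word-order edits stay up,
    # though they may still be routed to third-party fact-checkers.
    if video.parody_or_satire or video.word_order_edit_only:
        return False
    # Removal requires BOTH criteria to be met.
    return (video.edited_beyond_clarity
            and video.likely_misleads
            and video.ai_generated)
```

Under this sketch, a misleading AI-synthesised clip would be removed (`should_remove(Video(True, True, True))` is `True`), while the same clip flagged as satire would stay up — which is exactly the grey area critics point to.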
According to Facebook, even if it removed all manipulated videos its fact-checkers labelled as false, those videos would still be available elsewhere on the web. By making exceptions for parody and satire, as well as content jumbled for entertainment purposes, it avoids thwarting freedom of expression. Facebook says it will instead label such video and audio content as false to give people the right context and information.
On the technology side, Facebook had earlier started a first-of-its-kind campaign to identify deepfakes. It launched the Deepfake Detection Challenge, which has spurred researchers around the world to build open-source tools for detecting deepfakes. The project is supported by $10 million in grants and a cross-sector coalition of organisations including the Partnership on AI, Cornell Tech, the University of California, Berkeley, MIT, WITNESS, Microsoft, the BBC and AWS, along with civil society and the technology, media and academic communities. The Deepfake Detection Challenge is live on Kaggle. According to Facebook, this rapidly evolving technology presents a huge detection challenge, and no single entity can solve the problem on its own.
What Are The Issues With Facebook’s New Deepfake Policy?
Facebook’s new policy on deepfakes has not gone down well with lawmakers and experts. Malicious actors from Russia have previously sought to spread misinformation and exploited loopholes to weaponise social media, and the concern is that the same could happen under Facebook’s new deepfake policy.
One of the most well-known and controversial examples of manipulated media was the altered video of Speaker of the US House of Representatives Nancy Pelosi, which has gathered more than 2.4 million views on Facebook since it was posted last May. Pelosi’s speech was slowed and distorted to make her sound drunk, but the effect was created with relatively simple video-editing software, making it a “cheapfake” rather than a deepfake covered by Facebook’s new rule. That kind of video would still be allowed: Facebook said it would fact-check it but wouldn’t outright ban it, and such content could plague the 2020 elections.
According to experts, while Facebook’s effort to create a coherent policy on misinformation is a positive step, the policy may be too narrowly construed. It has also raised questions about why the company decided to focus on deepfakes rather than the broader issue of intentionally misleading videos. Recently, even the Democratic presidential candidate Joe Biden said the company’s deepfake policy only provides the illusion of progress.
Since cheapfake videos are as misleading as those made with more sophisticated techniques, Facebook must address the real issue of misleading content rather than focusing on the technology behind it, digital forensics expert Hany Farid told the Washington Post.
Because Facebook users are still allowed to post deepfake videos under the guise of satire or parody, the issue remains largely open-ended. Lesser forms of manipulation are also permitted, and experts worry that malicious actors can still exploit these loopholes to spread misinformation. Facebook, for its part, does not want to become an arbiter of speech and wants to let people express themselves through satirical deepfake content, which is fair under freedom of expression. The challenge, of course, lies in determining which content qualifies as parody or satire, and which is misinformation.