This Million-Dollar Challenge May Put An End To All Deepfakes


The world has become more connected than ever before. An opinion can fly across continents in seconds, and a revolution can be sparked by remote actors in a matter of hours. As the technology keeps improving, fraudulent actors keep finding new ways to up the ante.



Most platform owners wrestle with the after-effects of an attack. Identifying the perpetrators and deterring them from illicit, immoral social engineering requires automated detection techniques.

“Deepfake” techniques, which present realistic AI-generated videos of real people doing and saying fictional things, have significant implications for determining the legitimacy of information presented online. Yet the industry doesn’t have a great data set or benchmark for detecting them.

Although Deepfakes may look realistic, the fact that they are generated by an algorithm rather than captured by a camera from real events means they can still be detected and their provenance verified.

Several promising new methods for spotting and mitigating the harmful effects of Deepfakes are coming on stream, including procedures for adding ‘digital fingerprints’ to video footage to help verify its authenticity. As with any complex problem, combating their negative impact requires a joint effort from the technical community, government agencies, media, platform companies, and online users.
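The article does not specify how such fingerprints are computed, but the basic idea is easy to illustrate. The minimal sketch below, assuming OpenCV for frame decoding and SHA-256 as the digest (the video_fingerprint helper is invented for this example, not any vendor's actual scheme), produces a fingerprint that changes if even a single pixel of the footage is later altered:

```python
# A minimal sketch of one fingerprinting idea: hash the decoded frames so
# that any later tampering changes the fingerprint. OpenCV and SHA-256 are
# illustrative choices, not the method any particular scheme uses.
import hashlib

import cv2  # pip install opencv-python


def video_fingerprint(path: str) -> str:
    """Return a SHA-256 digest computed over every decoded frame."""
    digest = hashlib.sha256()
    capture = cv2.VideoCapture(path)
    while True:
        ok, frame = capture.read()
        if not ok:  # end of stream
            break
        digest.update(frame.tobytes())  # raw pixel bytes of this frame
    capture.release()
    return digest.hexdigest()
```

Publishing such a digest alongside the footage lets anyone recompute it and confirm the pixels have not changed. A cryptographic hash like this is brittle, though: benign re-encoding also changes it, which is one reason research in this area also explores perceptual hashes and embedded watermarks.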

To thwart the unwanted consequences of Deepfakes, Facebook, the Partnership on AI, Microsoft, and academics from Cornell Tech, MIT, University of Oxford, UC Berkeley, University of Maryland, College Park, and University at Albany-SUNY are coming together to build the Deepfake Detection Challenge (DFDC).

What Is This Challenge?

The goal of the challenge is to produce technology that everyone can use to better detect when AI has been used to alter a video in order to mislead the viewer. The Deepfake Detection Challenge will include a data set and leaderboard, as well as grants and awards, to spur the industry to create new ways of detecting and preventing media manipulated via AI from being used to mislead others. 

The governance of the challenge will be facilitated and overseen by the Partnership on AI’s new Steering Committee on AI and Media Integrity, which is made up of a broad cross-sector coalition of organisations including Facebook, WITNESS, Microsoft, and others in civil society and the technology, media, and academic communities.

To ensure the quality of the data set and challenge parameters, they will initially be tested through a targeted technical working session this October at the International Conference on Computer Vision (ICCV). 

The full data set release and the DFDC launch will happen at the Conference on Neural Information Processing Systems (NeurIPS) this December. Facebook will also enter the challenge but not accept any financial prize. 

The challenge will launch in late 2019 with the release of the dataset and will run through the end of March 2020. Participants can download the dataset to train their models, and entrants will then submit their code into a black-box environment for testing.

The challenge will be global, and participants will need to agree to the dataset license before taking part.
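The announcement does not describe the black-box interface, so the sketch below is purely hypothetical: a submission exposing a single entry point, detect_fake, that samples frames from an unseen video and averages the scores of a per-frame classifier. Both detect_fake and the frame_score stub are invented names; a real entrant would replace the stub with a model trained on the released dataset:

```python
# Hypothetical skeleton of a DFDC-style submission: score sampled frames
# with a classifier and average the results into one fake-probability.
import cv2  # pip install opencv-python


def frame_score(frame) -> float:
    """Placeholder for a trained per-frame classifier (e.g. a CNN)."""
    return 0.5  # stub: a real model would return P(frame is manipulated)


def detect_fake(video_path: str, sample_every: int = 10) -> float:
    """Return an estimate of P(video is a Deepfake) in [0, 1]."""
    capture = cv2.VideoCapture(video_path)
    scores, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:  # end of stream
            break
        if index % sample_every == 0:  # subsample frames for speed
            scores.append(frame_score(frame))
        index += 1
    capture.release()
    return sum(scores) / len(scores) if scores else 0.5
```

Per-frame scoring with aggregation is a common baseline pattern in video forgery detection, since many Deepfake artefacts (blending boundaries, inconsistent lighting) are visible in individual frames.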


Imagining A Future Devoid Of Electronic Manipulation

There is a lot of misinformation in circulation, and it gets worse with the popularity of the entities involved. This virtual wildfire decouples users from the truth, and they usually end up in their own echo chambers. As the world media's attention shifts towards the elections in the US, there will probably be attempts at foul play, so having readily available tools to verify content is almost mandatory.

Industry experts like Prof. Philip Torr from the University of Oxford believe that manipulated media put out on the internet, to create bogus conspiracy theories and to manipulate people for political gain, is becoming an issue of global importance. It is a fundamental threat to democracy, and hence to freedom.

This is a constantly evolving problem, much like spam or other adversarial challenges, and the hope is that by bringing the industry and the AI community together, faster progress can be made.

Identifying tampered content is technically challenging as Deepfakes rapidly evolve, so it is necessary for AI research pioneers to join hands to build better detection tools.

Challenges like the Deepfake Detection Challenge look promising as they are designed to incentivise rapid progress in this area by inviting participants to compete to create new ways of detecting and preventing tampered media. This not only opens the field to new ideas but also creates awareness amongst the machine learning community about the challenges of keeping adversarial attacks at bay.

