Facebook Loves Self-Supervised Learning. Period.

Facebook believes that self-supervision is one step on the path to human-level intelligence.

The influence of Facebook’s chief AI scientist Yann LeCun seems to have rubbed off on the team, which has taken a path less travelled – a journey towards self-supervision. This approach does not rely on data that has been labelled by humans for training purposes, or even on weakly supervised data such as images and videos with public hashtags; instead, self-supervision takes advantage of entirely unlabelled data.
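To make the idea concrete, here is a minimal, purely illustrative sketch of a self-supervised pretext task in PyTorch: the training signal comes from values hidden in the data itself rather than from human labels. The model, sizes and masking ratio below are arbitrary assumptions, not Facebook’s method.

```python
import torch
import torch.nn as nn

# Toy encoder-decoder; layer sizes are arbitrary, for illustration only.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 32))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(100):                       # training loop over random "unlabelled" data
    x = torch.randn(16, 32)                # a batch of unlabelled inputs
    mask = torch.rand_like(x) < 0.15       # randomly hide ~15% of the values
    corrupted = x.masked_fill(mask, 0.0)   # the model only sees the corrupted view
    pred = model(corrupted)
    loss = ((pred - x)[mask] ** 2).mean()  # the "labels" are the hidden values themselves
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```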

Over the years, what was once a research strategy for Facebook AI teams has turned into an area of scientific breakthrough. The company has been reporting strong internal results, with some of its self-supervised language understanding models, libraries, frameworks, and experiments consistently beating traditional systems and fully supervised models.

For instance, its pre-trained language model XLM, first introduced in 2019, is accelerating important applications at Facebook today, such as proactive hate speech detection. Its successor XLM-R, which uses the RoBERTa architecture, improves hate speech classifiers in multiple languages across Facebook and Instagram.
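Facebook’s production classifiers are not public, but XLM-R checkpoints are available through Hugging Face Transformers. As a rough sketch only, fine-tuning the public xlm-roberta-base model for a two-class text classification task might look like this (the example texts and labels are placeholders):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load the publicly released XLM-R base checkpoint with a fresh 2-class head.
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=2
)

# Placeholder multilingual examples and labels, for illustration only.
texts = ["an example post in English", "un exemple de message en français"]
labels = torch.tensor([0, 1])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
outputs = model(**batch, labels=labels)   # returns loss and per-class logits
outputs.loss.backward()                   # one backward pass of a training loop
```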


Facebook AI Research has made significant strides in self-supervised learning over the last two years, with techniques such as MoCo, Textless NLP, DINO, 3DETR, and DepthContrast.
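Several of these models have been released publicly. As one hedged example, the facebookresearch/dino repository exposes its self-supervised ViT backbones through torch.hub, so extracting image features could look roughly like this (the random tensor stands in for a preprocessed image):

```python
import torch

# Load a self-supervised DINO ViT-S/16 backbone via the facebookresearch/dino
# torch.hub entry point (pre-trained weights are downloaded on first use).
model = torch.hub.load("facebookresearch/dino:main", "dino_vits16")
model.eval()

# Stand-in for a normalised 224x224 RGB image; real use needs ImageNet-style
# resizing and normalisation.
image = torch.randn(1, 3, 224, 224)

with torch.no_grad():
    features = model(image)   # one embedding per image
print(features.shape)         # torch.Size([1, 384]) for ViT-S/16
```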

Here’s a timeline of Facebook’s journey towards self-supervised learning, highlighting some of the key milestones where it has implemented self-supervised methods in one way or another. 

[Timeline: key milestones in Facebook’s adoption of self-supervised learning]

The Chilling Effect 

Facebook is currently exploring self-supervised learning in various fields, including robotics, visual reasoning, and dialogue systems. The company believes these efforts will help it further improve the tools that keep people safe on its platform, help people connect across different languages, and advance AI in new ways.

However, the recent whistleblower and outage controversies suggest otherwise. Facebook whistleblower Frances Haugen recently claimed that the company puts profits over people’s safety. In response, Facebook chief Mark Zuckerberg said in a blog post that many of the claims the whistleblower made, based on the documents she leaked, ‘do not make any sense.’ “If we wanted to ignore research, why would we create an industry-leading research programme to understand these crucial issues in the first place?” he added.

Further, Zuckerberg asked why, if the company did not care about fighting harmful content, it would employ so many people dedicated to it, more than other companies in the space, even those bigger than Facebook. “If we wanted to hide our results, why would we have established an industry-leading standard for transparency and reporting on what we are doing?”

Moving past the controversy, Facebook AI research scientist Alex Berg noted two years ago that face recognition approaches, for example, have become surprisingly accurate and robust, to the point where face verification is sometimes used as a primary method to unlock mobile phones. Facebook is working on self-supervision in this area, where an algorithm could identify potential attributes and learn to recognize them without supervision.

Towards Human-Level Intelligence 

Today, Facebook has become synonymous with self-supervision – perhaps the most important frontier of artificial intelligence – replacing supervised learning, which is limited by the availability of labelled data, with self-supervised learning on effectively unlimited unlabelled data.

Interestingly, Facebook has also created a lot of buzz around self-supervised learning, to the extent that it appears ahead of its peers – Microsoft, Amazon, Google and DeepMind – at least on the basis of the self-supervised learning work and experiments each company has published on its website, as the graph below illustrates.

[Chart: self-supervised learning work published by Facebook versus Microsoft, Amazon, Google and DeepMind]

Since its inception in 2013, Facebook AI Research (FAIR) has continued to expand its research efforts in self-supervised learning, training machines to reason, and training them to plan and conceive complex sequences of actions, all through open scientific research. Facebook believes that self-supervision is one step on the path to human-level intelligence, and that in the long run the progress will be cumulative.

Amit Raja Naik
Amit Raja Naik is a seasoned technology journalist who covers everything from data science to machine learning and artificial intelligence for Analytics India Magazine, where he examines the trends, challenges, ideas, and transformations across the industry.
