Earlier this year Facebook demonstrated how serious it is about AI by open-sourcing its NLP toolkit LASER. Now, at the recently concluded two-day F8 Developer Conference in San Jose, California, Facebook dished out more updates for machine learning developers.
At Facebook, ML is used to discover patterns in code and to build tools that improve developer productivity through code search, code recommendation and automatic bug fixing. An AI-enabled adaptive experimentation approach is also deployed to optimize products, infrastructure, machine learning models and marketing campaigns.
On the second day of the event, Chief Technology Officer Mike Schroepfer, along with his AI team, talked about the AI tools Facebook is using to address a range of challenges across many of its products.
Here are a few highlights:
Facebook started out very early in the domain of face recognition with machine vision. Its auto-tagging photo feature uses convolutional neural networks (CNNs), and these networks got better as people shared billions of photos over more than half a decade.
These computer vision systems have recognised progressively more image components over the years, and they can now detect objects in both the foreground and the background with a single network. This results in a better understanding of a photo's overall context, as well as more computationally efficient image recognition.
This was achieved using a new approach to object recognition, called a panoptic feature pyramid network (Panoptic FPN), which enables instance segmentation tasks (for the foreground) and semantic segmentation tasks (for the background) at the same time, on a single, unified neural architecture.
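To make the idea concrete, here is a toy sketch of what a panoptic output looks like: instance ("thing") masks for foreground objects are overlaid on a semantic ("stuff") map for the background, giving exactly one label per pixel. This is an illustration of the concept only, not Facebook's Panoptic FPN implementation; the function name and label encoding are invented for the example.

```python
# Toy illustration of panoptic segmentation output (not Panoptic FPN itself):
# a semantic map assigns a background "stuff" class to every pixel, and
# instance masks mark foreground "thing" objects. The panoptic result
# combines both, with foreground instances taking precedence.

def merge_panoptic(semantic_map, instance_masks):
    """Overlay instance masks on a semantic map.

    semantic_map: 2D list of stuff-class ids (e.g. 0 = sky, 1 = road).
    instance_masks: list of (instance_id, 2D boolean mask) pairs.
    Returns a 2D list where each cell is ("stuff", class_id) or
    ("thing", instance_id).
    """
    h, w = len(semantic_map), len(semantic_map[0])
    out = [[("stuff", semantic_map[y][x]) for x in range(w)] for y in range(h)]
    for inst_id, mask in instance_masks:
        for y in range(h):
            for x in range(w):
                if mask[y][x]:
                    out[y][x] = ("thing", inst_id)  # foreground wins
    return out

semantic = [[0, 0, 1],
            [1, 1, 1]]
person = (7, [[False, True, False],
              [False, True, False]])
panoptic = merge_panoptic(semantic, [person])
print(panoptic[0][1])  # ('thing', 7): the person instance overrides the sky
print(panoptic[0][0])  # ('stuff', 0): plain background pixel
```

In the real Panoptic FPN, both the instance and semantic predictions come from branches of a single shared feature pyramid network, which is what makes the unified prediction computationally efficient.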
Finding policy violations within video is orders of magnitude harder than in photos. In many videos, however, only a few clips have information that’s salient to a specific task, such as detecting bullying, and the rest are either redundant or irrelevant.
So Facebook adopted a new approach in which hashtagged videos functioned as weakly supervised data: training examples whose labels had been applied by people, but without the precision of full supervision.
This led to a 5.1 percent improvement over the previous state of the art’s 77.7 percent accuracy.
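The preprocessing step behind this kind of weak supervision can be sketched as follows: user-applied hashtags are mapped to multi-hot target vectors that a video classifier can train on, accepting that the labels are noisy and incomplete. This is an illustrative sketch, not Facebook's pipeline; the function name and tiny vocabulary are invented for the example.

```python
# Illustrative sketch: treating hashtags as weak multi-label supervision.
# Each video's hashtag list becomes a multi-hot target vector over a fixed
# label vocabulary; tags outside the vocabulary are simply dropped.

def hashtags_to_targets(videos, vocab):
    """videos: list of (video_id, hashtag list); vocab: ordered label list."""
    index = {tag: i for i, tag in enumerate(vocab)}
    targets = {}
    for vid, tags in videos:
        vec = [0.0] * len(vocab)
        for tag in tags:
            if tag in index:           # unknown hashtags are ignored
                vec[index[tag]] = 1.0  # noisy positive label
        targets[vid] = vec
    return targets

vocab = ["#cooking", "#soccer", "#music"]
videos = [("v1", ["#soccer", "#goal"]), ("v2", ["#music", "#cooking"])]
targets = hashtags_to_targets(videos, vocab)
print(targets["v1"])  # [0.0, 1.0, 0.0]: only #soccer is in the vocabulary
```

The noise in such labels (missing, irrelevant or redundant tags) is exactly why this counts as weak rather than full supervision.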
The majority of systems today rely on supervised training, and the lack of sufficient training data makes these models less reliable. With these new approaches, Facebook is pushing for self-supervision, with support from AI pioneers like Yann LeCun, who is also Facebook's Chief AI Scientist.
PyTorch is one of the hottest products to come out of Facebook. Facebook uses a PyTorch 1.0 end-to-end workflow for building and deploying translation and natural language processing (NLP) services at scale. These systems provide nearly 6 billion translations a day for applications such as real-time translation in Messenger and, as the foundation of PyText, power complex models that rely on multitask learning in NLP.
Companies like Airbnb and Microsoft also leverage PyTorch to build conversational AI applications and other cognitive services. The ongoing evolution of PyTorch serves as an example of the power of open, community-led development in AI. Updates announced at F8 include:
- Improved performance for common models such as CNNs
- Added support for multi-device modules, including the ability to split models across GPUs while still using Distributed Data Parallel (DDP)
- PyTorch-BigGraph: PBG is a distributed system for creating embeddings of very large graphs with billions of entities and trillions of edges.
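The core trick that lets PyTorch-BigGraph scale to billions of entities is partitioning: entities are split into buckets, and each edge falls into a (source bucket, destination bucket) pair, so only two buckets' worth of embeddings need to be in memory at a time. A minimal sketch of that partitioning idea (not PBG's real API; entity ids and bucket counts are invented for the example):

```python
# Illustrative sketch of PyTorch-BigGraph's partitioning idea (not its API):
# group edges by the bucket pair of their endpoints, so training on one
# group only needs the embeddings of two entity buckets in memory.

def partition_edges(edges, num_buckets):
    """Group (src, dst) integer-id edges by (src bucket, dst bucket)."""
    groups = {}
    for src, dst in edges:
        key = (src % num_buckets, dst % num_buckets)
        groups.setdefault(key, []).append((src, dst))
    return groups

edges = [(0, 5), (1, 6), (2, 5)]
groups = partition_edges(edges, num_buckets=2)
print(groups[(0, 1)])  # [(0, 5), (2, 5)]: both edges land in bucket pair (0, 1)
```

The real system adds distributed training across machines on top of this, with each worker handling disjoint bucket pairs.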
Open Source Tools
- BoTorch: BoTorch is a research framework built on top of PyTorch to provide Bayesian optimization, a sample-efficient technique for sequential optimization of costly-to-evaluate black-box functions.
- Ax: Ax is an ML platform for managing adaptive experiments. It enables researchers and engineers to systematically explore large configuration spaces in order to optimize machine learning models, infrastructure, and products.
- BigGAN-PyTorch: This is a full PyTorch reimplementation that uses gradient accumulation to provide the benefits of big batches on as few as four GPUs.
- Curve-GCN: A real-time, interactive image annotation approach that uses an end-to-end-trained graph convolutional network (GCN)
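The adaptive experimentation idea behind Ax can be illustrated with a much simpler strategy than the Bayesian optimization BoTorch provides: an epsilon-greedy bandit that steers more trials toward the configuration performing best so far, rather than splitting them evenly. This is a toy sketch of the concept only, and is not the Ax or BoTorch API; the function and the learning-rate objective are invented for the example.

```python
import random

# Toy epsilon-greedy bandit illustrating adaptive experimentation
# (the idea behind Ax, NOT its API): allocate most trials to the
# best-performing configuration while occasionally exploring others.

def adaptive_experiment(configs, evaluate, trials, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    totals = {c: 0.0 for c in configs}
    counts = {c: 0 for c in configs}

    def avg(c):
        return totals[c] / counts[c]

    for c in configs:  # warm-up: try every configuration once
        totals[c] += evaluate(c)
        counts[c] += 1
    for _ in range(trials - len(configs)):
        if rng.random() < epsilon:
            choice = rng.choice(configs)   # explore a random configuration
        else:
            choice = max(configs, key=avg)  # exploit the best average so far
        totals[choice] += evaluate(choice)
        counts[choice] += 1
    return max(configs, key=avg)

# Hypothetical objective: reward peaks at a learning rate of 0.1.
best = adaptive_experiment([0.01, 0.1, 1.0],
                           lambda lr: 1.0 - abs(lr - 0.1),
                           trials=100)
print(best)  # 0.1
```

Ax generalizes this far beyond bandits, and BoTorch's Bayesian optimization is much more sample-efficient when each evaluation is expensive, but the allocation principle is the same.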
The field of NLP is advancing rapidly thanks to the constant efforts of tech giants like Google, Microsoft, Facebook and Amazon. One thing common to these companies is their willingness to open source their innovations. Their belief in accelerating innovation through transparency has started to bear fruit in the form of diverse real-world applications, from smart speakers to chatbots.
The developer teams at Facebook have also made significant changes to Instagram and WhatsApp. Check more details here.
I have a master's degree in Robotics and I write about machine learning advancements. email:firstname.lastname@example.org