Microsoft Unveils Falcon: AI Framework For Secure Computation of AI Models On Distributed Systems

Researchers from Microsoft, Princeton University, Technion and the Algorand Foundation recently introduced a new framework known as Falcon. Falcon is an end-to-end 3-party protocol for fast and secure computation of deep learning algorithms on large networks.

Today, a vast amount of private data and sensitive information is continuously being generated. According to the researchers, combining this data with deep learning algorithms could transform both society and technology.

Behind Falcon

Falcon is a deep learning framework that supports both training and inference with malicious security guarantees. It combines ideas from SecureNN and ABY3 with new protocol constructions for privacy-preserving deep learning.

The Falcon codebase is written in C++ in about 12.3k lines of code and is built on the communication backend of SecureNN. Falcon provides a cryptographically secure framework in which client data is split into unrecognizable shares held by several non-colluding parties.
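As a rough illustration of how data can be split into unrecognizable parts, here is a minimal additive secret-sharing sketch. Falcon itself uses a more sophisticated replicated sharing scheme; the 32-bit modulus and the 3-party split here are illustrative assumptions:

```python
import secrets

MODULUS = 2**32  # illustrative ring size for fixed-width arithmetic

def share(x: int, n_parties: int = 3) -> list[int]:
    """Split a secret into n additive shares that sum to x mod MODULUS.
    Any subset of fewer than n shares is uniformly random and reveals
    nothing about x."""
    shares = [secrets.randbelow(MODULUS) for _ in range(n_parties - 1)]
    shares.append((x - sum(shares)) % MODULUS)
    return shares

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % MODULUS

secret = 123456
parts = share(secret)
assert reconstruct(parts) == secret

# Additions can be computed locally on shares without revealing inputs:
a_shares, b_shares = share(100), share(23)
sum_shares = [(a + b) % MODULUS for a, b in zip(a_shares, b_shares)]
assert reconstruct(sum_shares) == 123
```

Each party holds one share, so no single entity ever sees the client's raw value; only a collusion of all parties could reconstruct it.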

There are three main advantages of this framework, which are:

  • This framework is highly expressive. Falcon is the first secure framework to support high-capacity networks with over a hundred million parameters, such as VGG16. It is also the first framework to support batch normalization.
  • Falcon guarantees security with abort against malicious adversaries, assuming an honest majority. It ensures that the protocol always completes with correct output for honest participants or aborts when it detects the presence of a malicious adversary.
  • This framework presents new theoretical insights for protocol design that make it highly efficient and allow it to outperform existing secure deep learning solutions.
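The honest-majority 3-party setting above typically relies on 2-out-of-3 replicated secret sharing, as in ABY3-style protocols. A simplified, illustrative sketch follows; the real protocols additionally use correlated randomness and consistency checks to achieve malicious security:

```python
import secrets

MOD = 2**32  # illustrative 32-bit ring

def rss_share(x: int):
    """2-out-of-3 replicated secret sharing (simplified sketch).
    x = x1 + x2 + x3 mod MOD, and party i holds a pair of shares,
    so any two parties together hold all three."""
    x1 = secrets.randbelow(MOD)
    x2 = secrets.randbelow(MOD)
    x3 = (x - x1 - x2) % MOD
    # party 0 -> (x1, x2), party 1 -> (x2, x3), party 2 -> (x3, x1)
    return [(x1, x2), (x2, x3), (x3, x1)]

def rss_open(pairs) -> int:
    """Reconstruct from any two consecutive parties' share pairs."""
    (a, b), (_, c) = pairs[0], pairs[1]
    return (a + b + c) % MOD

pairs = rss_share(42)
assert rss_open(pairs[:2]) == 42
# the replicated component lets parties cross-check each other:
assert pairs[0][1] == pairs[1][0]
```

The replication is what enables detection of a cheating party: honest parties can compare the overlapping share they both hold and abort on a mismatch.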

Evaluating Falcon

To evaluate the framework, the researchers used six diverse networks, ranging from a simple 3-layer multi-layer perceptron (MLP) with about 118,000 parameters to large 16-layer networks with about 138 million parameters.
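For intuition about the quoted parameter counts, the ~118,000 figure is consistent with a 784-128-128-10 fully connected network of the kind used in earlier secure-computation benchmarks; the exact layer widths below are an assumption chosen to match that count:

```python
def mlp_param_count(layer_sizes):
    """Total parameters (weights + biases) in a fully connected network."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# 784 inputs (28x28 MNIST image), two hidden layers of 128, 10 outputs
count = mlp_param_count([784, 128, 128, 10])
print(count)  # 118282, i.e. "about 118,000 parameters"
```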

The networks were trained on popular datasets such as MNIST, CIFAR-10 and Tiny ImageNet, as appropriate for the network size. According to the researchers, this framework is the first secure machine learning framework to support the training of high-capacity networks such as AlexNet and VGG16 on the Tiny ImageNet dataset.

The researchers then performed an extensive evaluation of the framework in both LAN and WAN settings, and in both semi-honest and malicious adversarial settings. The evaluation showed performance improvements over SecureNN, a 3-party secure computation framework for neural networks. They also claim that Falcon is optimized for communication, which is often considered the main bottleneck in multi-party computation protocols.

Contributions of This Project

According to the researchers, Falcon makes secure deep learning techniques practical through the following contributions: 

  • Malicious Security: This framework provides strong security guarantees in an honest-majority adversarial setting. It proposes new protocols that are secure against corruption and ensure that the computation either completes correctly or aborts upon detecting malicious activity. 
  • Improved Protocols: Falcon combines techniques from SecureNN and ABY3 that result in improved protocol efficiency.
  • Expressiveness: Falcon is the first framework to demonstrate support for Batch-Normalization layers in private machine learning. It supports both private training and private inference, which makes this framework expressive. 
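Batch normalization is notable in this setting because it requires a division and a square root, both of which are expensive to compute over secret shares. For reference, here is the standard plaintext batch-norm computation (this is the textbook formula, not Falcon's secure protocol):

```python
import math

def batch_norm(xs, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize a 1-D batch to zero mean and unit variance, then
    apply the learned scale (gamma) and shift (beta). In secure
    computation, the division and inverse square root are the
    costly steps."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    return [gamma * (x - mean) / math.sqrt(var + eps) + beta for x in xs]

out = batch_norm([1.0, 2.0, 3.0, 4.0])
# the normalized batch sums to (approximately) zero
assert abs(sum(out)) < 1e-9
```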

Wrapping Up

According to the researchers, Falcon provides malicious security along with performance improvements of several orders of magnitude over prior work. The framework proposes more efficient protocols for common machine learning functionalities while providing stronger security guarantees. It is claimed to be the first secure deep learning framework to examine performance on large-scale networks such as AlexNet and VGG16 and on large datasets such as Tiny ImageNet.

Read the paper here.

Ambika Choudhury
A Technical Journalist who loves writing about Machine Learning and Artificial Intelligence. A lover of music, writing and learning something out of the box.
