Facebook Doesn’t Need The #10YearChallenge To Train ML Models; It’s Already Using AI Everywhere Else
Facebook is one of the most prominent players in the AI space, leading the way with the development of multiple AI frameworks and their application across its products. However, an outcry has arisen over its suspected involvement in a recent ‘phenomenon’ called the #10YearChallenge.

For those unaware, the 10-year challenge is an activity in which individuals post a picture of themselves from 10 years ago alongside a current one. This data was rumoured to be used by Facebook to train its machine learning algorithms. Given Facebook’s position in the machine learning field and its flippant attitude towards user privacy, the theory is not a big leap. However, it is highly unlikely to be true.



Facebook’s Torrid Affair With ‘Algorithms’

According to some reports, Facebook has been experimenting with a widespread implementation of algorithms in its products since as early as 2006. With such a large user base, the News Feed for each user could not be curated by humans alone; that would not only be impractical, but with the network now exceeding 1 billion users, it would be impossible.

To scale to this level at all, Facebook began looking into AI-powered algorithms to drive the delivery of content to users’ Feeds, a push that led to the creation of the Facebook AI Research (FAIR) team in 2013.

FAIR has since grown into an international research organisation, with labs in New York, Paris, Montreal, Seattle and other cities. It is known for its open-source attitude towards research and its vast output of resources and published papers. To date, it has created prominent tools and libraries such as PyTorch, fastText, FAISS, and Detectron, releasing them as open source to advance the state of the AI field.
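To give a flavour of the tooling FAIR has open-sourced, here is a minimal sketch of nearest-neighbour search with the FAISS library; the random vectors and parameters below are purely illustrative placeholders, not anything drawn from Facebook’s systems.

```python
import numpy as np
import faiss  # Facebook AI Similarity Search

d = 128          # dimensionality of the embedding vectors
nb = 10_000      # number of vectors stored in the index
nq = 5           # number of query vectors

# Random placeholder embeddings; in practice these would come from a model
rng = np.random.default_rng(0)
xb = rng.random((nb, d), dtype=np.float32)
xq = rng.random((nq, d), dtype=np.float32)

index = faiss.IndexFlatL2(d)   # exact search with L2 distance
index.add(xb)                  # add database vectors to the index

k = 4                          # retrieve the 4 nearest neighbours per query
distances, ids = index.search(xq, k)
print(ids)                     # indices of the nearest database vectors
```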

FAIR has conducted research into reasoning, prediction, planning, and unsupervised learning, leading to advances in areas such as recurrent neural networks, self-supervised learning, and predictive world models. It has also published multiple papers on computer vision, with its instance segmentation model, Mask R-CNN, used for tasks like image tagging, winning awards for its accuracy and speed.

Among other things, FAIR is also researching how to scale AI by exploiting unlabeled data, which makes up the majority of what Facebook’s users produce. The idea is that models learn abstract representations of the world by processing unlabeled images, video or audio. This, according to FAIR, would allow systems to “predict the consequences of their actions” and “act in the real world”.
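As a rough, hypothetical illustration of learning representations from unlabeled data (a generic autoencoder, not FAIR’s actual method), a minimal PyTorch sketch might look like this:

```python
import torch
import torch.nn as nn

# Minimal autoencoder: learns a compact representation of unlabeled inputs
# purely by trying to reconstruct them (no labels involved).
class AutoEncoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, input_dim))

    def forward(self, x):
        z = self.encoder(x)          # learned representation
        return self.decoder(z)       # reconstruction of the input

model = AutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in for a stream of unlabeled data (e.g. flattened image patches)
unlabeled_batch = torch.rand(64, 784)

for step in range(100):
    reconstruction = model(unlabeled_batch)
    loss = loss_fn(reconstruction, unlabeled_batch)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```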

While all of this might make it sound like Facebook is building AI meant to act in the real world much like a human being, the existence of the Applied Machine Learning team all but confirms that these ideas are not confined to research.

Applied Machine Learning…In Stealth

Facebook’s Applied Machine Learning (AML) team was established in 2016 to put FAIR’s research into practice. As its name suggests, the team is tasked with turning research topics into features in Facebook’s products, and with feeding product problems back into research.

The idea behind AML is to embed AI throughout Facebook’s tech stack, albeit at a cost. The team focuses on integrating AI into existing features and services in a way that changes little for the end user, allowing users to take the service “for granted” while providing more data for the company to use.

The Mask R-CNN network was used to auto-generate tags for pictures, aimed at improving the experience for visually impaired users. fastText, a text classification framework developed by FAIR, was deployed by AML to understand text in over 200 languages. fastText can also scale to billions of words and handle words it never saw during training, making it a formidable tool that AML can use to detect hate speech and similar content.
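A minimal sketch of supervised text classification with the open-source fastText library is shown below; the file name, labels and hyperparameters are illustrative placeholders rather than Facebook’s internal setup.

```python
import fasttext

# Training data in fastText's expected format: one example per line, e.g.
#   __label__abusive you are an idiot
#   __label__clean   have a nice day
# 'train.txt' is a placeholder path, not a real dataset.
model = fasttext.train_supervised(input="train.txt", epoch=10, wordNgrams=2)

# Predict the most likely label for a new piece of text
labels, probabilities = model.predict("some example sentence")
print(labels, probabilities)

# Subword information lets the model produce vectors even for words
# that were absent from the training data
print(model.get_word_vector("unseenword"))
```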

Currently, FAIR is working on detecting people and estimating their pose in video through its DensePose project, which has already been shown to work with a high degree of accuracy. This too will likely be implemented by AML in production, further reinforcing Facebook’s already strong content filters.

AML has also applied computer vision models to satellite imagery to create population density maps. These maps have helped Facebook determine where its connectivity services are most needed under its Internet.org and Free Basics initiatives.

Disturbing Implications of Data-Hungry Algorithms

The first inkling of Facebook’s approach to user privacy and AI came in June 2015, when it released a product known as Moments. This seemingly innocuous feature allowed users to create photo albums that could be shared with specific people or groups. Given the multitude of pictures uploaded to Facebook by groups of friends, it seemed like a good idea, until Facebook revealed the technology behind it.

Powered by AI, the service utilised a facial recognition model developed by AML that could recognise human faces with 98% accuracy and pick one person out of 800 million others in less than five seconds. Realising how serious an intrusion into their privacy this was, users caused an uproar, and the product was not allowed to launch in Europe due to data privacy concerns.

The News Feed seen by every Facebook user is different, shaped by the pages and posts they have liked. Sorting and filtering this Feed is now more important than ever, requiring Facebook to keep up with advancing technology. The models that dictate which posts appear on News Feeds are updated every 15 minutes to two hours, drawing on the many interactions users make while on the platform.

These interactions are treated as data points, feeding the algorithm a constant stream of data from users. While these capabilities make the product easier to use and enable new features, they also keep users on the platform. The result is an endless cycle of data collection that does little for the user and instead allows Facebook to gather data for targeted advertising.
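As a simplified, hypothetical illustration of how engagement signals can be turned into a feed ranking (this is not Facebook’s actual model; the features and weights are invented for the example), the idea can be sketched as follows:

```python
from dataclasses import dataclass

# Hypothetical engagement signals for a post; real feed-ranking models use
# far more features and are retrained continuously.
@dataclass
class Post:
    post_id: str
    likes: int
    comments: int
    shares: int
    hours_old: float

# Hand-picked weights purely for illustration
WEIGHTS = {"likes": 1.0, "comments": 4.0, "shares": 8.0}

def score(post: Post) -> float:
    engagement = (WEIGHTS["likes"] * post.likes
                  + WEIGHTS["comments"] * post.comments
                  + WEIGHTS["shares"] * post.shares)
    # Decay the score as the post ages so newer content surfaces first
    return engagement / (1.0 + post.hours_old)

def rank_feed(posts: list[Post]) -> list[Post]:
    return sorted(posts, key=score, reverse=True)

feed = rank_feed([
    Post("a", likes=120, comments=10, shares=2, hours_old=5),
    Post("b", likes=30, comments=25, shares=9, hours_old=1),
])
print([p.post_id for p in feed])
```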

Conclusion

While Facebook might not have engineered the 10-year challenge to generate data for its algorithms, it is already using AI and ML in a big way to deliver its products. User data is used to train models that are slowly being integrated into every corner of the social network’s features, continuing to feed the data hunger those models create.

While Facebook itself is looking at scenarios where “humans may want to offload the unpleasantness or mundaneness” of certain activities to AI, the ethical consequences of its actions on the wider world remain to be seen.

