Facebook is one of the most prominent players in the AI space, leading the way with the development of multiple AI frameworks and applications across its products. However, an outcry has risen over its alleged involvement in a recent 'phenomenon' called the #10YearChallenge.

For those unaware, the 10-year challenge is an activity wherein individuals post pictures of themselves from 10 years ago alongside current ones. This data was rumoured to be used by Facebook to train its machine learning algorithms. Considering Facebook's position in the machine learning field and its flippant attitude towards user privacy, the assumption is understandable. It is, however, highly unlikely.

Facebook's Torrid Affair With 'Algorithms'

Facebook has been experimenting with a widespread implementation of algorithms in its products since as early as 2006, according to certain reports. With its large number of users, the News Feed for each of them could not be generated by humans alone. This would not only be impractical but, with the network's current strength of over 1 billion users, impossible.

To scale to this level in the first place, Facebook began looking into AI-powered algorithms to drive the delivery of content to users' Feeds. This led to the creation of the Facebook AI Research (FAIR) team in 2013.

FAIR has since grown into a research outfit with international reach, with labs in New York, Paris, Montreal, Seattle and elsewhere. It is known for its open-source attitude towards research and a vast body of resources and published papers.
To date, they have created prominent tools and libraries such as PyTorch, fastText, FAISS, and Detectron, open-sourcing them to advance the state of the AI field.

FAIR has conducted research into reasoning, prediction, planning and unsupervised learning, leading to advances in areas such as recurrent neural networks, self-supervised learning and predictive world models. They have also published multiple papers on computer vision, with their image segmentation model, Mask R-CNN, winning awards for its accuracy and speed.

Among other things, FAIR is also researching how to scale AI by exploiting unlabeled data, which is what the majority of Facebook's users produce. This scaling would occur as the AI learns abstract representations of the world around it by processing unlabeled data such as images, video or audio. This, according to FAIR, would allow machines to "predict the consequences of their actions" and "act in the real world".

If this makes it seem like Facebook is building an AI meant to operate in the real world like a human being, the existence of the Applied Machine Learning team all but confirms it.

Applied Machine Learning...In Stealth

Facebook's Applied Machine Learning team was established in 2016 with the aim of bringing FAIR's research into application. As its name suggests, the AML team was tasked with developing applications of the researched topics in Facebook's products, and vice versa.

The idea behind AML is to implement AI in a big way across Facebook's tech stack, albeit at a cost. The team is focused on integrating AI into existing features and services in a way that changes little for the end user.
This allows users to take the service "for granted", providing more data for the company to use.

The Mask R-CNN network was used to provide auto-generated tags for pictures, a feature aimed at improving the experience for visually impaired users. fastText, a framework for text classification developed by FAIR, was implemented by AML to understand text in over 200 languages. fastText can also scale to billions of words and generalise to words it was never trained on, making it a formidable tool that AML can use to detect hate speech and similar content.

Currently, FAIR is working on understanding humans in video through its DensePose project, which maps the pixels of a person in an image to a 3D model of the human body and has already been shown to work with a high degree of accuracy. This will also likely be implemented by AML in the final product to ensure that Facebook's already strong filters are further reinforced.

AML has also applied computer vision models to satellite images to create population density maps. These have allowed Facebook to determine where its broadband services, under the Internet.org and Free Basics initiatives, are most needed.

Disturbing Implications of Data-Hungry Algorithms

The first inkling of Facebook's approach towards user privacy and AI came in June 2015, when the company released a new product known as Moments. This seemingly innocuous feature allowed users to create photo albums that could be shared with specific users or groups. Given the multitude of pictures uploaded to Facebook by groups of friends, this seemed like a good idea, until Facebook revealed the feature's capabilities.

Powered by AI, the service used a facial recognition model developed by AML. The model could recognise human faces with 98% accuracy and identify one person out of 800 million in less than five seconds. Realising the seriousness of this intrusion into their privacy, users raised an uproar.
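Matching one face among hundreds of millions in seconds is typically done by mapping each face to a fixed-length embedding vector and then searching for the nearest stored vector, the kind of similarity-search workload FAIR's own FAISS library is built for. Below is a minimal sketch of the lookup step only, using toy 4-dimensional vectors in place of the embeddings a real face model would produce; all names and numbers here are hypothetical.

```python
import math

def cosine(a, b):
    # Cosine similarity: 1.0 means identical direction, 0.0 means unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def identify(query, gallery):
    # Brute-force nearest-neighbour search over the stored embeddings;
    # at Facebook scale this step is what an index like FAISS accelerates.
    return max(((name, cosine(query, vec)) for name, vec in gallery.items()),
               key=lambda t: t[1])

# Toy 4-d embeddings standing in for the high-dimensional vectors
# a face recognition CNN would emit for each enrolled user.
gallery = {
    "alice": [0.9, 0.1, 0.0, 0.2],
    "bob":   [0.1, 0.8, 0.3, 0.0],
    "carol": [0.0, 0.2, 0.9, 0.1],
}
query = [0.85, 0.15, 0.05, 0.25]   # embedding of a newly uploaded photo
name, score = identify(query, gallery)
```

The expensive part in practice is not this comparison but producing embeddings such that two photos of the same person land close together, which is what the deep model is trained for.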
The product was also not allowed to launch in Europe due to data privacy concerns.

The News Feed seen by every Facebook user is different, shaped by their likes and subscribed pages. The sorting and filtering of this Feed is now more important than ever, requiring Facebook to keep up with advancing technology. The models that dictate which articles appear on News Feeds are updated every 15 minutes to two hours, making use of the many interactions users engage in on the platform.

These interactions are treated as data points, feeding the algorithm a constant stream of data from users. While these capabilities make the product easier to use and add features, they also keep users on the platform. This creates an endless cycle of data that does not benefit the user, instead allowing Facebook to collect it and use it for targeted advertising.

Conclusion

While Facebook might not have engineered the 10-year challenge to generate data for its algorithms, the company is already using AI and ML in a big way to deliver its products. User data is used to train yet more models, designed to be slowly integrated into every corner of the social network's features, feeding the data hunger those models create.

While Facebook itself is looking at scenarios where "humans may want to offload the unpleasantness or mundaneness" of certain activities to AI, the ethical consequences of these efforts on the larger world are yet to be seen.