Hugging Face has built serious street cred in the AI & ML space in a short span. The north star of the company is to become the GitHub of machine learning. To that end, Hugging Face is doubling down on its efforts to democratise AI and ML through open source and open science. Today, the platform offers 100,000 pre-trained models and 10,000 datasets for NLP, computer vision, speech, time-series, biology, reinforcement learning, chemistry and more.
“Companies today can not only host models and datasets on Hugging Face, but test them, collaborate on them, run them in production and assess them for a more ethical use,” said Julien Chaumond, co-founder and CTO at Hugging Face.
Hugging Face started out as an NLP-powered personalised chatbot. To improve the NLP capabilities, the startup built a library with machine learning models and natural language datasets. Additionally, the founders open-sourced parts of the library.
“We realised that Conversational AI is one of the hardest tasks in ML. Our CSO Thomas Wolf was training really cool models, taking pre-trained models and adapting them to do Conversational AI. It was hard! But the tools required to do that were not limited to Conversational AI; they could be applied to all NLP tasks, and even most ML tasks, too.
What we have seen in ML is the rise of transfer learning, where models pre-trained on large amounts of data are adapted to new tasks; that works for all modalities, not just text.
It started with computer vision, when people worked on ImageNet; transfer learning really got amplified in 2018 – 2019 with the release of BERT and GPT-2, among others. But now we see transfer learning working in every single subfield of machine learning, like audio, time series, RL, etc. The tools we have built, like our hub, work for everything in machine learning. So our focus is to double down on the hub and make sure we do everything for the community,” said Julien.
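The transfer-learning pattern Julien describes can be illustrated with a toy sketch. This is hypothetical illustrative code, not Hugging Face's actual stack: a stand-in "pre-trained" feature extractor is kept frozen, and only a small task-specific head is trained on a handful of new labelled examples.

```python
import math

# Toy sketch of transfer learning (hypothetical, for illustration only):
# the "pre-trained" extractor is frozen; only the head's weights are updated.

def pretrained_features(x):
    """Stand-in for a frozen pre-trained model: maps raw input to features."""
    return [x, x * x]  # never updated during fine-tuning

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_head(data, lr=0.5, epochs=200):
    """Fit only the task-specific head; the feature extractor stays untouched."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            f = pretrained_features(x)               # frozen forward pass
            p = sigmoid(w[0] * f[0] + w[1] * f[1] + b)
            g = p - y                                 # gradient of the log-loss
            w = [w[0] - lr * g * f[0], w[1] - lr * g * f[1]]
            b -= lr * g
    return w, b

# New task: classify whether x > 0, using only four labelled examples.
data = [(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)]
w, b = train_head(data)
preds = [round(sigmoid(w[0] * f[0] + w[1] * f[1] + b))
         for f in (pretrained_features(x) for x, _ in data)]
print(preds)
```

The point of the sketch is that the expensive part (the extractor) is reused as-is, and only a tiny head needs data and compute, which is why the pattern transfers across modalities.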
ML for all
“I started working on ML back in 2005-06. But back then, there were no real-world applications, so I mostly stuck with software engineering. What I feel now is there is a solid intersection of software engineering and machine learning. ML used to be a field detached from software engineering; it was a lot more “sciencey.” A lot of the success we have had at Hugging Face comes from the fact that we have made a good blend of machine learning and software engineering. We can make machine learning a lot more accessible using best practices in software engineering. We make it easy for everyone to get into ML,” said Julien.
In 2017, Google researchers introduced the ‘transformer’ architecture, and models built on it took NLP to the next level. However, most companies looking to harness the power of NLP didn’t have the resources to build models from the ground up. Enter Hugging Face: the startup’s open-source library, launched around the same time, allowed these companies to ride the NLP wave.
“Hugging Face believes machine learning is the technology trend of the decade and is quickly becoming the default method of building technology. We realised early on that our platform must be extensible, modular, and open rather than an off-the-shelf API for machine learning to truly empower companies and the ML community at large,” said Clement Delangue, CEO and co-founder at Hugging Face.
“We never wanted to be the product of a single company, but rather the collaboration of hundreds of different companies. As a result, we’ve always taken an open-source, collaborative, platform-based approach to machine learning,” said Clement.
“Good” machine learning
“For the longest time, machine learning was driven by engineers and scientists. It was all about trying to achieve the best possible metrics on datasets, and not a lot of people were actually thinking about how to build a good dataset. It was mostly viewed as an engineering issue where you would try to maximise the accuracy of your model on a specific task. Over the last few years, ML as a field has matured a lot. Now, models are used in production for real-world usage, which did not happen before,” said Julien.
Hugging Face has built a platform with a community-first approach (just like GitHub), giving tens of thousands of companies the ability to build machine learning models at a fraction of the cost.
“Everyone in the community is more aware that using a bad model (a model trained on a really biased or very partial dataset) that doesn’t reflect what is going to happen in the real world is really bad machine learning. Good machine learning is about setting up data collection so that the dataset reflects the real-world usage of the model and is unbiased. You should be able to tweak your model in a way that limits the biases or removes them entirely, and make it more transparent.
The community as a whole is improving on these subjects, and we are trying to help in any way we can,” he added.
Forget AGI! There are bigger things to focus on!
“Though many practitioners emphasise the long-term impact of machine learning, and eventually AGI, which mostly points towards a singularity or a ‘terminator’ effect, we chose to focus on the limitations and challenges of ML that need to be tackled now, like biases, privacy and energy consumption. We want to focus on short-term issues like these rather than worry about AGI, which we may or may not achieve in the next 50 years,” said Clement.
Clement believes that through openness, transparency and collaboration, the ML community can drive responsible and inclusive progress, understanding and accountability. In August 2021, Hugging Face onboarded AI ethicist Dr Margaret Mitchell, who co-created Model Cards. She now guides Hugging Face in building tools that bring fairness to algorithms. “Many NLP models today are incredibly biased. So, I believe it is critical in our field today to simply acknowledge that and build transparency tools and bias mitigation tools, so that we can take that into account and make sure we use them the right way,” he said.
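On the Hugging Face Hub, a model card like the ones Dr Mitchell helped pioneer lives alongside the model weights as a README.md with a YAML metadata header. A minimal illustrative sketch follows; the model name, tags and dataset here are made-up placeholders, not a real repository:

```markdown
---
language: en
license: apache-2.0
tags:
  - text-classification
datasets:
  - imdb
---

# Model Card: example-sentiment-model

## Intended uses & limitations
What the model is for, and where it should not be used.

## Training data
How the dataset was collected, and its known gaps.

## Bias, risks and limitations
Known biases in the data and model, and the mitigations applied.
```

The structured header makes the model searchable on the hub, while the prose sections are where the transparency work Clement describes actually happens.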
The company aims to build a better AI founded on open source, open science, ethics and collaboration.
Hugging Face also has BigScience, a collaborative workshop around large language models gathering more than 1,000 researchers of all backgrounds and disciplines. The community is now working towards training the world’s largest open-source multilingual language model.
“There has always been this trend and this ability to release research for the entire field to have access to and be able to, for example, mitigate biases, create counter powers, and mitigate the negative effects that it can have. To me, it’s critical that researchers continue to be able to share their models and data sets publicly so that the entire field can benefit from them. Perhaps, just to complement what we’ve done with Hugging Face, an initiative called BigScience has been launched, bringing together nearly a thousand researchers from around the world,” said Clement.
Early this month, Hugging Face raised USD 100 million in Series C funding led by Lux Capital. Sequoia, Coatue and existing investors, including Addition, Betaworks, AIX Ventures, Cygni Capital, Kevin Durant and Olivier Pomel (co-founder & CEO at Datadog), participated in the round.
“Given the value of machine learning and its increasing popularity, usage is deferred revenue. I don’t see a world in which machine learning is the default way to build technology and Hugging Face is the leading platform for it, and we don’t generate several billion dollars in revenue,” said Clement Delangue.
Hugging Face aims to create a positive impact on the AI field by focusing on responsible AI through openly sharing models, datasets, training procedures, and evaluation metrics. The team believes open source and open science bring trust, robustness, reproducibility, and continuous innovation.