ML and its services will only extend their influence and push the boundaries of the technology revolution into new realms. However, deploying ML comes with great responsibility. Though efforts are being made to shed its black-box reputation, establishing trust among both in-house teams and stakeholders is crucial for fairer deployment. Companies have started to take machine learning model management more seriously. Recently, Comet.ml, a machine learning company based out of Seattle and founded in 2017, announced an additional $4.5 million investment to bring state-of-the-art meta-learning capabilities to the market.
Bringing Meta-Learning To Markets
The tools developed by Comet.ml enable data scientists to track, compare, monitor, and optimise model development. The additional $4.5 million investment from existing investors – Trilogy Equity Partners and Two Sigma Ventures – is aimed at bringing machine learning model management techniques to more customers.
Since its product launch in 2018, Comet.ml has partnered with top companies like Google, General Electric, Boeing and Uber. These customers use Comet.ml's enterprise-level toolkits to train models across industries spanning autonomous vehicles, financial services, technology, bioinformatics, satellite imagery, fundamental physics research, and more.
Speaking about the announcement, Yuval Neeman of Trilogy Equity Partners, one of the investors, noted that professionals from the best companies in the world choose Comet, and that the company is well-positioned to become the de facto machine learning development platform.
This platform, says Neeman, allows customers to build ML models that bring significant business value.
According to a report presented by researchers at Google, there are several ML-specific risk factors to account for in system design, such as:
- Boundary erosion
- Hidden feedback loops
- Undeclared consumers
- Data dependencies
- Configuration issues
Debugging all these issues requires round-the-clock monitoring of the model's pipeline. For a company that implements ML solutions, managing model mishaps in-house is challenging.
Taking Comet as an example again, its platform provides a central place for teams to track their ML experiments and models, so they can compare and share experiments, debug underperforming models, and act on them decisively.
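The idea of a central place for experiments can be illustrated with a minimal sketch. The `ExperimentTracker` class below is hypothetical and written for illustration only; Comet's real SDK offers a far richer API. It simply writes each run's parameters and metrics to a shared directory so they can later be compared side by side.

```python
import json
import time
from pathlib import Path

class ExperimentTracker:
    """Hypothetical minimal tracker: one directory per run,
    parameters and metrics persisted as JSON for later comparison."""

    def __init__(self, name, root="experiments"):
        # Each run gets its own timestamped directory under a shared root.
        self.dir = Path(root) / f"{name}-{int(time.time())}"
        self.dir.mkdir(parents=True, exist_ok=True)
        self.metrics = []

    def log_params(self, **params):
        # Record hyperparameters once, up front.
        (self.dir / "params.json").write_text(json.dumps(params))

    def log_metric(self, step, **values):
        # Append a metric snapshot and persist the full history.
        self.metrics.append({"step": step, **values})
        (self.dir / "metrics.json").write_text(json.dumps(self.metrics))

tracker = ExperimentTracker("baseline")
tracker.log_params(lr=0.01, batch_size=32)
for step in range(3):
    tracker.log_metric(step, loss=1.0 / (step + 1))
```

Because every run lands in the same root with the same file layout, comparing or sharing experiments reduces to reading JSON files rather than chasing numbers scattered across notebooks.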
Predictive early stopping is a meta-learning capability not seen in other experimentation platforms, and it can be achieved only by building on top of millions of public models. This is where Comet's enterprise products come in handy. The freedom of experimentation that such meta-learning-based platforms offer is something any organisation would aspire to; almost every ML-based company would love to have such tools in its arsenal.
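To make the concept concrete, here is a deliberately simple sketch of the underlying idea: project where a run's learning curve is heading and abandon it if even an optimistic projection cannot beat the best model seen so far. This naive linear extrapolation is purely illustrative; Comet's actual predictive early stopping learns from millions of public models, not from a three-point trend line.

```python
def should_stop_early(losses, total_steps, best_final_loss, window=3):
    """Extrapolate the recent loss trend linearly and stop if the
    projected final loss cannot beat the best model seen so far.
    Illustrative sketch only, not Comet's actual method."""
    if len(losses) < window:
        return False  # not enough history to judge the trend
    recent = losses[-window:]
    slope = (recent[-1] - recent[0]) / (window - 1)  # avg change per step
    remaining = total_steps - len(losses)
    projected = recent[-1] + slope * remaining
    return projected > best_final_loss

# A curve stuck flat at 0.9 can never reach a best-so-far loss of 0.2,
# so the run can be stopped early; a steadily falling curve keeps going.
print(should_stop_early([0.95, 0.9, 0.9, 0.9], total_steps=100, best_final_loss=0.2))
print(should_stop_early([1.0, 0.8, 0.6, 0.4], total_steps=100, best_final_loss=0.2))
```

Stopping hopeless runs this way is where the claimed cost and carbon-footprint savings come from: compute is reclaimed from experiments that were never going to win.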
On the subject of saving resources, Comet.ml stated in their press release that their platform improved model training time by 30% irrespective of the underlying infrastructure, and automatically stopped underperforming models, reducing cost and carbon footprint by 30%.
The enterprise offering also includes Comet’s flagship visualisation engine, which allows users to visualise, explain, and debug model performance and predictions, and a state-of-the-art parameter optimisation engine.
Is This The Dawn Of ML Model Management As A Service?
When building any machine learning pipeline, data preparation requires operations like scraping, sampling, joining, and plenty of others. These operations usually accumulate haphazardly, resulting in what software engineers like to call a pipeline jungle.
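One common remedy is to make the chain of preparation steps explicit rather than letting them accumulate ad hoc. The sketch below uses hypothetical helper names (`drop_nulls`, `sample_every`, `make_pipeline`) to show the pattern: each step is a named, testable function, and the pipeline is just their declared composition.

```python
def drop_nulls(rows):
    # Keep only rows where every field has a value.
    return [r for r in rows if all(v is not None for v in r.values())]

def sample_every(n):
    # Return a step that keeps every n-th row (simple downsampling).
    return lambda rows: rows[::n]

def make_pipeline(*steps):
    # Compose steps into a single callable, applied in declared order.
    def run(rows):
        for step in steps:
            rows = step(rows)
        return rows
    return run

prepare = make_pipeline(drop_nulls, sample_every(2))
rows = [{"x": 1}, {"x": None}, {"x": 2}, {"x": 3}]
print(prepare(rows))  # drops the null row, then keeps every 2nd row
```

Because the order and membership of steps live in one declaration, a reader can see the whole data path at a glance instead of untangling it from scattered scripts.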
Now add the challenge of forgotten experimental code lingering in the archives, and things only get worse. Such stale code can malfunction, and an algorithm running malfunctioning code can crash stock markets or self-driving cars. The risks are just too high.
So far, we have seen ML used for data-driven solutions. Now the market is ripe for solutions that help those who have already deployed machine learning. It is only a matter of time before we see more companies setting up their own meta-learning shops or partnering with third-party vendors.