The session “Musicians & Data scientists Create Hits with Ensembles” was presented at the Deep Learning DevCon (DLDC) 2020 by Loveesh Bhatt, Senior Manager, Analytics at Ugam, and Arpita Sur, AVP/Head of Cognitive Computing System at Ugam.
The talk focused on how data scientists can leverage stacked ensembles to improve the performance of predictive models. Scheduled for 29th and 30th October, the DLDC 2020 conference brought together leading experts and some of the best minds in deep learning and machine learning from around the world.
The session started with two versions of the popular Coldplay and Chainsmokers song “Something Just Like This”: first an acoustic version, then the original. The idea behind playing both was that, from a machine learning perspective, the acoustic version represented a simple model based on linear regression, while the original represented a fusion of a GBM and a random forest, which together can form a far more powerful machine learning model.
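The speakers did not walk through code, but the analogy maps naturally onto scikit-learn. Below is a minimal sketch (the synthetic data and default hyperparameters are assumptions for illustration) comparing a lone linear regression against an averaged fusion of a GBM and a random forest:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import (GradientBoostingRegressor,
                              RandomForestRegressor, VotingRegressor)
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=500, n_features=10, noise=10.0,
                       random_state=42)

# The "acoustic version": a single linear regression model.
solo = LinearRegression()

# The "original version": a fusion of a GBM and a random forest,
# averaging the predictions of the two learners.
band = VotingRegressor([
    ("gbm", GradientBoostingRegressor(random_state=42)),
    ("rf", RandomForestRegressor(random_state=42)),
])

for name, model in [("linear", solo), ("ensemble", band)]:
    score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean R^2 = {score:.3f}")
```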
Bhatt then gave a brief explanation of what ensemble modelling means and why it should be used. He described it as a combination of multiple learners that together make better predictions. The three methodologies typically used are bagging, boosting and stacking.

Ensemble models help with both over-fitting (high-variance) and under-fitting (high-bias) problems: bagging helps mitigate high variance, while boosting helps mitigate high bias.
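As a rough illustration of this split (the toy data and hyperparameters are assumptions, not from the talk), scikit-learn’s BaggingClassifier and AdaBoostClassifier show the two approaches side by side:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Bagging: bootstrap-resampled deep trees (the default base learner),
# whose averaged votes reduce variance / over-fitting.
bagging = BaggingClassifier(n_estimators=100, random_state=0)

# Boosting: shallow decision stumps (the default base learner) fit
# sequentially, each reweighting the previous one's mistakes,
# which reduces bias / under-fitting.
boosting = AdaBoostClassifier(n_estimators=100, random_state=0)

for name, model in [("bagging", bagging), ("boosting", boosting)]:
    print(name, round(cross_val_score(model, X, y, cv=5).mean(), 3))
```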
Talking about the methodologies, Bhatt further explained the types of boosting, which include AdaBoost, CatBoost, Gradient Boosting and Extreme Gradient Boosting (XGBoost). About the third methodology, stacking, Bhatt discussed how it differs from bagging and boosting on two key points, illustrated in the sketch that follows the list:
- Stacking often considers heterogeneous weak learners, whereas bagging and boosting mainly consider homogeneous weak learners.
- Multilevel stacking combines the power of base-layer models and uses another layer, called the meta-layer, to improve the overall performance of the model.
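A minimal stacking sketch along these lines, assuming scikit-learn’s StackingClassifier and an arbitrary choice of heterogeneous base learners and meta-learner:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Base layer: heterogeneous learners, unlike bagging/boosting,
# which typically reuse one model family.
base_layer = [
    ("rf", RandomForestClassifier(random_state=0)),
    ("gbm", GradientBoostingClassifier(random_state=0)),
    ("svm", SVC(probability=True, random_state=0)),
]

# Meta-layer: a simple model trained on the base models'
# out-of-fold predictions to produce the final prediction.
stack = StackingClassifier(estimators=base_layer,
                           final_estimator=LogisticRegression(),
                           cv=5)

print(round(cross_val_score(stack, X, y, cv=5).mean(), 3))
```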
After discussing the methodologies, Bhatt mentioned some applications of ensemble modelling, which include prospect modelling, classification, customer order value regression, campaign performance mapping and assessment, upsell and cross-sell modelling, churn modelling, and fraud and anomaly detection.
After the use cases, Sur talked specifically about one use case being implemented at Ugam: classification in computer vision. Sur stated that computer vision differs from other machine learning techniques and includes multiple applications, such as object detection, object recognition, object identification, object classification and more. These applications can be applied across a broad range of use cases, including digitisation, competitive intelligence and information extraction.
Sur then explained one of the case studies at Ugam, which uses ensemble models to improve image classification. Here she described how improving experience and conversion on e-commerce pages requires apt product descriptions.
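The talk did not share implementation details, but a common way to ensemble image classifiers is soft voting, i.e. averaging the models’ predicted class probabilities. A hypothetical sketch (the model files and Keras usage are assumptions):

```python
import numpy as np
import tensorflow as tf

# Hypothetical paths: two independently trained attribute classifiers
# for the same label set (e.g. product patterns).
model_a = tf.keras.models.load_model("pattern_model_a.h5")
model_b = tf.keras.models.load_model("pattern_model_b.h5")

def ensemble_predict(images):
    """Soft voting: average the two models' class probabilities."""
    probs = (model_a.predict(images) + model_b.predict(images)) / 2.0
    return np.argmax(probs, axis=1)
```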
She also discussed how computer vision models can be trained to extract patterns, colours and other attributes from images with high levels of accuracy. Sur concluded the talk by discussing the difference between machine learning and transfer learning and explained the typical architecture of an ImageNet model, Inception V3.
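A typical transfer-learning setup of the kind described, sketched with Keras (the frozen backbone, pooling head and class count are assumptions for illustration):

```python
import tensorflow as tf

# Load Inception V3 pre-trained on ImageNet, without its
# classification head, as a fixed feature extractor.
base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False  # transfer learning: freeze the backbone

# New head for the target task, e.g. 10 product-attribute classes
# (the class count is an assumption for illustration).
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```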