Top Papers Presented At DLDC 2021

Last month, the Association of Data Scientists (ADaSci), in partnership with Analytics India Magazine (AIM), concluded the two-day 2021 edition of the Deep Learning Developers Conference (DLDC). Held virtually on 23-24 September, the conference featured 30 keynote speakers and over 500 attendees. In addition, leading professionals and researchers from reputed organisations presented feature talks and research papers.

Day 2 of the conference provided a platform for presenting research papers in the machine learning domain. All the accepted papers are published in Lattice – an international peer-reviewed and refereed journal on machine learning, hosted and managed by ADaSci. Let’s go through some of the top papers presented at DLDC 2021:

1| Title: Time Expression Extraction and Normalisation in Industrial Setting

By: Piyush Arora, Senior AI Researcher at American Express AI Labs

About: Piyush Arora presented TEEN — an industry-grade solution to the problem of time expression extraction and normalisation (Timex). Extraction and normalisation of temporal units is a challenging problem due to several factors:

  • the same time units may sometimes be expressed in different ways,
  • inherent ambiguity in natural languages leading to multiple interpretations, and
  • context-sensitive nature of natural languages.

While several academic and industrial approaches have presented solutions towards Timex, building an industrial-strength solution involves additional challenges in the form of user expectations, the need for delivering high precision, and lack of training corpora. Further, Piyush elaborated on how TEEN carefully mitigates these challenges. He demonstrated how the proposed approach compares with various state-of-the-art baselines on textual data from the finance industry. “We further categorised inadequacies of these baselines in an industrial setting. Finally, we provide insights gathered through the observations we made and the lessons we learned while designing TEEN to work in an industrial setting,” said Piyush.
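The paper's system is not public code, but the underlying Timex task — spotting a time expression in text and resolving it to a concrete date — can be illustrated with a minimal rule-based sketch. All patterns and names below are our own illustration, not TEEN's:

```python
import re
from datetime import date, timedelta

# A few relative time expressions mapped to resolvers against a reference date.
# This is a toy illustration; TEEN handles far richer, context-sensitive cases.
PATTERNS = {
    r"\btoday\b": lambda ref: ref,
    r"\byesterday\b": lambda ref: ref - timedelta(days=1),
    r"\btomorrow\b": lambda ref: ref + timedelta(days=1),
    r"\b(\d+) days? ago\b": lambda ref, n: ref - timedelta(days=int(n)),
}

def extract_and_normalise(text, reference_date):
    """Return (matched span, ISO date) pairs found in `text`."""
    results = []
    for pattern, resolver in PATTERNS.items():
        for m in re.finditer(pattern, text, flags=re.IGNORECASE):
            resolved = resolver(reference_date, *m.groups())
            results.append((m.group(0), resolved.isoformat()))
    return results
```

With a reference date of 24 September 2021, "3 days ago" resolves to 2021-09-21 and "tomorrow" to 2021-09-25. An industrial system must additionally cope with ambiguity, context, and domain-specific formats — exactly the challenges the talk discussed.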

2| Title: Classification of Quasars, Galaxies, And Stars Using Multi-Modal Deep Learning

By: Bharath Kumar Bolla, Senior Data Scientist at Verizon

About: The universe is a vast expanse of cosmic space consisting of billions of galaxies. Each galaxy is made of billions of stars revolving around a gravitational centre, often a supermassive black hole. Quasars are quasi-stellar objects that emit electromagnetic radiation more luminous than that of their entire host galaxy.

In this paper, the fourth phase of the Sloan Digital Sky Survey (SDSS-IV), Data Release 16 dataset, was used to classify observations into galaxies, stars, and quasars using machine learning and deep learning architectures. The researchers efficiently utilise both images and metadata in tabular format to build a novel multi-modal architecture and achieve state-of-the-art results. For the tabular data, they compared classical machine learning algorithms (Logistic Regression, Random Forest, Decision Trees, AdaBoost, LightGBM, etc.) with artificial neural networks. In addition, deep learning architectures such as ResNet50, VGG16, EfficientNetB2, Xception, and DenseNet121 were used for the images. The work sheds new light on multi-modal deep learning and its ability to handle imbalanced class datasets. The multi-modal architecture further resulted in higher metrics (accuracy, precision, recall, F1 score) than models using only images or tabular data.
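The late-fusion idea behind such a multi-modal architecture can be sketched roughly (our own simplification, not the paper's model): each modality is reduced to a feature vector, and the vectors are concatenated before a shared classifier head.

```python
# Illustrative late-fusion sketch. In the paper, the image branch is a CNN
# backbone and the tabular branch a dense network; here both are simple stubs.

def image_branch(image_pixels):
    # Stand-in for a CNN embedding (e.g. ResNet50); here: crude pixel stats.
    n = len(image_pixels)
    mean = sum(image_pixels) / n
    var = sum((p - mean) ** 2 for p in image_pixels) / n
    return [mean, var]

def tabular_branch(row):
    # Stand-in for a dense network over metadata (magnitudes, redshift, etc.).
    return [float(v) for v in row]

def fused_features(image_pixels, row):
    # Late fusion: concatenate both embeddings before the classifier head.
    return image_branch(image_pixels) + tabular_branch(row)
```

A classifier trained on the fused vector can exploit signal from either modality, which is what lets the combined model beat the image-only and tabular-only baselines.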

3| Title: Hyperlocalisation of Leaks in Piping and Cabling Systems Using Reinforcement Learning

By: Indrajit Kar, Head of AI at Siemens Advanta

About: Leaks have undoubtedly been one of the biggest problems plaguing piping and cabling systems across industries like electricity and power, building and smart cities, oil and gas, etc. Addressing these leaks in time becomes paramount as failure leads to a complete standstill of the transportation chain. Unfortunately, most AI-based leak detection systems have failed to reach the deployment stage as these systems are prone to false positives. It is imperative to observe that these leaks don’t occur every day; in other words, they are rare events. But when they do occur, they more often than not go unnoticed. Due to the insufficient number of identified leak points, it becomes difficult to build an AI-based model.

In an attempt to aid or replace rule-based and physics-based leak detection systems, this paper proposes a novel AI-based leak detection solution using reinforcement learning, which not only reduces false positives but also extends to multi-armed bandit-based leak localisation. Using this methodology, the authors model the latent behaviour of any piping or cabling system and provide a Q-learning-based shortest-path recommendation to help the maintenance team reach the leak node in a short amount of time.
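The Q-learning shortest-path idea can be sketched on a toy pipe network (our own illustrative example; the paper's environment and reward design are more elaborate):

```python
import random

# Toy network: nodes are junctions, actions move along edges, and an episode
# ends at the leak node. Each move costs -1, so shorter paths score higher.
EDGES = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
LEAK = 3

def train_q(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    q = {s: {a: 0.0 for a in EDGES[s]} for s in EDGES}
    for _ in range(episodes):
        s = 0
        while s != LEAK:
            # epsilon-greedy action selection over neighbouring nodes
            a = rng.choice(EDGES[s]) if rng.random() < epsilon else max(q[s], key=q[s].get)
            # standard Q-learning update with a terminal state at the leak
            target = -1.0 if a == LEAK else -1.0 + gamma * max(q[a].values())
            q[s][a] += alpha * (target - q[s][a])
            s = a
    return q

def shortest_path(q, start=0):
    """Follow the greedy policy from `start` to the leak node."""
    path, s = [start], start
    while s != LEAK:
        s = max(q[s], key=q[s].get)
        path.append(s)
    return path
```

After training, the greedy policy walks straight from the start node to the leak, which is the recommendation a maintenance team would receive.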

4| Title: Global-Local Scalable Explanations Using Linear Model Tree

By: Narayanan Unny E., Head of Machine Learning Research, American Express AI Lab

About: With the ever-increasing use of complex machine learning models in critical applications, explaining the model’s decisions has become necessary. With applications spanning from credit scoring to healthcare, the impact of these models is undeniable. 

In this paper, the researchers propose a novel pipeline for building explanations. A GAN is employed to generate synthetic data, while a piecewise linear model in the form of a Linear Model Tree is used as the surrogate model. Combined, these two techniques provide a powerful data structure capable of explaining complex ML models. Additionally, the novelty of this data structure is that it provides an explanation in the form of both decision rules and feature attributions. Moreover, it also enables the researchers to build a global explanation model that is computationally efficient and scales to large amounts of data.
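A depth-one "linear model tree" surrogate conveys the flavour of this. The following is our own minimal sketch, not the paper's pipeline: the black-box model is queried at sample points (the paper synthesises these with a GAN, which is omitted here), and a linear model is fitted per leaf, so an explanation is a decision rule plus a feature attribution (the slope).

```python
# Minimal surrogate sketch: one split, one ordinary-least-squares line per leaf.

def fit_line(xs, ys):
    """Least-squares fit of y = intercept + slope * x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope          # (intercept, slope)

def fit_surrogate(xs, blackbox, threshold):
    """Query the black-box model, then fit one line per side of the split."""
    ys = [blackbox(x) for x in xs]
    left = [(x, y) for x, y in zip(xs, ys) if x <= threshold]
    right = [(x, y) for x, y in zip(xs, ys) if x > threshold]
    return {
        f"x <= {threshold}": fit_line(*zip(*left)),
        f"x > {threshold}": fit_line(*zip(*right)),
    }
```

Each rule-to-line mapping reads directly as an explanation: "when x <= 0, the model's output changes by 2 per unit of x" — decision rule and feature attribution in one structure.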

5| Title: Predicting Custom Ad Performance Metric using Contextual Features

By: Prateek Kulkarni, Data Science Team Lead at MiQ Digital; Divyaprabha M, Data Scientist at MiQ Digital

About: This paper proposes a machine learning-based approach to predicting future ad-campaign performance by focusing on contextual features such as browser, operating system, device type, and so on.

  • First, a custom metric encompassing cost, performance and campaign delivery is developed. This metric’s predicted value is used to score and recommend targeting strategies. To generate new features, feature engineering techniques (The CMO’s Guide to Programmatic Buying, n.d.) such as lag feature generation, statistical encodings, graph-based embeddings for websites, and cyclical feature encodings are prepared for downstream tasks such as CTR prediction.
  • This paper then compares linear, tree-based, and deep learning models and chooses the best performer for the proposed hypotheses.
  • Finally, they develop heuristic criteria for offline testing of the recommended strategies and calculate the theoretical performance uplift compared with older ranker-score methodologies, which serve as the baseline. The proposed solution outperforms the baseline and offers a novel way of recommending strategies.
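One of the listed steps, cyclical feature encoding, is easy to show in isolation (our own example, not the paper's code): a periodic feature such as hour-of-day is mapped onto a circle, so 23:00 and 00:00 end up as neighbours rather than 23 units apart.

```python
import math

def encode_hour(hour, period=24):
    """Map a periodic value onto the unit circle as (sin, cos) features."""
    angle = 2 * math.pi * hour / period
    return math.sin(angle), math.cos(angle)
```

With a raw integer encoding, a model sees hours 23 and 0 as maximally distant; after the (sin, cos) transform their feature vectors are adjacent, which matches the cyclical nature of browsing behaviour over a day.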

6| Title: Analysis of Sectoral Profitability of the Indian Stock Market Using an LSTM Regression Model

By: Jaydip Sen, Professor of Data Science and Artificial Intelligence at Praxis Business School

About: Predictive model design for accurately predicting future stock prices has always been an interesting and challenging research problem. The task becomes complex due to the volatile and stochastic nature of the stock prices in the real world, which is affected by numerous controllable and uncontrollable variables. 

This paper presents an optimised predictive model built on a long short-term memory (LSTM) architecture that automatically extracts past stock prices from the web over a specified time interval and forecasts their future prices for a specified horizon. The model is deployed to make buy and sell transactions based on its predictions for 70 important stocks from seven different sectors listed on India’s National Stock Exchange (NSE). First, the profitability of each sector is derived based on the total profit yielded by the stocks in that sector over the period from Jan 1, 2010, to Aug 26, 2021. Then, the sectors are compared based on their profitability values. The prediction accuracy of the model is also evaluated for each sector. The results indicate that the model is highly accurate in predicting future stock prices.
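The trading-and-aggregation step can be sketched as follows (our own simplification: the LSTM forecaster is replaced by pre-computed predictions, and each trade is a single share):

```python
# Act on next-day predictions, then sum realised profit per sector.

def trade_profit(prices, predictions):
    """One-share long/short profit from acting on next-day predictions."""
    profit = 0.0
    for today, tomorrow, predicted in zip(prices, prices[1:], predictions):
        if predicted > today:          # model predicts a rise: go long
            profit += tomorrow - today
        else:                          # model predicts a fall: go short
            profit += today - tomorrow
    return profit

def sector_profitability(stocks):
    """stocks: {sector: [(prices, predictions), ...]} -> total profit per sector."""
    return {
        sector: sum(trade_profit(p, f) for p, f in entries)
        for sector, entries in stocks.items()
    }
```

Summing each stock's realised profit within a sector gives the per-sector profitability figures that the paper then ranks and compares.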

Wrapping up

The DLDC conference not only covers every aspect of deep learning but also encourages researchers to submit path-breaking research papers. ADaSci’s Lattice aims to publish high-quality research articles from data science and machine learning researchers and practitioners. Before being published, all of Lattice’s articles go through a thorough, double-blind review process. The journal maintains a roster of reviewers and editors, all affiliated with notable universities or organisations, who contribute to the functioning of the journal.

Copyright Analytics India Magazine Pvt Ltd
