
Top Deep Learning Based Time Series Methods

The components of a time series can be as complex and sophisticated as the data itself. With every passing second, the volume of data multiplies and modelling becomes trickier.

On social media platforms, for instance, data handling chores only get worse with growing popularity. Twitter stores 1.5 petabytes of logical time series data and handles 25K query requests per minute. There are even more critical applications of time series modelling, such as IoT and various edge devices: sensors in smart buildings, factories, power plants, and data centres generate vast amounts of multivariate time series data, and conventional anomaly detection methods are inadequate for the dynamic complexities of these systems.

Today, most state-of-the-art methods aim to leverage deep learning for time series modelling. In this article, we take a look at a few of the top works on deep learning-based time series methods published in the past couple of years.

Multivariate LSTM-FCNs

Year: 2018

The researchers transformed the univariate Long Short Term Memory Fully Convolutional Network (LSTM-FCN) and its attention-based variant (ALSTM-FCN) into multivariate time series classification models. The proposed models work efficiently on complex multivariate time series classification tasks such as activity or action recognition. They are also highly efficient at test time and small enough to deploy on memory-constrained systems.
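
As a rough illustration of the architecture, below is a minimal PyTorch sketch of an LSTM-FCN-style classifier for multivariate series: an LSTM branch summarises the sequence while a stack of 1-D convolutions with global average pooling extracts local shape features, and the two representations are concatenated for classification. Layer sizes, dropout, and the single-layer LSTM are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class LSTMFCN(nn.Module):
    def __init__(self, n_channels, n_classes, hidden=128):
        super().__init__()
        # LSTM branch: reads the series as (batch, time, channels)
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.dropout = nn.Dropout(0.8)
        # FCN branch: stacked 1-D convolutions over (batch, channels, time)
        self.fcn = nn.Sequential(
            nn.Conv1d(n_channels, 128, 8, padding=4), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, 256, 5, padding=2), nn.BatchNorm1d(256), nn.ReLU(),
            nn.Conv1d(256, 128, 3, padding=1), nn.BatchNorm1d(128), nn.ReLU(),
        )
        self.head = nn.Linear(hidden + 128, n_classes)

    def forward(self, x):                          # x: (batch, channels, time)
        _, (h, _) = self.lstm(x.transpose(1, 2))   # final LSTM hidden state
        lstm_feat = self.dropout(h[-1])
        fcn_feat = self.fcn(x).mean(dim=-1)        # global average pooling
        return self.head(torch.cat([lstm_feat, fcn_feat], dim=1))

model = LSTMFCN(n_channels=9, n_classes=6)
logits = model(torch.randn(4, 9, 128))             # e.g. 9 sensors, 128 timesteps
```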

Recurrent Conditional GANs

Year: 2018

Recurrent GANs use recurrent neural networks in both the generator and the discriminator. In recurrent conditional GANs (RCGANs), both of these RNNs are additionally conditioned on auxiliary information. RCGANs can generate time series data that is useful for supervised training, with only minor degradation in performance on real test data, and can be applied to medical time series generation.
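
A minimal PyTorch sketch of the idea follows: both networks are RNNs, and both see auxiliary condition vectors at every timestep. The LSTM cells, layer sizes, and the per-timestep discriminator output are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, noise_dim, cond_dim, feat_dim, hidden=100):
        super().__init__()
        self.rnn = nn.LSTM(noise_dim + cond_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, feat_dim)

    def forward(self, z, cond):                  # z, cond: (batch, time, dim)
        h, _ = self.rnn(torch.cat([z, cond], dim=-1))
        return torch.tanh(self.out(h))           # one synthetic sample per timestep

class Discriminator(nn.Module):
    def __init__(self, feat_dim, cond_dim, hidden=100):
        super().__init__()
        self.rnn = nn.LSTM(feat_dim + cond_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x, cond):                  # per-timestep real/fake logits
        h, _ = self.rnn(torch.cat([x, cond], dim=-1))
        return self.out(h)

G = Generator(noise_dim=5, cond_dim=4, feat_dim=3)
D = Discriminator(feat_dim=3, cond_dim=4)
z = torch.randn(8, 30, 5); c = torch.randn(8, 30, 4)
fake = G(z, c)                                   # (8, 30, 3) conditioned series
scores = D(fake, c)                              # (8, 30, 1) per-step judgements
```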

Deep Reinforcement Learning

Year: 2018

Deep Q-learning is investigated as an end-to-end solution for estimating the optimal strategies for acting on time series input. The univariate game tests whether the agent can capture the underlying dynamics, while the bivariate game tests whether the agent can exploit the hidden relation among the inputs. Stacked Gated Recurrent Unit (GRU), Long Short-Term Memory (LSTM), convolutional neural network (CNN), and multi-layer perceptron (MLP) models are used to estimate Q values. The GRU-based agents show the best overall performance in the univariate game, while the MLP-based agents outperform the others in the bivariate game.
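
To make the setup concrete, here is a minimal sketch of a GRU-based Q-network for time series input: the agent reads a window of past observations and emits one Q value per discrete action. The window length, hidden size, two-layer GRU, and the three-action set are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GRUQNet(nn.Module):
    def __init__(self, n_features, n_actions, hidden=64):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, num_layers=2, batch_first=True)
        self.q_head = nn.Linear(hidden, n_actions)

    def forward(self, x):                        # x: (batch, window, features)
        _, h = self.gru(x)
        return self.q_head(h[-1])                # (batch, n_actions) Q values

q_net = GRUQNet(n_features=1, n_actions=3)       # e.g. short / hold / long
window = torch.randn(16, 50, 1)                  # univariate game: one input series
q_values = q_net(window)
greedy_action = q_values.argmax(dim=1)           # epsilon-greedy would add exploration
```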

ROCKET

Year: 2019

The computational complexity of state-of-the-art time series classification methods demands long training times, even for smaller datasets, and most existing methods focus on a single type of feature, such as shape or frequency. With ROCKET, the researchers demonstrated that a simple linear classifier built on random convolutional kernels can attain state-of-the-art accuracy at a fraction of the computational cost of existing methods.
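
The sketch below captures the core idea in NumPy: convolve each series with many random kernels, keep two features per kernel (the maximum response and the proportion of positive values, "PPV"), and fit a ridge classifier on those features. The kernel sampling is simplified here; the full method also randomises dilation and padding.

```python
import numpy as np
from sklearn.linear_model import RidgeClassifierCV

def random_kernels(n_kernels, rng):
    kernels = []
    for _ in range(n_kernels):
        length = rng.choice([7, 9, 11])
        w = rng.normal(size=length)
        w -= w.mean()                            # mean-centred weights
        b = rng.uniform(-1, 1)                   # random bias
        kernels.append((w, b))
    return kernels

def transform(X, kernels):                       # X: (n_series, series_length)
    feats = np.zeros((len(X), 2 * len(kernels)))
    for i, x in enumerate(X):
        for k, (w, b) in enumerate(kernels):
            conv = np.convolve(x, w, mode="valid") + b
            feats[i, 2 * k] = conv.max()         # max response
            feats[i, 2 * k + 1] = (conv > 0).mean()   # PPV feature
    return feats

rng = np.random.default_rng(0)
X_train = rng.normal(size=(40, 150))             # toy data for illustration
y_train = rng.integers(0, 2, 40)
kernels = random_kernels(500, rng)
clf = RidgeClassifierCV(alphas=np.logspace(-3, 3, 10))
clf.fit(transform(X_train, kernels), y_train)    # simple linear classifier on top
```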

MAD-GAN

Year: 2019

Instead of treating each data stream independently, this multivariate anomaly detection method based on Generative Adversarial Networks (GANs) considers the entire variable set concurrently in order to capture the latent interactions amongst the variables. The generator and the discriminator of the GAN are both exploited through a novel anomaly score, called the DR-score, which detects anomalies via discrimination and reconstruction. According to the researchers, MAD-GAN is effective at reporting anomalies caused by various cyber-intrusions in complex real-world systems.
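
As a rough sketch of how a discrimination-plus-reconstruction score can be computed with a trained GAN, the snippet below searches for a latent sequence whose generated output best reconstructs a test window and then mixes the reconstruction error with the discriminator's judgement. The interfaces of G and D, the lambda weighting, and the optimisation settings are assumptions, not MAD-GAN's exact formulation.

```python
import torch

def anomaly_score(window, G, D, noise_dim, lam=0.5, steps=100, lr=0.01):
    """window: (1, time, features); G maps a latent sequence to a window, D scores a window."""
    # Invert the generator: find the latent sequence that best reconstructs the window.
    z = torch.randn(1, window.size(1), noise_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        ((G(z) - window) ** 2).mean().backward()
        opt.step()
    with torch.no_grad():
        recon_error = ((G(z) - window) ** 2).mean()       # reconstruction component
        disc = torch.sigmoid(D(window)).mean()            # discrimination component
    return lam * recon_error + (1 - lam) * (1 - disc)     # higher = more anomalous

# Toy stand-ins (a real setup would use trained recurrent G and D):
G = torch.nn.Linear(4, 3)                                 # latent dim 4 -> 3 variables per step
D = torch.nn.Linear(3, 1)                                 # per-step real/fake logit
score = anomaly_score(torch.randn(1, 30, 3), G, D, noise_dim=4)
```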

Shallow RNN

Year: 2019

Shallow RNNs were introduced to capture long-term dependencies while enabling parallelisation. The first layer splits the input into short bricks and runs an independent RNN on each; a second layer then consumes the outputs of the first layer using another RNN, thus capturing long-range dependencies. For time series classification, this technique leads to substantially improved inference time over standard RNNs without compromising accuracy.
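
Below is a minimal PyTorch sketch of that two-layer scheme: the series is reshaped into fixed-size bricks, one GRU summarises each brick independently (so bricks can run in parallel), and a second GRU runs over the per-brick summaries. The brick size, GRU cells, and hidden widths are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ShallowRNN(nn.Module):
    def __init__(self, n_features, n_classes, brick=16, hidden=64):
        super().__init__()
        self.brick = brick
        self.lower = nn.GRU(n_features, hidden, batch_first=True)   # within-brick RNN
        self.upper = nn.GRU(hidden, hidden, batch_first=True)       # across-brick RNN
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                          # x: (batch, time, features)
        b, t, f = x.shape
        assert t % self.brick == 0
        bricks = x.reshape(b * (t // self.brick), self.brick, f)
        _, h = self.lower(bricks)                  # summarise each brick independently
        summaries = h[-1].reshape(b, t // self.brick, -1)
        _, h2 = self.upper(summaries)              # capture long-range dependencies
        return self.head(h2[-1])

model = ShallowRNN(n_features=3, n_classes=5)
logits = model(torch.randn(8, 128, 3))             # 128 steps -> 8 bricks of 16
```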

Temporal Fusion Transformers

Year: 2019

The Temporal Fusion Transformer (TFT) is a novel attention-based architecture, which has been designed for multi-horizon forecasting problems that often contain a complex mix of static (i.e. time-invariant) covariates, known future inputs, and other exogenous time series that are only observed historically. Previous deep learning solutions do not account for the full range of inputs present in common scenarios. TFT utilises recurrent layers for local processing and interpretable self-attention layers for learning long-term dependencies. The TFT also uses specialised components for the judicious selection of relevant features and a series of gating layers to suppress unnecessary components, enabling high performance in a wide range of regimes. 
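
To give a flavour of those specialised components, the sketch below implements a gated residual network, one of TFT's building blocks: a small feed-forward transform whose output passes through a gated linear unit and is added back to the input with layer normalisation, so the network can suppress the block when it is not needed. The layer sizes are assumptions, and the full block also supports an optional context input, omitted here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedResidualNetwork(nn.Module):
    def __init__(self, d_model, d_hidden):
        super().__init__()
        self.fc1 = nn.Linear(d_model, d_hidden)
        self.fc2 = nn.Linear(d_hidden, d_model)
        self.gate = nn.Linear(d_model, 2 * d_model)     # feeds the gated linear unit
        self.norm = nn.LayerNorm(d_model)

    def forward(self, a):
        h = self.fc2(F.elu(self.fc1(a)))
        gated = F.glu(self.gate(h), dim=-1)             # gating can suppress the whole block
        return self.norm(a + gated)                     # residual connection + layer norm

grn = GatedResidualNetwork(d_model=32, d_hidden=64)
out = grn(torch.randn(4, 20, 32))                       # (batch, time, d_model)
```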

N-BEATS

Year: 2020

N-BEATS consists of backward and forward residual links and a very deep stack of fully-connected layers. The architecture allows the model to be interpretable, applicable without modification to a wide array of target domains, and fast to train. The first configuration of this model does not employ any time-series-specific components, and its performance on heterogeneous datasets strongly suggests that deep learning primitives such as residual blocks are sufficient to solve a wide range of forecasting problems.
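
A minimal PyTorch sketch of the doubly-residual idea follows: each block is a small fully-connected stack that emits a "backcast", which is subtracted from the block's input before it is passed on, and a "forecast", which is summed across blocks. The block count, widths, and the generic (non-interpretable) configuration are illustrative assumptions.

```python
import torch
import torch.nn as nn

class NBeatsBlock(nn.Module):
    def __init__(self, backcast_len, forecast_len, hidden=256, n_layers=4):
        super().__init__()
        layers, d = [], backcast_len
        for _ in range(n_layers):
            layers += [nn.Linear(d, hidden), nn.ReLU()]
            d = hidden
        self.mlp = nn.Sequential(*layers)
        self.backcast = nn.Linear(hidden, backcast_len)
        self.forecast = nn.Linear(hidden, forecast_len)

    def forward(self, x):
        h = self.mlp(x)
        return self.backcast(h), self.forecast(h)

class NBeats(nn.Module):
    def __init__(self, backcast_len, forecast_len, n_blocks=3):
        super().__init__()
        self.blocks = nn.ModuleList(
            NBeatsBlock(backcast_len, forecast_len) for _ in range(n_blocks))

    def forward(self, x):                         # x: (batch, backcast_len)
        forecast = 0
        for block in self.blocks:
            back, fore = block(x)
            x = x - back                          # backward residual link
            forecast = forecast + fore            # forward residual link
        return forecast

model = NBeats(backcast_len=60, forecast_len=12)
y_hat = model(torch.randn(8, 60))                 # forecast 12 steps from 60 past steps
```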

DROCC

Year: 2020

Deep Robust One-Class Classification (DROCC) is based on the assumption that the points from the class of interest lie on a well-sampled, locally linear, low-dimensional manifold. DROCC is highly effective on tabular data, images (CIFAR and ImageNet), audio, and time series, offering up to a 20% increase in accuracy over the state of the art in anomaly detection.
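
The rough PyTorch sketch below illustrates the training idea under that assumption: all training points are treated as the normal class, synthetic anomalies are generated by gradient ascent away from them and projected onto a sphere of a chosen radius around the data, and the classifier is trained to reject them. The network, radius, step sizes, and the exact projection are simplifications of the paper's procedure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def drocc_step(model, x, opt, radius=3.0, ascent_steps=5, ascent_lr=0.1):
    # 1) All training points belong to the class of interest (label 1).
    opt.zero_grad()
    loss_pos = F.binary_cross_entropy_with_logits(model(x), torch.ones(x.size(0), 1))

    # 2) Generate synthetic negatives: start from small perturbations, ascend the
    #    negative-class loss (so the points are ones the model still scores as
    #    positive), and project them onto a sphere of the chosen radius.
    delta = torch.randn_like(x) * 0.1
    for _ in range(ascent_steps):
        delta = delta.detach().requires_grad_(True)
        adv_loss = F.binary_cross_entropy_with_logits(
            model(x + delta), torch.zeros(x.size(0), 1))
        grad = torch.autograd.grad(adv_loss, delta)[0]
        delta = delta + ascent_lr * grad
        delta = radius * delta / delta.norm(dim=-1, keepdim=True).clamp_min(1e-8)

    # 3) Train the classifier to reject those synthetic anomalies.
    loss_neg = F.binary_cross_entropy_with_logits(
        model(x + delta.detach()), torch.zeros(x.size(0), 1))
    (loss_pos + loss_neg).backward()
    opt.step()

model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
drocc_step(model, torch.randn(32, 16), opt)        # one training step on toy features
```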

For a more comprehensive briefing on the state of deep learning-based time series applications, check this report.

Keep track of all the latest developments in the time series domain by following paperswithcode.

Ram Sagar

I have a master's degree in Robotics and I write about machine learning advancements.
