LSTM Model Architecture for Rare Event Time Series Forecasting

Applying LSTMs directly to time series forecasting has shown little success.

This is surprising, as neural networks are known to be able to learn complex non-linear relationships, and the LSTM is perhaps the most successful type of recurrent neural network capable of directly supporting multivariate sequence prediction problems.

A recent study performed at Uber AI Labs demonstrates how both the automatic feature learning capabilities of LSTMs and their ability to handle input sequences can be harnessed in an end-to-end model for driver demand forecasting during rare events such as public holidays.

In this post, you will discover an approach to developing a scalable end-to-end LSTM model for time series forecasting.

After reading this post, you will know:

  • The challenge of multivariate, multi-step forecasting across multiple sites, in this case cities.
  • An LSTM model architecture for time series forecasting comprised of separate autoencoder and forecasting sub-models.
  • The skill of the proposed LSTM architecture at rare event demand forecasting and the ability to reuse the trained model on unrelated forecasting problems.

Discover how to build models for multivariate and multi-step time series forecasting with LSTMs and more in my new book, with 25 step-by-step tutorials and full source code.

Let’s get started.

Overview

In this post, we will review the 2017 paper titled “Time-series Extreme Event Forecasting with Neural Networks at Uber” by Nikolay Laptev, et al. presented at the Time Series Workshop, ICML 2017.

This post is divided into four sections; they are:

  1. Motivation
  2. Datasets
  3. Model
  4. Findings

Motivation

The goal of the work was to develop an end-to-end forecast model for multi-step time series forecasting that can handle multivariate inputs (e.g. multiple input time series).

The intent of the model was to forecast driver demand at Uber for ride sharing, specifically to forecast demand on challenging days such as holidays where the uncertainty for classical models was high.

Generally, this type of demand forecasting for holidays belongs to an area of study called extreme event prediction.

Extreme event prediction has become a popular topic for estimating peak electricity demand, traffic jam severity and surge pricing for ride sharing and other applications. In fact there is a branch of statistics known as extreme value theory (EVT) that deals directly with this challenge.

Time-series Extreme Event Forecasting with Neural Networks at Uber, 2017.

Two existing approaches were described:

  • Classical Forecasting Methods: Where a model was developed per time series, perhaps fit as needed.
  • Two-Step Approach: Where classical models were used in conjunction with machine learning models.

The difficulty of these existing models motivated the desire for a single end-to-end model.

Further, a model was required that could generalize across locales, specifically across data collected for each city. That is, a model trained on data from some or all cities could be used to make forecasts across some or all cities.

We can summarize this as the general need for a model that supports multivariate inputs, makes multi-step forecasts, and generalizes across multiple sites, in this case cities.


Datasets

The model was fit on a proprietary Uber dataset comprised of five years of anonymized ride-sharing data across top cities in the US.

A five year daily history of completed trips across top US cities in terms of population was used to provide forecasts across all major US holidays.

Time-series Extreme Event Forecasting with Neural Networks at Uber, 2017.

The input to each forecast consisted of both the information about each ride, as well as weather, city, and holiday variables.

To circumvent the lack of data we use additional features including weather information (e.g., precipitation, wind speed, temperature) and city level information (e.g., current trips, current users, local holidays).

Time-series Extreme Event Forecasting with Neural Networks at Uber, 2017.

The figure below, taken from the paper, provides a sample of six variables for one year.

Scaled Multivariate Input for Model
Taken from “Time-series Extreme Event Forecasting with Neural Networks at Uber”.

A training dataset was created by splitting the historical data into sliding windows of input and output variables.

The specific sizes of the look-back window and forecast horizon used in the experiments were not specified in the paper.

Sliding Window Approach to Modeling Time Series
Taken from “Time-series Extreme Event Forecasting with Neural Networks at Uber”.
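
As a rough illustration of the sliding window framing, here is a minimal Python sketch; the look-back and horizon lengths below are hypothetical, since the paper does not report them.

import numpy as np

def sliding_windows(series, n_in, n_out):
    # series: 1D array of observations, n_in: look-back window, n_out: forecast horizon
    X, y = [], []
    for i in range(len(series) - n_in - n_out + 1):
        X.append(series[i:i + n_in])
        y.append(series[i + n_in:i + n_in + n_out])
    return np.array(X), np.array(y)

# e.g. 30 days of history to predict the next 7 days (illustrative sizes only)
series = np.arange(100, dtype=float)
X, y = sliding_windows(series, n_in=30, n_out=7)
print(X.shape, y.shape)  # (64, 30) (64, 7)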

Time series data was scaled by normalizing observations per batch of samples, and each input series was de-trended, but not de-seasonalized.

Neural networks are sensitive to unscaled data, therefore we normalize every minibatch. Furthermore, we found that de-trending the data, as opposed to de-seasoning, produces better results.

Time-series Extreme Event Forecasting with Neural Networks at Uber, 2017.
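
A minimal sketch of this kind of preprocessing, assuming standardization per minibatch and de-trending by first differencing (the exact transforms used at Uber are not given):

import numpy as np

def detrend(series):
    # remove the trend with a first difference; the inverse (cumulative sum) is applied to forecasts
    return np.diff(series)

def normalize_batch(batch):
    # standardize each minibatch of samples to zero mean and unit variance
    mean, std = batch.mean(axis=0), batch.std(axis=0) + 1e-8
    return (batch - mean) / std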

Model

LSTMs, e.g. Vanilla LSTMs, were evaluated on the problem and showed relatively poor performance.

This is not surprising as it mirrors findings elsewhere.

Our initial LSTM implementation did not show superior performance relative to the state of the art approach.

Time-series Extreme Event Forecasting with Neural Networks at Uber, 2017.

A more elaborate architecture was used, comprised of two LSTM models:

  • Feature Extractor: Model for distilling an input sequence down to a feature vector that may be used as input for making a forecast.
  • Forecaster: Model that uses the extracted features and other inputs to make a forecast.

An LSTM autoencoder model was developed for use as the feature extraction model and a Stacked LSTM was used as the forecast model.

We found that the vanilla LSTM model’s performance is worse than our baseline. Thus, we propose a new architecture, that leverages an autoencoder for feature extraction, achieving superior performance compared to our baseline.

Time-series Extreme Event Forecasting with Neural Networks at Uber, 2017.

When making a forecast, time series data is first provided to the autoencoders, which compress it into multiple feature vectors that are averaged and concatenated. The feature vectors are then provided as input to the forecast model in order to make a prediction.

… the model first primes the network by auto feature extraction, which is critical to capture complex time-series dynamics during special events at scale. […] Features vectors are then aggregated via an ensemble technique (e.g., averaging or other methods). The final vector is then concatenated with the new input and fed to LSTM forecaster for prediction.

Time-series Extreme Event Forecasting with Neural Networks at Uber, 2017.

It is not clear what exactly is provided to the autoencoder when making a prediction, although we may guess that it is a multivariate time series for the city being forecasted with observations prior to the interval being forecasted.

A multivariate time series as input to the autoencoder will result in multiple encoded vectors (one for each series) that could be concatenated. It is not clear what role averaging may take at this point, although we may guess that it is an averaging of multiple models performing the autoencoding process.
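
One plausible reading, and it is only a guess, is that each input series is encoded, the encodings from an ensemble of autoencoders are averaged, and the per-series vectors are then concatenated, e.g.:

import numpy as np

# stand-in encoder outputs: n_models autoencoders, n_series input series, 32-dim encodings (all assumed)
n_models, n_series, enc_dim = 3, 6, 32
encodings = np.random.rand(n_models, n_series, enc_dim)

averaged = encodings.mean(axis=0)       # average across the ensemble of autoencoders
feature_vector = averaged.reshape(-1)   # concatenate the per-series encodings into one vector
print(feature_vector.shape)             # (192,)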

Overview of Feature Extraction Model and Forecast Model
Taken from “Time-series Extreme Event Forecasting with Neural Networks at Uber.”

The authors comment that it would be possible to make the autoencoder a part of the forecast model, and that this was evaluated, but the separate model resulted in better performance.

Having a separate auto-encoder module, however, produced better results in our experience.

Time-series Extreme Event Forecasting with Neural Networks at Uber, 2017.

More details of the developed model were made available in the slides used when presenting the paper.

The input to the autoencoder used 512 LSTM units, and the bottleneck in the autoencoder used to create the encoded feature vectors had 32 or 64 LSTM units.

Details of LSTM Autoencoder for Feature Extraction
Taken from “Time-series Extreme Event Forecasting with Neural Networks at Uber.”
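
Based on those slide details, a minimal Keras sketch of the autoencoder might look like the following; the window length, number of features, and single-layer encoder/decoder arrangement are my assumptions.

from keras.models import Model
from keras.layers import Input, LSTM, RepeatVector, TimeDistributed, Dense

n_steps, n_features = 30, 1  # hypothetical window length and number of input series

inputs = Input(shape=(n_steps, n_features))
encoded = LSTM(512, return_sequences=True)(inputs)   # 512-unit encoder, as per the slides
bottleneck = LSTM(32)(encoded)                       # 32-unit bottleneck (slides mention 32 or 64)
decoded = RepeatVector(n_steps)(bottleneck)
decoded = LSTM(512, return_sequences=True)(decoded)
outputs = TimeDistributed(Dense(n_features))(decoded)

autoencoder = Model(inputs, outputs)   # trained to reconstruct the input window
encoder = Model(inputs, bottleneck)    # reused later for feature extraction
autoencoder.compile(optimizer='adam', loss='mse')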

The encoded feature vectors are provided to the forecast model along with ‘new input’, although it is not specified what this new input is. We could guess that it is a time series, perhaps a multivariate time series for the city being forecasted with observations prior to the forecast interval, or features extracted from such a series, as the blog post on the paper suggests (although I’m skeptical, as the paper and slides contradict this).
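
Whatever the ‘new input’ is, the forecaster can be sketched as a stacked LSTM that takes a window of recent observations together with the encoded feature vector; how the two are combined below (tiling and concatenation) is my assumption, not the paper’s specification.

from keras.models import Model
from keras.layers import Input, LSTM, Dense, RepeatVector, concatenate

n_steps, n_features, enc_dim, n_out = 30, 1, 32, 7  # hypothetical sizes

recent = Input(shape=(n_steps, n_features))   # the 'new input': recent observations
features = Input(shape=(enc_dim,))            # encoded feature vector from the autoencoder

tiled = RepeatVector(n_steps)(features)       # repeat the feature vector across time steps
merged = concatenate([recent, tiled])         # join features with the observations

x = LSTM(128, return_sequences=True)(merged)  # stacked LSTM forecaster
x = LSTM(64)(x)
outputs = Dense(n_out)(x)                     # multi-step forecast

forecaster = Model([recent, features], outputs)
forecaster.compile(optimizer='adam', loss='mse')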

The model was trained on a lot of data, which is a general requirement of stacked LSTMs or perhaps LSTMs in general.

The described production Neural Network Model was trained on thousands of time-series with thousands of data points each.

Time-series Extreme Event Forecasting with Neural Networks at Uber, 2017.

The model is not retrained when making new forecasts.

An interesting approach to estimating forecast uncertainty was also implemented that used the bootstrap.

It involved estimating model uncertainty and forecast uncertainty separately, using the autoencoder and the forecast model respectively. Inputs were provided to a given model and dropout of the activations (as commented in the slides) was used. This process was repeated 100 times, and the model and forecast error terms were used in an estimate of the forecast uncertainty.
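
As a minimal sketch of the repeated-forward-pass idea (this assumes a tf.keras model where calling it with training=True keeps dropout active; combining the result with the irreducible error term is omitted):

import numpy as np

def forecast_with_uncertainty(model, x, n_repeats=100):
    # repeated forward passes with dropout left on, per the MC dropout / bootstrap idea
    preds = np.stack([model(x, training=True).numpy() for _ in range(n_repeats)])
    mean = preds.mean(axis=0)        # point forecast
    variance = preds.var(axis=0)     # model uncertainty estimate (irreducible error added separately)
    return mean, variance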

Overview of Forecast Uncertainty Estimation
Taken from “Time-series Extreme Event Forecasting with Neural Networks at Uber.”

This approach to forecast uncertainty may be better described in the 2017 paper “Deep and Confident Prediction for Time Series at Uber.”

Findings

The model was evaluated with a special focus on demand forecasting for U.S. holidays by U.S. city.

The specifics of the model evaluation were not specified.

The new generalized LSTM forecast model was found to outperform the existing model used at Uber, which may be impressive if we assume that the existing model was well tuned.

The results presented show a 2%-18% forecast accuracy improvement compared to the current proprietary method comprising a univariate timeseries and machine learned model.

Time-series Extreme Event Forecasting with Neural Networks at Uber, 2017.

The model trained on the Uber dataset was then applied directly to a subset of the M3-Competition dataset comprised of about 1,500 monthly univariate time series forecasting datasets.

This is a type of transfer learning, a highly-desirable goal that allows the reuse of deep learning models across problem domains.

Surprisingly, the model performed well: not great compared to the top-performing methods, but better than many sophisticated models. The result suggests that perhaps with fine tuning (e.g. as is done in other transfer learning case studies) the model could be reused and be skillful.

Performance of LSTM Model Trained on Uber Data and Evaluated on the M3 Datasets
Taken from “Time-series Extreme Event Forecasting with Neural Networks at Uber.”

Importantly, the authors suggest that the most beneficial applications of deep LSTM models to time series forecasting are situations where:

  • There are a large number of time series.
  • There are a large number of observations for each series.
  • There is a strong correlation between time series.

From our experience there are three criteria for picking a neural network model for time-series: (a) number of timeseries (b) length of time-series and (c) correlation among the time-series. If (a), (b) and (c) are high then the neural network might be the right choice, otherwise classical timeseries approach may work best.

Time-series Extreme Event Forecasting with Neural Networks at Uber, 2017.

This is summarized well by a slide used in the presentation of the paper.

Lessons Learned Applying LSTMs for Time Series Forecasting
Taken from “Time-series Extreme Event Forecasting with Neural Networks at Uber” Slides.

Further Reading

This section provides more resources on the topic if you are looking to go deeper.

Summary

In this post, you discovered a scalable end-to-end LSTM model for time series forecasting.

Specifically, you learned:

  • The challenge of multivariate, multi-step forecasting across multiple sites, in this case cities.
  • An LSTM model architecture for time series forecasting comprised of separate autoencoder and forecasting sub-models.
  • The skill of the proposed LSTM architecture at rare event demand forecasting and the ability to reuse the trained model on unrelated forecasting problems.

Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.




45 Responses to LSTM Model Architecture for Rare Event Time Series Forecasting

  1. Valentin Nagacevschi November 2, 2018 at 6:33 pm #

    Hi,

    Is there a way to identify and remove outliers from data sets without affecting rare events?
    Or how not to mistakenly have outliers as rare events ?
    Thanks

    Vali

    • Jason Brownlee November 3, 2018 at 7:01 am #

      You must carefully define what you mean by “outlier” and “rare event” so that the methods that detect the former don’t detect the latter.

      • Valentin Nagacevschi November 3, 2018 at 4:52 pm #

        Outliers usually are anomalies which are abnormal, i.e. outside a normal distribution, something like mean +/- 2*std. In a time series, outliers are spikes, with a much higher frequency than the normal signal, even compared with rare events.
        For instance, a Black Friday is a rare event but fits within the normal frequency, whereas an outlier has a much higher frequency.
        So how can I bring the frequency part into the equation?
        Thanks

        • Jason Brownlee November 4, 2018 at 6:25 am #

          Good question, I don’t have material on this topic so I can’t give you good off the cuff advice.

          I may cover the topic in the future.

        • Kevin Van Horn May 23, 2019 at 7:25 am #

          “Higher frequency” means more often. I think you mean that true outliers have a much *lower* frequency. But even that isn’t necessarily accurate. I’ve seen web traffic time series that have occasional spikes that correspond to no known event, occurring in some cases more commonly than the few known special events.

  2. Ian Downard November 7, 2018 at 4:45 am #

    Thanks for the post. Do you know where an implementation for this algorithm can be found?

  3. MANISH KUMAR November 17, 2018 at 12:01 pm #

    I didn’t understand this paper, as it includes terms like time series, multivariate, LSTM, and recurrent model.

  4. Juninho December 20, 2018 at 3:52 am #

    Hi,

    Thanks for this article. I’m trying to implement this paper using the Tensorflow low-level api.
    Can you explain more about the confidence interval computation, please?

    I mean, once you have the uncertainty error and the irreducible error, how can you get the interval through MC Dropout?

    Thanks a lot

    • Jason Brownlee December 20, 2018 at 6:30 am #

      Perhaps check the paper or contact the author of the paper, it has been months since I read the paper.

  5. sophia December 24, 2018 at 12:09 pm #

    Very well explained, as always! A lot of your other articles contain code that helps us understand the concepts better. I’m sure you’re very busy, but it’d be great if you could add code to this post, or point me to some articles/repos that have some code related to this post. Thanks,

  6. André January 9, 2019 at 1:35 pm #

    Hi Jason,

    I’m a Master’s degree student and I got interested in applying this approach to climate data series. In my research I have an additional challenge (dimensions): the latitude and longitude of an extreme or rare event.

    I created a time series by downloading 10 years of ERA Interim, Daily: Pressure and surface data from ECMWF (https://apps.ecmwf.int/datasets/data/interim-full-daily/levtype=sfc/).

    So, as an example, I’m interested in predicting extreme rain (> 50mm in 24h) for a selected area (0.75 resolution): latitude from -18.75 to -20.25 and longitude from 315.0 to 316.5. It’s a 3 x 3 = 9 cell grid.

    The rain (total precipitation in mm) doesn’t have a Gaussian distribution, so there are a lot of 0mm days, and the time series of rain is not a continuous sequence.

    In your experience, can this Uber approach fit despite the distribution problem? I have some doubts about the approach, like how this “LSTM Autoencoder for Feature Extraction” works. Do you expect to code a complete example like this Uber approach?

    • Jason Brownlee January 10, 2019 at 7:45 am #

      I don’t know how this approach will fare with your data, perhaps try it and see?

  7. Savan Gowda January 10, 2019 at 10:52 pm #

    Hi Jason,

    Thank you for the explanation of this paper.

    I have one question and maybe you could help me with that. The LSTM Autoencoder that I created looks like this —

    from keras.layers import Input, Dense, RepeatVector, TimeDistributed, CuDNNLSTM
    from keras.models import Model

    inputs = Input(shape=(n_steps, input_dim))
    encoder1 = CuDNNLSTM(128, return_sequences=True)(inputs)
    encoder2 = CuDNNLSTM(64, return_sequences=True)(encoder1)
    encoder3 = CuDNNLSTM(32)(encoder2)

    repeat = RepeatVector(10)(encoder3)

    decoder1 = CuDNNLSTM(32, return_sequences=True)(repeat)
    decoder2 = CuDNNLSTM(64, return_sequences=True)(decoder1)
    decoder3 = CuDNNLSTM(128, return_sequences=True)(decoder2)
    dense1 = TimeDistributed(Dense(100, activation='relu'))(decoder3)
    dense2 = TimeDistributed(Dense(1))(dense1)

    sequence_autoencoder = Model(inputs, dense2)
    encoder_model = Model(inputs, repeat)

    Should we extract the feature from the “repeat” layer or the “encoder3” layer?

    Could you please give me a hint for plotting/visualizing the extracted features?

    Thanks & Regards
    Savan

    • Jason Brownlee January 11, 2019 at 7:51 am #

      I’m eager to help, but I don’t have the capacity to debug your code, sorry.

      • Savan Gowda January 11, 2019 at 10:19 pm #

        Thank you for the answer Jason! You need not be sorry 🙂 Do you have any example code or could you suggest me some methods with which I can visualize the feature vectors?

        Thanks you 🙂

        • Jason Brownlee January 12, 2019 at 5:41 am #

          You can use a PCA to visualize high-dimensional vectors.

  8. manish February 9, 2019 at 5:34 pm #

    Where can I find the dataset for this Uber paper? Could you please send it to me?
    And how do I implement this?

  9. MANISH KUMAR February 10, 2019 at 1:41 am #

    Please send me the dataset for this paper. I need it desperately for my research work, please help me.

  10. MANISH KUMAR February 18, 2019 at 4:32 pm #

    Please provide me with a downloadable file of the data and explain how to implement it.

  11. MANISH KUMAR May 11, 2019 at 3:28 pm #

    Can you provide me with thesis work related to this topic of rare events? Please help me with the implementation.

  12. Ahmad May 19, 2019 at 9:57 pm #

    Hi Jason, thank you for the post. Please! what is the difference between Monte Carlo dropout and normal dropout? Do you have a link to any tutorial that shows how to add Monte Carlo dropout to the LSTM model implementation?

    Thank you!

    • Jason Brownlee May 20, 2019 at 6:29 am #

      What is monte carlo dropout?

      • Ahmad May 20, 2019 at 10:09 am #

        It is a stochastic dropout used as a Bayesian approximation for model uncertainty estimation. It is equivalent to performing T stochastic forward passes through the Neural Network and averaging the result. It can also be approximated by averaging the weights of the NN (i.e. multiplying each weight by a probability p at test time). MC dropout is used for model uncertainty estimation in the paper you elaborated on and the one you provided as reference (“Deep and Confident Prediction for Time Series at Uber”) in this post.

        • Jason Brownlee May 20, 2019 at 2:36 pm #

          Thanks!

          • Ahmad May 20, 2019 at 6:35 pm #

            Please! help me with any tutorial that shows how it can be implemented using the LSTM model

          • Jason Brownlee May 21, 2019 at 6:30 am #

            Thanks for the suggestion, I may be able to cover it in the future.

  13. Marco Cerliani May 22, 2019 at 4:57 pm #

    I made a post where I replicate these results. You can find the article here: https://towardsdatascience.com/extreme-event-forecasting-with-lstm-autoencoders-297492485037 (with Python Code)

  14. Marco May 25, 2019 at 5:11 am #

    Good job. But I must say that I’m sick of reading this incomplete paper. “new input” is something not specified clearly in any part of the paper. They publish a paper and they hide some details or make them obscure. What’s the point?

    • Jason Brownlee May 25, 2019 at 7:54 am #

      Papers are always incomplete, they are just enough to give you a rough idea – which might be enough.

      It’s a pain. And unless a paper has associated code it is almost fraud – they can make up anything.

      Thankfully, most good papers have associated github project – this never used to be the case.

  15. Parth June 30, 2019 at 11:24 pm #

    How many time series are sufficient for training these networks? (The authors suggest that a large number of time series is needed for this type of network to succeed, but how many?)

    Can you please give some number to have rough idea?
    Thanks for very insightful post!

  16. Nour Attaallah August 9, 2019 at 7:55 am #

    Is there a way to separate overlapped events in a time series trace?

    • Jason Brownlee August 9, 2019 at 8:19 am #

      I guess it depends on the data.

      • Nouraldin Attaallah August 13, 2019 at 2:54 am #

        So each individual event in the trace has its unique duration and volume (y-value). An overlapped event will look like a block of stacked rectangular events. Any suggestions? Thanks

        • Jason Brownlee August 13, 2019 at 6:14 am #

          Not off hand, some research may be required. Perhaps try some searches on scholar.google.com

          • Nouraldin Attaallah August 14, 2019 at 4:55 am #

            Okay
            Thanks 😀
