
Instability of Online Learning for Stateful LSTM for Time Series Forecasting

Some neural network configurations can result in an unstable model.

This can make them hard to characterize and compare to other model configurations on the same problem using descriptive statistics.

One good example of a seemingly unstable model is the use of online learning (a batch size of 1) for a stateful Long Short-Term Memory (LSTM) model.

In this tutorial, you will discover how to explore the results of a stateful LSTM fit using online learning on a standard time series forecasting problem.

After completing this tutorial, you will know:

  • How to design a robust test harness for evaluating LSTM models on time series forecasting problems.
  • How to analyze a population of results, including summary statistics, spread, and distribution of results.
  • How to analyze the impact of increasing the number of repeats for an experiment.

Kick-start your project with my new book Deep Learning for Time Series Forecasting, including step-by-step tutorials and the Python source code files for all examples.

Let’s get started.

  • Updated Apr/2019: Updated the link to dataset.
Instability of Online Learning for Stateful LSTM for Time Series Forecasting
Photo by Magnus Brath, some rights reserved.

Model Instability

When you train the same network on the same data more than once, you may get very different results.

This is because neural networks are initialized randomly and the optimization nature of how they are fit to the training data can result in different final weights within the network. These different networks can in turn result in varied predictions given the same input data.

As a result, it is important to repeat any experiment on neural networks multiple times to find an averaged expected performance.

For more on the stochastic nature of machine learning algorithms like neural networks, see the post:

The batch size in a neural network defines how often the weights within the network are updated given exposure to a training dataset.

A batch size of 1 means that the network weights are updated after each single row of training data. This is called online learning. The result is a network that can learn quickly, but a configuration that can be quite unstable.

In this tutorial, we will explore the instability of online learning for a stateful LSTM configuration for time series forecasting.

We will explore this by looking at the average performance of an LSTM configuration on a standard time series forecasting problem over a variable number of repeats of the experiment.

That is, we will re-train the same model configuration on the same data many times, look at the performance of the model on a hold-out dataset, and review how unstable the model can be.

Tutorial Overview

This tutorial is broken down into 6 parts. They are:

  1. Shampoo Sales Dataset
  2. Experimental Test Harness
  3. Code and Collect Results
  4. Basic Statistics on Results
  5. Repeats vs Test RMSE
  6. Review of Results

Environment

This tutorial assumes you have a Python SciPy environment installed. You can use either Python 2 or 3 with this example.

This tutorial assumes you have Keras v2.0 or higher installed with either the TensorFlow or Theano backend.

This tutorial also assumes you have scikit-learn, Pandas, NumPy, and Matplotlib installed.

Next, let’s take a look at a standard time series forecasting problem that we can use as context for this experiment.

If you need help setting up your Python environment, see this post:


Shampoo Sales Dataset

This dataset describes the monthly number of sales of shampoo over a 3-year period.

The units are a sales count and there are 36 observations. The original dataset is credited to Makridakis, Wheelwright, and Hyndman (1998).

The example below loads the dataset and creates a line plot of it.
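A minimal sketch of this step is shown below. It assumes the dataset has been downloaded and saved to the working directory as 'shampoo-sales.csv' (the filename and column layout are assumptions of this sketch).

from pandas import read_csv
from matplotlib import pyplot

# load the dataset; the CSV is assumed to have columns: Month, Sales
series = read_csv('shampoo-sales.csv', header=0, index_col=0)
# summarize the first 5 rows
print(series.head())
# line plot of the series
series.plot()
pyplot.show()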

Running the example loads the dataset as a Pandas Series and prints the first 5 rows.

A line plot of the series is then created showing a clear increasing trend.

Line Plot of Shampoo Sales Dataset

Next, we will take a look at the LSTM configuration and test harness used in the experiment.

Experimental Test Harness

This section describes the test harness used in this tutorial.

Data Split

We will split the Shampoo Sales dataset into two parts: a training and a test set.

The first two years of data will be taken for the training dataset and the remaining one year of data will be used for the test set.

Models will be developed using the training dataset and will make predictions on the test dataset.

The persistence forecast (naive forecast) on the test dataset achieves an error of 136.761 monthly shampoo sales. This provides a lower bound of acceptable performance on the test set: a skillful model must achieve a test error below this.
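A sketch of how this baseline can be computed is shown below, under the same file assumption as above.

from math import sqrt
from pandas import read_csv
from sklearn.metrics import mean_squared_error

# load the data and split into the first 24 months (train) and last 12 months (test)
series = read_csv('shampoo-sales.csv', header=0, index_col=0)
X = series.values[:, 0]
train, test = X[0:-12], X[-12:]

# walk-forward persistence: forecast each month as the previous month's value
history = [x for x in train]
predictions = list()
for i in range(len(test)):
    predictions.append(history[-1])  # naive forecast
    history.append(test[i])          # make the actual observation available
rmse = sqrt(mean_squared_error(test, predictions))
print('Persistence RMSE: %.3f' % rmse)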

Model Evaluation

A rolling-forecast scenario will be used, also called walk-forward model validation.

Each time step of the test dataset will be walked one at a time. A model will be used to make a forecast for the time step, then the actual expected value from the test set will be taken and made available to the model for the forecast on the next time step.

This mimics a real-world scenario where new Shampoo Sales observations would be available each month and used in the forecasting of the following month.

This will be simulated by the structure of the train and test datasets.

All forecasts on the test dataset will be collected and an error score calculated to summarize the skill of the model. The root mean squared error (RMSE) will be used as it punishes large errors and results in a score that is in the same units as the forecast data, namely monthly shampoo sales.

Data Preparation

Before we can fit an LSTM model to the dataset, we must transform the data.

The following three data transforms are performed on the dataset prior to fitting a model and making a forecast.

  1. Transform the time series data so that it is stationary. Specifically, a lag=1 differencing to remove the increasing trend in the data.
  2. Transform the time series into a supervised learning problem. Specifically, the organization of data into input and output patterns where the observation at the previous time step is used as an input to forecast the observation at the current time step.
  3. Transform the observations to have a specific scale. Specifically, to rescale the data to values between -1 and 1 to match the default hyperbolic tangent activation function of the LSTM model.

These transforms are inverted on forecasts to return them to their original scale before calculating an error score.
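A sketch of these three transforms is shown below. The helper function names (difference, timeseries_to_supervised, scale) are illustrative choices for this sketch.

from pandas import DataFrame, Series, concat
from sklearn.preprocessing import MinMaxScaler

# 1. remove the increasing trend with lag=1 differencing
def difference(dataset, interval=1):
    return Series([dataset[i] - dataset[i - interval] for i in range(interval, len(dataset))])

# 2. frame the series as supervised learning: observation at t-1 as input, t as output
def timeseries_to_supervised(data, lag=1):
    df = DataFrame(data)
    columns = [df.shift(i) for i in range(1, lag + 1)]
    columns.append(df)
    df = concat(columns, axis=1)
    return df.fillna(0)

# 3. fit a scaler on the training data and rescale to [-1, 1] for the tanh activation
def scale(train):
    scaler = MinMaxScaler(feature_range=(-1, 1))
    scaler = scaler.fit(train)
    return scaler, scaler.transform(train)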

LSTM Model

We will use a base stateful LSTM model with 1 neuron fit for 1000 epochs.

A batch size of 1 is required as we will be using walk-forward validation and making one-step forecasts for each of the final 12 months of test data.

A batch size of 1 means that the model will be fit using online training (as opposed to batch training or mini-batch training). As a result, it is expected that the model fit will have some variance.

Ideally, more training epochs would be used (such as 1500), but this was truncated to 1000 to keep run times reasonable.

The model will be fit using the efficient Adam optimization algorithm and the mean squared error loss function.
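A sketch of such a model definition and training loop is shown below, assuming the Keras Sequential API and the transforms above; the fit_lstm name and exact structure are assumptions of this sketch.

from keras.models import Sequential
from keras.layers import Dense, LSTM

def fit_lstm(train, batch_size, nb_epoch, neurons):
    # train is a 2D array of [input, output] rows after the transforms above
    X, y = train[:, 0:-1], train[:, -1]
    X = X.reshape(X.shape[0], 1, X.shape[1])  # [samples, timesteps, features]
    model = Sequential()
    # a stateful LSTM requires a fixed batch input shape; batch_size=1 gives online learning
    model.add(LSTM(neurons, batch_input_shape=(batch_size, X.shape[1], X.shape[2]), stateful=True))
    model.add(Dense(1))
    model.compile(loss='mean_squared_error', optimizer='adam')
    # run epochs manually so the internal state can be reset at the end of each epoch
    for i in range(nb_epoch):
        model.fit(X, y, epochs=1, batch_size=batch_size, verbose=0, shuffle=False)
        model.reset_states()
    return model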

Experimental Runs

Each experimental scenario will be run 100 times and the RMSE score on the test set will be recorded at the end of each run.

All test RMSE scores are written to file for later analysis.

Let’s dive into the experiments.

Code and Collect Results

The complete code listing is provided below.

It may take a few hours to run on modern hardware.
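A condensed sketch of the experiment is shown below. It builds on the helper functions sketched in the previous sections; the inversion logic is an assumption of this sketch.

from math import sqrt
from pandas import read_csv, DataFrame
from sklearn.metrics import mean_squared_error

def invert_scale(scaler, X, yhat):
    # invert the [-1, 1] scaling for a single forecast
    row = [x for x in X] + [yhat]
    inverted = scaler.inverse_transform([row])
    return inverted[0, -1]

# load and transform the data using the helpers sketched above
series = read_csv('shampoo-sales.csv', header=0, index_col=0)
raw_values = series.values[:, 0]
diff_values = difference(raw_values, 1).values
supervised_values = timeseries_to_supervised(diff_values, 1).values
train, test = supervised_values[0:-12], supervised_values[-12:]
scaler, train_scaled = scale(train)
test_scaled = scaler.transform(test)

# repeat the experiment 100 times
error_scores = list()
for r in range(100):
    lstm_model = fit_lstm(train_scaled, 1, 1000, 1)
    # walk-forward validation over the final 12 months of test data
    predictions = list()
    for i in range(len(test_scaled)):
        X = test_scaled[i, 0:-1]
        yhat = lstm_model.predict(X.reshape(1, 1, len(X)), batch_size=1)[0, 0]
        yhat = invert_scale(scaler, X, yhat)
        yhat = yhat + raw_values[-13 + i]  # invert the lag=1 differencing
        predictions.append(yhat)
    rmse = sqrt(mean_squared_error(raw_values[-12:], predictions))
    print('%d) Test RMSE: %.3f' % (r + 1, rmse))
    error_scores.append(rmse)

# save the test RMSE scores to file for later analysis
DataFrame({'rmse': error_scores}).to_csv('experiment_stateful.csv', index=False)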

Running the experiment saves the RMSE scores of the fit model on the test dataset.

Results are saved to the file “experiment_stateful.csv”.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

A truncated listing of the results is provided below.

Basic Statistics on Results

We can start off by calculating some basic statistics on the entire population of 100 test RMSE scores.

Generally, we expect machine learning results to have a Gaussian distribution. This allows us to report the mean and standard deviation of a model and indicate a confidence interval for the model when making predictions on unseen data.

The snippet below loads the result file and calculates some descriptive statistics.
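A minimal sketch of this analysis is shown below. It assumes the scores were saved to 'experiment_stateful.csv' as above; the same listing also creates the box and whisker plot and histogram discussed next.

from pandas import read_csv
from matplotlib import pyplot

# load the saved test RMSE scores
results = read_csv('experiment_stateful.csv', header=0)
# descriptive statistics: count, mean, std, min, quartiles, max
print(results.describe())
# box and whisker plot of the distribution of scores
results.boxplot()
pyplot.show()
# histogram of the raw RMSE scores
results.hist()
pyplot.show()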

Running the example prints descriptive statistics from the results.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

We can see that on average, the configuration achieved an RMSE of about 107 monthly shampoo sales with a standard deviation of about 17.

We can also see that the best test RMSE observed was about 90 sales, whereas the worst was just under 200, which is quite a spread of scores.

To get a better idea of the spread of the data, a box and whisker plot is also created.

The plot shows the median (green line), the middle 50% of the data (box), and outliers (dots). We can see quite a spread in the data towards poor (large) RMSE scores.

Box and Whisker Plot of 100 Test RMSE Scores on the Shampoo Sales Dataset

A histogram of the raw result values is also created.

The plot suggests a skewed or even an exponential distribution with a mass around an RMSE of 100 and a long tail leading out towards an RMSE of 200.

The distribution of the results is clearly not Gaussian. This is unfortunate, as the mean and standard deviation cannot be used directly to estimate a confidence interval for the model (e.g. a 95% interval as roughly 2 standard deviations around the mean).

The skewed distribution also highlights that the median (50th percentile) would be a better measure of central tendency than the mean for these results. The median should be more robust to outlier results than the mean.

Histogram of Test RMSE Scores on Shampoo Sales Dataset

Repeats vs Test RMSE

We can start to look at how the summary statistics for the experiment change as the number of repeats is increased from 1 to 100.

We can accumulate the test RMSE scores and calculate descriptive statistics. For example, the score from one repeat, the scores from the first and second repeats, the scores from the first 3 repeats, and so on to 100 repeats.

We can review how the central tendency changes as the number of repeats is increased as a line plot. We’ll look at both the mean and median.

Generally, we would expect that as the number of repeats of the experiment is increased, the distribution would increasingly better match the underlying distribution, including the central tendency, such as the mean.

The complete code listing is provided below.
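A sketch of this cumulative analysis is shown below, again assuming the scores in 'experiment_stateful.csv'.

from numpy import mean, median
from pandas import read_csv, DataFrame
from matplotlib import pyplot

results = read_csv('experiment_stateful.csv', header=0)
values = results.values[:, 0]
means, medians = list(), list()
# accumulate scores one repeat at a time and track the summary statistics
for i in range(1, len(values) + 1):
    partial = values[0:i]
    means.append(mean(partial))
    medians.append(median(partial))
    print(i, means[-1], medians[-1])
# line plot of mean and median vs the number of repeats
DataFrame({'mean': means, 'median': medians}).plot()
pyplot.show()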

The cumulative size of the distribution, mean, and median is printed as the number of repeats is increased.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

A truncated output is listed below.

A line plot is also created showing how the mean and median change as the number of repeats is increased.

The results show that the mean is more influenced by outlier results than the median, as expected.

We can see that the median appears quite stable at around 99-100. It jumps to 102 towards the end of the plot, suggesting a string of worse RMSE scores at later repeats.

Line Plots of Mean and Median Test RMSE vs Number of Repeats

Review of Results

We made some useful observations from 100 repeats of a stateful LSTM on a standard time series forecasting problem.

Specifically:

  • We observed that the distribution of results is not Gaussian. It may be a skewed Gaussian or an exponential distribution with a long tail and outliers.
  • We observed that the distribution of results did not stabilize with the increase of repeats from 1 to 100.

The observations suggest a few important properties:

  • The choice of online learning for the LSTM and problem results in a relatively unstable model.
  • The chosen number of repeats (100) may not be sufficient to characterize the behavior of the model.

This is a useful finding as it would be a mistake to make strong conclusions about the model from 100 or fewer repeats of the experiment.

This is an important caution to consider when describing your own machine learning results.

This suggests some extensions to this experiment, such as:

  • Explore the impact of the number of repeats on a more stable model, such as one using batch or mini-batch learning.
  • Increase the number of repeats to thousands or more in an attempt to account for the general instability of the model with online learning.

Summary

In this tutorial, you discovered how to analyze experimental results from LSTM models fit using online learning.

You learned:

  • How to design a robust test harness for evaluating LSTM models on time series forecasting problems.
  • How to analyze experimental results, including summary statistics.
  • How to analyze the impact of increasing the number of experiment repeats and how to identify an unstable model.

Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.


33 Responses to Instability of Online Learning for Stateful LSTM for Time Series Forecasting

  1. Cao May 31, 2017 at 8:48 pm #

    Dear Jason,

    Thank you as always for your posts, which are really helpful.

    I am a beginner in machine learning, and I have been stuck on some questions for a long time.

    1. What is online learning? Is batch size=1 online and dynamic?

    2. How do I set up an LSTM structure that can dynamically take in new data every minute, update the weights and biases (or something else), and then predict the data for each next minute?

    Thank you again, Jason.

    • Jason Brownlee June 2, 2017 at 12:47 pm #

      Online learning means the model is updated after each training pattern.

      The structure does not change. You need to find a structure that achieves good results on your problem.

  2. Norman March 16, 2018 at 7:00 am #

    Dear Jason,

    thank you for sharing your knowledge!
    Is your LSTM stateful?
    To my mind, you have to set stateful=True in line 66, or does this happen automatically with batch_size=1?

    Best regards

    • Norman March 16, 2018 at 8:09 am #

      I’m sorry I didn’t see the stateful=True

    • Jason Brownlee March 16, 2018 at 2:22 pm #

      Yes, it means that state is only reset when done so explicitly.

  3. Mathias A March 24, 2018 at 2:16 am #

    I have seen you use this structure a lot:

    for i in range(nb_epoch):
        model.fit(X, y, epochs=1, batch_size=batch_size, verbose=0, shuffle=False)
        model.reset_states()

    I found out that callbacks in keras can be used instead of wrapping fit() in a loop:

    # define callback class
    class ModelStateReset(keras.callbacks.Callback):
        def on_epoch_end(self, epoch, logs={}):
            self.model.reset_states()
    reset = ModelStateReset()

    # use reset callback instance in fit
    model.fit(x, y, epochs=nb_epoch, batch_size=batch_size, shuffle=False, callbacks=[reset])

    Just a heads up if you missed this. I personally prefer using fit()’s own parameters over the loop 😉

  4. Carlos B April 8, 2018 at 12:11 am #

    Thank you for this post, Jason. You have cleared up my confusion regarding the results from my initial LSTM tests for my research. Are all LSTMs stochastic by nature, because of the random weights initialisation?

  5. Harsh June 28, 2018 at 12:05 am #

    Hi Jason,
    Thanks for your wonderful posts!

    I am working with time series data and am using LSTMs for that. I am doing online learning (batch_size=1) with stateful LSTMs for a univariate series and considering timesteps=200.

    This is how the training is being done:

    for i in range(NUM_EPOCHS):
        print("Epoch {:d}/{:d}".format(i+1, NUM_EPOCHS))
        model.fit(Xtrain, Ytrain, batch_size=BATCH_SIZE, epochs=1, verbose=1, callbacks=callbacks_list, shuffle=False)
        model.reset_states()

    where callbacks_list is just for early stopping and reducing LR on plateau.

    And for predicting:

    predictions = model.predict(Xtest, batch_size=BATCH_SIZE)

    Since I am doing online learning, when new data comes in (let's say every hour), I don't want to retrain the model from scratch. The idea would be to just update the trained model with the new data and make a prediction for it. So, do I just save the weights after fitting the training data and load the saved weights before predicting? Would that be enough? Will it also save the state, as I am using a stateful LSTM? Could you please tell me if I am thinking about this in the right way, or suggest something else?

    • Jason Brownlee June 28, 2018 at 6:22 am #

      Yes, you can update the weights with new data.

      Generally, I would encourage you to try other methods as I have found LSTMs to be very poor at time series forecasting compared to other methods.

      • Anas July 11, 2018 at 11:24 pm #

        Hi Jason,
        Thank you for your post.
        Could you suggest me some of these methods for time series forecasting?

        • Jason Brownlee July 12, 2018 at 6:26 am #

          Yes, try a suite of classical methods (SARIMA, ETS, etc.), a suite of ML methods (linear and nonlinear), and a suite of deep learning methods (MLP, CNN, LSTM, etc.)

          • Anas July 13, 2018 at 5:25 pm #

            Thank you Jason for your answer.

    • Narendran Raghavan April 10, 2019 at 4:17 am #

      Hi Harsh, I am doing a similar problem as yours in online time series forecasting using LSTMs. I wanted your input and help in my problem. Is there a good way to contact you? Thanks, Narendran

  6. Harsh June 28, 2018 at 10:17 pm #

    I already implemented ARIMA but it's too slow, as it needs to be retrained every time new data comes in. And finding the values of p and q using grid search consumes a lot of time.

    That's why I switched to stateful LSTMs, and surprisingly I am getting better results with the LSTM, and it is faster as well.

    I have few more questions:

    1. I am getting different predictions every time I run the model. Do you know how we can get consistent results every time? I saw you have used repeats and taken the average RMSE, but I want to know if there is a way to get the same predictions.

    2. Once I save the model weights after fitting the training data and load the weights before predicting, would it also load the state that existed when we saved the weights? Do you have an example where you are doing more than a basic LSTM example, e.g. how one would implement it in a real-time scenario (end-to-end model)?

  7. Gabriel Mouzella Silva September 30, 2018 at 5:28 am #

    Hi Jason,

    Could an already-trained stable network be retrained online and still be stable? Since the weights will not be random, it would be something like doing transfer learning.

    I ask that because I'm working with an already-trained LSTM; however, I'm facing the concept drift problem and I'm forced to retrain my model every now and then so the forecast remains good enough for my application.

  8. Sus November 19, 2018 at 8:00 am #

    Hi Jason,

    Thanks for the great tutorials.

    A few questions:

    – From the tutorial I assume a solution for this problem is increasing the batch size? Do you have any tutorial for a stateful LSTM network using a batch size > 1?

    – Is it possible to use the early stopping callback here?

    – How about transforming the data to Gaussian (as mentioned here: https://machinelearningmastery.com/how-to-transform-data-to-fit-the-normal-distribution/)?

    Thanks in advance!!!

  9. Mitchel Offiong September 14, 2019 at 12:35 am #

    Hello Brownlee,

    Thank you for this heads-up. I am interested in online training and prediction for real-time time-series data.

    If your book covers it, I would like to have it, and possibly a step-by-step guide, as I am relatively new to the weight-update concept of real-time prediction. Thanks

    • Jason Brownlee September 14, 2019 at 6:20 am #

      What do you mean by real time?

      Do you mean refitting a model after each new observation?

      Or do you mean simply making predictions as needed?

      • James Bowery January 9, 2020 at 9:32 am #

        “Realtime” in the context of a blog post about “Online” means both predicting each new observation and refitting the model after each new observation.

        • Jason Brownlee January 9, 2020 at 1:49 pm #

          I don’t really have much on updating models with new samples.

          Generally, it sounds straightforward, as long as you verify with controlled experiments that your chosen update schedule results in skillful models. No point in updating the model (weights) if it does not improve the performance.

  10. FrederikB February 7, 2020 at 3:14 am #

    Hi Jason, I've been learning a lot from your posts, thanks!!

    On this topic: I have a few local weather sensors that I get data from every 10 minutes, and I'd like to test what sort of outcome I can get from incremental learning on this new incoming data vs a pre-trained model.

    1 – What I'm thinking is appending the new data onto the big dataset and then using a slice to sample a prediction set equal to my timesteps, to make a prediction 1 step into the future only. (I chose 10 minutes just because I'll get more data regularly to test with.) And then display that in the app.

    2 – After the previous step, I create a single train set: X[-60 :-1 , : ] and y[-1,0] from the main dataset (timeSteps = 60) that I fit to the model with batch_size=1 to get the latest updates.

    I’ll be using a stacked LSTM.

    I'm busy designing/figuring out how to design it, but I'd like realistic feedback before I go too far, or any suggestions on how else to approach this?
    Much appreciated.

  11. Andrey August 26, 2020 at 8:15 am #

    Hi Jason, thanks for sharing this valuable information. I would appreciate it if you could respond to my question. For my project I divided my data into train, validation, and test sets. I wonder what to do after selecting the structure that works best for my problem (I did this using the validation set): should I re-train the model including the train and validation data? Or is there a way to update the model to include the validation data before predicting on the test set? This is on my mind because when I re-train the model with the selected structure, I feel that the new data affects the results I obtained before.

    Thanks a lot for your blog, it was very helpful for the purpose of my project.

    Regards!

    • Jason Brownlee August 26, 2020 at 1:43 pm #

      You’re welcome.

      Good question, ideally you want to train a final model with all available data if you can.
