
How to Seed State for LSTMs for Time Series Forecasting in Python

Long Short-Term Memory networks, or LSTMs, are a powerful type of recurrent neural network capable of learning long sequences of observations.

A promise of LSTMs is that they may be effective at time series forecasting, although the method is known to be difficult to configure and use for these purposes.

A key feature of LSTMs is that they maintain an internal state that can aid in the forecasting. This raises the question of how best to seed the state of a fit LSTM model prior to making a forecast.

In this tutorial, you will discover how to design, execute, and interpret the results from an experiment to explore whether it is better to seed the state of a fit LSTM from the training dataset or to use no prior state.

After completing this tutorial, you will know:

  • About the open question of how to best initialize the state of a fit LSTM for forecasting.
  • How to develop a robust test harness for evaluating LSTM models on univariate time series forecasting problems.
  • How to determine whether or not seeding the state of your LSTM prior to forecasting is a good idea on your time series forecasting problem.

Kick-start your project with my new book Deep Learning for Time Series Forecasting, including step-by-step tutorials and the Python source code files for all examples.

Let’s get started.

  • Updated Apr/2019: Updated the link to the dataset.
How to Seed State for LSTMs for Time Series Forecasting in Python
Photo by Tony Hisgett, some rights reserved.

Tutorial Overview

This tutorial is broken down into 5 parts; they are:

  1. Seeding LSTM State
  2. Shampoo Sales Dataset
  3. LSTM Model and Test Harness
  4. Code Listing
  5. Experimental Results

Environment

This tutorial assumes you have a Python SciPy environment installed. You can use either Python 2 or 3 with this example.

You must have Keras (version 2.0 or higher) installed with either the TensorFlow or Theano backend.

The tutorial also assumes you have scikit-learn, Pandas, NumPy, and Matplotlib installed.

If you need help setting up your Python environment, see this post:

Seeding LSTM State

When using stateful LSTMs in Keras, you have fine-grained control over when the internal state of the model is cleared.

This is achieved using the model.reset_states() function.

When training a stateful LSTM, it is important to clear the state of the model between training epochs, so that the state built up over one epoch's pass through the sequence of observations does not carry over into the next epoch.
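As a fragment, assuming model is a compiled stateful Keras LSTM and X and y are the prepared input and output arrays, the per-epoch pattern looks like this:

for i in range(nb_epoch):
    # one pass over the data in sequence order, then clear the accumulated state
    model.fit(X, y, epochs=1, batch_size=batch_size, verbose=0, shuffle=False)
    model.reset_states()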

Given that we have this fine-grained control, there is a question as to whether or not and how to initialize the state of the LSTM prior to making a forecast.

The options are:

  • Reset state prior to forecasting.
  • Initialize state with the training dataset prior to forecasting.

It is assumed that initializing the state of the model using the training data would be superior, but this needs to be confirmed with experimentation.

Additionally, there may be multiple ways to seed this state; for example:

  • Complete a training epoch, including weight updates. For example, do not reset at the end of the last training epoch.
  • Complete a forecast of the training data.

Generally, it is believed that both of these approaches would be somewhat equivalent. The latter, forecasting the training dataset, is preferred because it does not require any modification to the network weights and could be a repeatable procedure for an immutable network saved to file.

In this tutorial, we will consider the difference between two scenarios, with a code sketch of both after the list:

  • Forecasting a test dataset using a fit LSTM with no state (e.g. after a reset).
  • Forecasting a test dataset with a fit LSTM with state after having forecast the training dataset.
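Below is a minimal, self-contained sketch of the two options using a stateful LSTM in Keras on synthetic data; the layer size, shapes, and values are illustrative only and do not match the experiment's final configuration.

import numpy as np
from keras.models import Sequential
from keras.layers import Dense, LSTM

# illustrative synthetic data: 20 samples of (1 time step, 1 feature)
train_X = np.random.rand(20, 1, 1)
train_y = np.random.rand(20, 1)

# a small stateful LSTM so that we control when state is cleared
model = Sequential()
model.add(LSTM(4, batch_input_shape=(1, 1, 1), stateful=True))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(train_X, train_y, epochs=1, batch_size=1, verbose=0, shuffle=False)

# Option 1: forecast the test data with no prior state
model.reset_states()

# Option 2: seed the state by forecasting the training data first
model.reset_states()
model.predict(train_X, batch_size=1)
# the internal state now reflects a pass over the full training sequence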

Next, let’s take a look at a standard time series dataset we will use in this experiment.

Shampoo Sales Dataset

This dataset describes the monthly number of sales of shampoo over a 3-year period.

The units are a sales count and there are 36 observations. The original dataset is credited to Makridakis, Wheelwright, and Hyndman (1998).

The example below loads the dataset and creates a line plot.
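A sketch of the loading step is below. It assumes the dataset has been downloaded and saved locally as shampoo-sales.csv, and it uses the date_parser argument as it existed in the pandas versions current at the time of writing (the argument has since been deprecated); the custom parser expands the file's 'X-YY' month strings into full dates.

from datetime import datetime
from pandas import read_csv
from matplotlib import pyplot

# expand month strings like '1-01' into full dates like 1901-01
def parser(x):
    return datetime.strptime('190' + x, '%Y-%m')

# load as a date-indexed Series (the local filename is an assumption)
series = read_csv('shampoo-sales.csv', header=0, parse_dates=[0],
    index_col=0, date_parser=parser)
series = series.squeeze()
print(series.head())
series.plot()
pyplot.show()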

Running the example loads the dataset as a Pandas Series and prints the first 5 rows.

A line plot of the series is then created showing a clear increasing trend.

Line Plot of Shampoo Sales Dataset

Next, we will take a look at the LSTM configuration and test harness used in the experiment.


LSTM Model and Test Harness

Data Split

We will split the Shampoo Sales dataset into two parts: a training and a test set.

The first two years of data will be taken for the training dataset and the remaining one year of data will be used for the test set.

Models will be developed using the training dataset and will make predictions on the test dataset.
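In code, this split is a single slice of the prepared values, where supervised_values is assumed to hold the dataset after the data preparation described below:

# last 12 monthly observations for test, the rest for training
train, test = supervised_values[0:-12], supervised_values[-12:]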

Model Evaluation

A rolling-forecast scenario will be used, also called walk-forward model validation.

Each time step of the test dataset will be walked one at a time. A model will be used to make a forecast for the time step, then the actual expected value from the test set will be taken and made available to the model for the forecast on the next time step.

This mimics a real-world scenario where new Shampoo Sales observations would be available each month and used in the forecasting of the following month.

This will be simulated by the structure of the train and test datasets: rather than re-fitting or updating the model after each step, all of the test-set forecasts will be made in one shot.

All forecasts on the test dataset will be collected and an error score calculated to summarize the skill of the model. The root mean squared error (RMSE) will be used as it punishes large errors and results in a score that is in the same units as the forecast data, namely monthly shampoo sales.
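As a sketch, the RMSE can be computed from the collected forecasts with scikit-learn; the two short lists here are placeholders standing in for the real observations and forecasts:

from math import sqrt
from sklearn.metrics import mean_squared_error

# placeholder values standing in for the test-set observations and forecasts
expected = [339.7, 440.4, 315.9]
predictions = [330.0, 425.1, 330.2]

# RMSE keeps the score in the original units: monthly shampoo sales
rmse = sqrt(mean_squared_error(expected, predictions))
print('Test RMSE: %.3f' % rmse)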

Data Preparation

Before we can fit an LSTM model to the dataset, we must transform the data.

The following three data transforms are performed on the dataset prior to fitting a model and making a forecast, with a code sketch after the list.

  1. Transform the time series data so that it is stationary. Specifically, a lag=1 differencing to remove the increasing trend in the data.
  2. Transform the time series into a supervised learning problem. Specifically, the organization of data into input and output patterns, where the observation at the previous time step is used as the input to forecast the observation at the current time step.
  3. Transform the observations to have a specific scale. Specifically, to rescale the data to values between -1 and 1 to suit the default hyperbolic tangent activation function of the LSTM model.
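The sketch below shows one way to implement the three transforms; the function names and structure are assumptions in the style of this tutorial series, not the post's verbatim listing.

from pandas import DataFrame, Series, concat
from sklearn.preprocessing import MinMaxScaler

# 1. difference the series to remove the increasing trend (lag=1)
def difference(dataset, interval=1):
    diff = [dataset[i] - dataset[i - interval] for i in range(interval, len(dataset))]
    return Series(diff)

# 2. frame as supervised learning: the prior observation is the input
def timeseries_to_supervised(data, lag=1):
    df = DataFrame(data)
    columns = [df.shift(i) for i in range(1, lag + 1)]
    columns.append(df)
    df = concat(columns, axis=1)
    return df.fillna(0)

# 3. rescale to [-1, 1] to suit the LSTM's default tanh activation
scaler = MinMaxScaler(feature_range=(-1, 1))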

LSTM Model

An LSTM model configuration will be used that is skillful but untuned.

This means that the model will be fit to the data and will be able to make meaningful forecasts, but will not be the optimal model for the dataset.

The network topology consists of 1 input, a hidden layer with 4 units, and an output layer with 1 output value.

The model will be fit for 3,000 epochs with a batch size of 4. The training dataset will be reduced to 20 observations after data preparation, so that the batch size divides evenly into both the training dataset and the test dataset (a requirement when using a stateful LSTM in Keras).
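A sketch of the fit procedure under this configuration is below, wrapping the per-epoch reset pattern shown earlier; it is consistent with the call fit_lstm(train_trimmed, batch_size, 3000, 4) that appears in the discussion, but the body is a reconstruction rather than the post's verbatim listing.

from keras.models import Sequential
from keras.layers import Dense, LSTM

# fit a stateful LSTM; 'train' is assumed to be a 2D array of [input, output] rows
def fit_lstm(train, batch_size, nb_epoch, neurons):
    X, y = train[:, 0:-1], train[:, -1]
    X = X.reshape(X.shape[0], 1, X.shape[1])
    model = Sequential()
    model.add(LSTM(neurons, batch_input_shape=(batch_size, X.shape[1], X.shape[2]), stateful=True))
    model.add(Dense(1))
    model.compile(loss='mean_squared_error', optimizer='adam')
    for i in range(nb_epoch):
        # one pass in sequence order, then clear state before the next epoch
        model.fit(X, y, epochs=1, batch_size=batch_size, verbose=0, shuffle=False)
        model.reset_states()
    return model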

Experimental Run

Each scenario will be run 30 times.

This means that 30 models will be created and evaluated for each scenario. The RMSE from each run will be collected, providing a population of results that can be summarized using descriptive statistics such as the mean and standard deviation.

This is required because neural networks like the LSTM are influenced by their initial conditions (e.g. their initial random weights).

The mean results for each scenario will allow us to interpret the average behavior of each scenario and how they compare.
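A sketch of the summarization step is below; the two short lists are placeholders standing in for the 30 RMSE scores collected per scenario.

from pandas import DataFrame
from matplotlib import pyplot

# placeholder populations; the real lists hold 30 RMSE scores each
with_seed = [186.4, 190.2, 183.7]
without_seed = [146.6, 150.1, 143.9]

# descriptive statistics and a distribution comparison for each scenario
results = DataFrame()
results['with-seed'] = with_seed
results['without-seed'] = without_seed
print(results.describe())
results.boxplot()
pyplot.savefig('boxplot.png')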

Let’s dive into the results.

Code Listing

Key modular behaviors were separated into functions for readability and testability, in case you would like to reuse this experimental setup.

The specifics of the scenarios are described in the experiment() function.
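The decisive difference between the two scenarios reduces to a single branch. As a fragment (the surrounding names such as train_trimmed and batch_size come from the full listing, and train_reshaped is assumed to be the training inputs reshaped for prediction):

# fit the model, then optionally seed its state by forecasting the
# training data before any test-set forecast is made
lstm_model = fit_lstm(train_trimmed, batch_size, 3000, 4)
if seed:
    lstm_model.predict(train_reshaped, batch_size=batch_size)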

The complete code listing is provided below.

Experimental Results

Running the experiment takes some time on CPU or GPU hardware.

The RMSE of each run is printed to give an idea of progress.

At the end of the run, the summary statistics are calculated and printed for each scenario, including the mean and standard deviation.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

The complete output is listed below.

A box and whisker plot is also created and saved to file, shown below.

Box and Whisker Plot of LSTM with and Without Seed of State

The results are surprising.

They suggest that better results are achieved by not seeding the state of the LSTM prior to forecasting the test dataset.

This can be seen in the lower average error of 146.600505 monthly shampoo sales without seeding, compared to 186.432143 with seeding. The difference is even clearer in the box and whisker plot of the distributions.

Perhaps the chosen configuration resulted in a model too small to depend on its internal state enough to benefit from seeding prior to forecasting. Perhaps larger experiments are required.

Extensions

The surprising results open the door to further experimentation.

  • Evaluate the effect of clearing vs not clearing the state after the end of the last training epoch.
  • Evaluate the effect of predicting the training and test sets all at once vs one time step at a time.
  • Evaluate the effect of resetting and not resetting the LSTM state at the end of each epoch.

Did you try one of these extensions? Share your findings in the comments below.

Summary

In this tutorial, you discovered how to experimentally determine the best way to seed the state of an LSTM model on a univariate time series forecasting problem.

Specifically, you learned:

  • About the problem of seeding the state of an LSTM prior to forecasting and ways to address it.
  • How to develop a robust test harness for evaluating LSTM models for time series forecasting.
  • How to determine whether or not to seed the state of an LSTM model with the training data prior to forecasting.

Did you run the experiment or run a modified version of the experiment?
Share your results in the comments; I’d love to see them.

Do you have any questions about this post?
Ask your questions in the comments below and I will do my best to answer.


25 Responses to How to Seed State for LSTMs for Time Series Forecasting in Python

  1. andrew April 10, 2017 at 7:59 pm

    Great article!! Was wondering, is this going to be extended to the multivariate case?
    Many thanks,
    Best,
    Andrew

    • Jason Brownlee April 11, 2017 at 9:32 am

      Thanks Andrew, yes I will have posts on the multivariate case soon.

  2. Klaas Brau April 11, 2017 at 4:50 am

    Thanks Jason.
    I received the following error:

    ValueError: In a stateful network, you should only pass inputs with a number of samples that can be divided by the batch size. Found: 19 samples

    I only changed epochs to nb_epoch=1 in the model.fit loop because otherwise I receive:

    TypeError: Received unknown keyword arguments: {'epochs': 1}

    any suggestions?
    Thanks

    • Jason Brownlee April 11, 2017 at 9:36 am

      It looks like you need to update your Keras to version 2.0 or higher.

    • huang August 8, 2017 at 1:35 am

      It's a magical article. I got a lot from this article.

      I'm a freshman in deep learning and Keras, and I met the same problem. I noticed that the length of the variable "train_scaled" is 21,

      train_trimmed = train_scaled[2:, :]

      and the author made train_trimmed begin at index 2, so I tried changing it to

      train_trimmed = train_scaled[1:, :]

      maybe that could work.

  3. leslie April 12, 2017 at 4:17 pm

    Thanks for the article. But I was wondering, where is the shampoo-sales.csv?

  4. Dan April 15, 2017 at 4:00 pm

    Installed the latest Theano and TensorFlow versions and received the following error. Any ideas? Thank you.

    Traceback (most recent call last):
    File "C:/Users/Myamoto/PycharmProjects/01.04.17_SentdexTensorflowWHD/errorfindentensorundkeras.py", line 143, in
    with_seed = experiment(repeats, series, True)
    File "C:/Users/Myamoto/PycharmProjects/01.04.17_SentdexTensorflowWHD/errorfindentensorundkeras.py", line 111, in experiment
    lstm_model = fit_lstm(train_trimmed, batch_size, 3000, 4)
    File "C:/Users/Myamoto/PycharmProjects/01.04.17_SentdexTensorflowWHD/errorfindentensorundkeras.py", line 81, in fit_lstm
    model.fit(X, y, epochs=1, batch_size=batch_size, verbose=0, shuffle=False)
    File "C:\Users\Myamoto\Anaconda3\lib\site-packages\keras\models.py", line 853, in fit
    initial_epoch=initial_epoch)
    File "C:\Users\Myamoto\Anaconda3\lib\site-packages\keras\engine\training.py", line 1406, in fit
    batch_size=batch_size)
    File "C:\Users\Myamoto\Anaconda3\lib\site-packages\keras\engine\training.py", line 1318, in _standardize_user_data
    str(x[0].shape[0]) + ' samples')
    ValueError: In a stateful network, you should only pass inputs with a number of samples that can be divided by the batch size. Found: 19 samples

    • Jason Brownlee April 16, 2017 at 9:27 am

      Yes, you need to split your data up differently or change the batch size.

  5. Hans July 5, 2017 at 4:18 pm

    Thank you Jason for this tutorial.

    The code above shows me that ‘without-seed’ is the way to go, with my own raw_data too.

    I was also able to feed the model with individual hyperparameters like batch size, neurons, epochs, etc. from a bigger batch environment.

    And now my personal never ending obstacle.

    If I reduce the repeats to one and say…

    # split data into train and test-sets
    instead of:
    train, test = supervised_values[0:-12], supervised_values[-12:]

    train, test = supervised_values[0:-1], supervised_values[-1:]

    …I get a ‘real live prediction’ via the fitted model, of the last data value contained in my own raw data.

    But I need a one step ahead forecast of unseen data.

    I was able to do this with some other models, but never succeeded with the LSTM model.

    How would you complement this code to make a one-step-ahead prediction of unseen data?

    • Jason Brownlee July 6, 2017 at 10:23 am

      Do not use this setup to make predictions. This setup is for evaluating model skill.

      Fit the model on all data, then call model.predict(x) where x is the input required to make an out of sample prediction.

      • Hans July 8, 2017 at 6:12 am

        Is there an example of how to finalize an LSTM model, including a one-step-ahead prediction?

        • Hans July 8, 2017 at 6:31 pm

          Actually, I thought I could use the functions fit_lstm and forecast_lstm standalone. Why not do so?

          • Hans July 8, 2017 at 8:22 pm

            I have:

            unseenPredict = forecast_lstm(classRefSample.lstm_model, classRefSample.batch_size, X)

            While storing a trained model and parameters from the experiment in an external class object.

            My problem is the structure of X.

            Namely, how to involve test_reshaped, test_scaled, invert_scale, and inverse_difference standalone, with a last known observation from the test partition, which only consists of one data row.

            Is there a guide on how to use these methods standalone?

  6. Hans July 9, 2017 at 1:04 am

    How can we convert test_reshaped to its original value?

  7. Slawek August 24, 2017 at 9:00 am

    Hi,
    You have only one time series. What is the meaning of batch size > 1?

    • Jason Brownlee August 24, 2017 at 4:25 pm

      If you have one sample and batch size is > 1, then it has no effect. The batch size will be 1 (as far as I know).

  8. Simone August 29, 2017 at 8:43 am

    Thanks for this useful post,
    Is it possible to seed the network using fit() (on the training set, without resetting the state) instead of using predict()?

  9. Zhaoyang Liu December 24, 2017 at 4:53 pm

    When the LSTM is set to be stateful, does the batch size need to be 1 when predicting a univariate time series?

  10. hamd March 6, 2019 at 7:04 pm

    Why does it give me bad results when increasing the split data percentage?
    Shouldn't the results improve?

    • Jason Brownlee March 7, 2019 at 6:45 am

      Perhaps the model has less data to train on, or the estimate of error is now more accurate?

  11. lingyaw April 8, 2019 at 9:32 pm

    I really love your great articles! I found almost everything I need. Thank you so much!

    I would like to use an LSTM to solve a sequential data prediction problem (multi-step forecasting / sequence-to-sequence?).

    It is a very simple case and has only 1 sample. The history data has 1-30 observations in sequence (one observation(value) at one step).
    At the 10th step index, 10 observations (values) in sequence, predict the value at the 30th step index.
    At the 11th step, 11 observations in sequence (a new observation), predict the value at the 30th step…..

    Is it a seq2seq problem or multi-step forecasting?

    As you mentioned, we should tune the LSTM or seed its state to find a good LSTM model before making predictions. I bought the ebooks "Long Short-Term Memory Networks With Python" and "Deep Learning for Time Series Forecasting". But I didn't find a complete code example for multi-step forecasting/seq2seq that covers tuning the LSTM model, seeding state before forecasting, and updating the model when a new observation is available (all the processes needed to make a relatively accurate prediction).

    I wonder if you have such an example somewhere (blog or ebook), but I didn’t find it…..

    Thank you so much!
