Stateful and Stateless LSTM for Time Series Forecasting with Python

The Keras Python deep learning library supports both stateful and stateless Long Short-Term Memory (LSTM) networks.

When using stateful LSTM networks, we have fine-grained control over when the internal state of the LSTM network is reset. Therefore, it is important to understand how the different ways of managing this internal state when fitting and making predictions with LSTM networks affect the skill of the network.

In this tutorial, you will explore the performance of stateful and stateless LSTM networks in Keras for time series forecasting.

After completing this tutorial, you will know:

  • How to compare and contrast stateful and stateless LSTM networks for time series forecasts.
  • How the batch size in stateless LSTMs relates to stateful LSTM networks.
  • How to evaluate and compare different state resetting regimes for stateful LSTM networks.

Let’s get started.

  • Updated Apr/2019: Updated the link to dataset.
Stateful and Stateless LSTM for Time Series Forecasting with Python
Photo by m01229, some rights reserved.

Tutorial Overview

This tutorial is broken down into 7 parts. They are:

  1. Shampoo Sales Dataset
  2. Experimental Test Harness
  3. A vs A Test
  4. Stateful vs Stateless
  5. Stateless With Large Batch vs Stateless
  6. Stateful Resetting vs Stateless
  7. Review of Findings

Environment

This tutorial assumes you have a Python SciPy environment installed. You can use either Python 2 or 3 with this example.

This tutorial assumes you have Keras v2.0 or higher installed with either the TensorFlow or Theano backend.

This tutorial also assumes you have scikit-learn, Pandas, NumPy, and Matplotlib installed.

If you need help setting up your Python environment, see this post:


Shampoo Sales Dataset

This dataset describes the monthly number of sales of shampoo over a 3-year period.

The units are a sales count and there are 36 observations. The original dataset is credited to Makridakis, Wheelwright, and Hyndman (1998).

The example below loads the dataset and creates a line plot of it.
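
A minimal sketch of such a loading script is shown below; the filename 'shampoo-sales.csv' is an assumption about where the downloaded data has been saved, with a "Month" index column and a single "Sales" column.

from pandas import read_csv
from matplotlib import pyplot

# load the dataset; 'shampoo-sales.csv' is an assumed filename for the downloaded data
series = read_csv('shampoo-sales.csv', header=0, index_col=0)
series = series.iloc[:, 0]  # take the single Sales column as a Series
print(series.head())
series.plot()
pyplot.show()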

Running the example loads the dataset as a Pandas Series and prints the first 5 rows.

A line plot of the series is then created showing a clear increasing trend.

Line Plot of Shampoo Sales Dataset

Next, we will take a look at the LSTM configuration and test harness used in the experiment.

Experimental Test Harness

This section describes the test harness used in this tutorial.

Data Split

We will split the Shampoo Sales dataset into two parts: a training and a test set.

The first two years of data will be taken for the training dataset and the remaining one year of data will be used for the test set.

Models will be developed using the training dataset and will make predictions on the test dataset.

The persistence forecast (naive forecast) on the test dataset achieves an error of 136.761 monthly shampoo sales. This provides a baseline on the test set: an acceptable model must achieve a lower error.

Model Evaluation

A rolling-forecast scenario will be used, also called walk-forward model validation.

Each time step of the test dataset will be walked one at a time. A model will be used to make a forecast for the time step, then the actual expected value from the test set will be taken and made available to the model for the forecast on the next time step.

This mimics a real-world scenario where new Shampoo Sales observations would be available each month and used in the forecasting of the following month.

This will be simulated by the structure of the train and test datasets.

All forecasts on the test dataset will be collected and an error score calculated to summarize the skill of the model. The root mean squared error (RMSE) will be used as it punishes large errors and results in a score that is in the same units as the forecast data, namely monthly shampoo sales.
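
For example, once the test forecasts have been collected, RMSE can be calculated with scikit-learn and the square root function; the values below are made up purely for illustration.

from math import sqrt
from sklearn.metrics import mean_squared_error

# toy example of computing test RMSE from expected and predicted monthly sales
expected = [342.3, 339.7, 440.4]
predicted = [315.0, 360.2, 405.5]
rmse = sqrt(mean_squared_error(expected, predicted))
print('Test RMSE: %.3f' % rmse)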

Data Preparation

Before we can fit an LSTM model to the dataset, we must transform the data.

The following three data transforms are performed on the dataset prior to fitting a model and making a forecast.

  1. Transform the time series data so that it is stationary. Specifically, a lag=1 differencing to remove the increasing trend in the data.
  2. Transform the time series into a supervised learning problem. Specifically, the organization of data into input and output patterns where the observation at the previous time step is used as an input to forecast the observation at the current time step.
  3. Transform the observations to have a specific scale. Specifically, to rescale the data to values between -1 and 1 to meet the default hyperbolic tangent activation function of the LSTM model.

These transforms are inverted on forecasts to return them to their original scale before calculating an error score.
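
A rough sketch of these three transforms is shown below; the helper function names (difference(), timeseries_to_supervised(), scale()) are illustrative and the functions assume simple 1D/2D NumPy inputs.

from pandas import DataFrame, Series, concat
from sklearn.preprocessing import MinMaxScaler

# 1. make the series stationary with a lag=1 difference
def difference(dataset, interval=1):
    return Series([dataset[i] - dataset[i - interval] for i in range(interval, len(dataset))])

# invert a differenced forecast using the last known observation
def inverse_difference(history, yhat, interval=1):
    return yhat + history[-interval]

# 2. frame the series as supervised learning: X is the value at t-1, y is the value at t
def timeseries_to_supervised(data, lag=1):
    df = DataFrame(data)
    columns = [df.shift(i) for i in range(1, lag + 1)]
    columns.append(df)
    return concat(columns, axis=1).fillna(0.0)

# 3. rescale values to [-1, 1] to suit the tanh activation of the LSTM
def scale(train, test):
    scaler = MinMaxScaler(feature_range=(-1, 1)).fit(train)
    return scaler, scaler.transform(train), scaler.transform(test)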

LSTM Model

We will use a base stateful LSTM model with 1 neuron fit for 1000 epochs.

A batch size of 1 is required as we will be using walk-forward validation and making one-step forecasts for each of the final 12 months of test data.

A batch size of 1 means that the model will be fit using online training (as opposed to batch training or mini-batch training). As a result, it is expected that the model fit will have some variance.

Ideally, more training epochs would be used (such as 1500), but this was truncated to 1000 to keep run times reasonable.

The model will be fit using the efficient ADAM optimization algorithm and the mean squared error loss function.
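
A minimal sketch of fitting such a model is shown below, assuming X has already been reshaped to [samples, 1, features] and y is a 1D array of targets; the function name and defaults mirror the description above rather than reproducing the original listing.

from keras.models import Sequential
from keras.layers import Dense, LSTM

# fit a stateful LSTM one epoch at a time, resetting internal state after each epoch
def fit_lstm(X, y, batch_size=1, nb_epoch=1000, neurons=1):
    model = Sequential()
    model.add(LSTM(neurons, batch_input_shape=(batch_size, X.shape[1], X.shape[2]), stateful=True))
    model.add(Dense(1))
    model.compile(loss='mean_squared_error', optimizer='adam')
    for _ in range(nb_epoch):
        model.fit(X, y, epochs=1, batch_size=batch_size, verbose=0, shuffle=False)
        model.reset_states()
    return model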

Experimental Runs

Each experimental scenario will be run 10 times.

The reason for this is that the random initial conditions for an LSTM network can result in very different results each time a given configuration is trained.

Let’s dive into the experiments.

A vs A Test

A good first experiment is to evaluate how noisy or reliable our test harness may be.

This can be evaluated by running the same experiment twice and comparing the results. This is often called an A vs A test in the world of A/B testing, and I find this name useful. The idea is to flush out any obvious faults with the experiment and get a handle on the expected variance in the mean value.

We will run an experiment with a stateful LSTM on the problem twice.

The complete code listing is provided below.

This code also provides the basis for all experiments in this tutorial. Rather than re-listing it for each variation in subsequent sections, I will only list the functions that have been changed.
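
As a condensed sketch, the driver that repeats the experiment and saves the scores might be organized as below; run() is assumed to perform one complete fit and walk-forward evaluation and return the test RMSE.

from pandas import DataFrame

# repeat the experiment a fixed number of times and save the test RMSE scores to CSV
def experiment(run, repeats=10, filename='experiment_stateful.csv'):
    scores = [run() for _ in range(repeats)]
    results = DataFrame({'results': scores})
    results.to_csv(filename, index=False)
    print(results.describe())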

Running the experiment saves the results to a file named “experiment_stateful.csv“.

Run the experiment a second time and change the filename written by the experiment to “experiment_stateful2.csv” so as not to overwrite the results from the first run.

You should now have two sets of results in the current working directory in the files:

  • experiment_stateful.csv
  • experiment_stateful2.csv

We can now load and compare these two files. The script to do this is listed below.
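
A minimal sketch of such a comparison is shown below, assuming each file holds a single column of RMSE scores named 'results' as in the driver sketch above.

from pandas import DataFrame, read_csv
from matplotlib import pyplot

# load both result files, print summary statistics, and compare the distributions
results = DataFrame()
results['stateful'] = read_csv('experiment_stateful.csv', header=0)['results']
results['stateful2'] = read_csv('experiment_stateful2.csv', header=0)['results']
print(results.describe())
results.boxplot()
pyplot.show()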

This script loads the result files and first calculates descriptive statistics for each run.

We can see that the mean results and standard deviation are relatively close values (around 103-106 and 7-10 respectively). This is a good sign, but not perfect. It is expected that increasing the number of repeats of the experiment from 10 to 30, 100, or even 1000 would produce near identical summary statistics.

The comparison also creates a box and whisker plot to compare the two distributions.

The plot shows the 25th, 50th (median), and 75th percentile of 10 test RMSE results from each experiment. The box shows the middle 50% of the data and the green line shows the median.

The plot shows that although the descriptive statistics are reasonably close, the distributions do show some differences.

Nevertheless, the distributions do overlap and comparing means and standard deviations of different experimental setups is reasonable as long as we don’t quibble over modest differences in mean.

Box and Whisker Plot of A vs A Experimental Results

A good follow-up to this analysis is to review the standard error of the distribution with different sample sizes. This would involve first creating a larger pool of experimental runs from which to draw (100 or 1000), and would give a good idea of a robust number of repeats and an expected error on the mean when comparing results.

Stateful vs Stateless LSTMs

A good next experiment is to explore whether maintaining state in the LSTM adds value over not maintaining state.

In this section, we will contrast:

  1. A Stateful LSTM (first result from the previous section).
  2. A Stateless LSTM with the same configuration.
  3. A Stateless LSTM with shuffling during training.

The benefit of LSTM networks is their ability to maintain state and learn a sequence.

  • Expectation 1: The expectation is that the stateful LSTM will outperform the stateless LSTM.

Shuffling of input patterns each batch or epoch is often performed to improve the generalizability of an MLP network during training. In this tutorial, the stateless LSTM does not shuffle input patterns during training because we want the network to learn the sequence of patterns. We will test a stateless LSTM with and without shuffling.

  • Expectation 2: The expectation is that the stateless LSTM without shuffling will outperform the stateless LSTM with shuffling.

The code changes to the stateful LSTM example above to make it stateless involve setting stateful=False in the LSTM layer and using the automatic handling of training epochs in a single fit() call rather than a manual loop. The results are written to a new file named “experiment_stateless.csv“. The updated fit_lstm() function is listed below.
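
A sketch of such a stateless fit_lstm(), assuming the same data shapes as the stateful version above, could look like the following.

from keras.models import Sequential
from keras.layers import Dense, LSTM

# stateless variant: stateful=False and a single call to fit() over all epochs
def fit_lstm(X, y, batch_size=1, nb_epoch=1000, neurons=1):
    model = Sequential()
    model.add(LSTM(neurons, batch_input_shape=(batch_size, X.shape[1], X.shape[2]), stateful=False))
    model.add(Dense(1))
    model.compile(loss='mean_squared_error', optimizer='adam')
    model.fit(X, y, epochs=nb_epoch, batch_size=batch_size, verbose=0, shuffle=False)
    return model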

The stateless with shuffling experiment involves setting the shuffle argument to True when calling fit in the fit_lstm() function. The results from this experiment are written to the file “experiment_stateless_shuffle.csv“.

The only change from the stateless fit_lstm() function above is setting shuffle=True in the call to fit().

After running the experiments, you should have three result files for comparison:

  • experiment_stateful.csv
  • experiment_stateless.csv
  • experiment_stateless_shuffle.csv

We can now load and compare these results, following the same pattern as the A vs A comparison above but loading all three files.

Running the example first calculates and prints descriptive statistics for each of the experiments.

The average results suggest that the stateless LSTM configurations may outperform the stateful configuration. If robust, this finding is quite surprising, as it does not meet the expectation that the addition of state would improve performance.

The shuffling of training samples does not appear to make a large difference to the stateless LSTM. If the result is robust, it suggests that, contrary to expectation, shuffling the training order may offer a small benefit to the stateless LSTM.

Together, these findings may further suggest that the chosen LSTM configuration is focused more on learning input-output pairs rather than dependencies within the sequence.

From these limited results alone, one would consider exploring stateless LSTMs on this problem.

A box and whisker plot is also created to compare the distributions.

The spread of the data appears much larger with the stateful configuration compared to the stateless cases. This is also present in the descriptive statistics when we look at the standard deviation scores.

This suggests that the stateless configurations may be more stable.

Box and Whisker Plot of Test RMSE of Stateful vs Stateless LSTM Results

Stateless with Large Batch vs Stateless

A key to understanding the difference between stateful and stateless LSTMs is “when internal state is reset”.

  • Stateless: In the stateless LSTM configuration, internal state is reset after each training batch or each batch when making predictions.
  • Stateful: In the stateful LSTM configuration, internal state is only reset when the reset_states() function is called.

If this is the only difference, then it may be possible to simulate a stateful LSTM with a stateless LSTM using a large batch size.

  • Expectation 3: Stateless and stateful LSTMs should produce near identical results when using the same batch size.

We can do this with the Shampoo Sales dataset by truncating the training data to only 12 months and leaving the test data as 12 months. This would allow a stateless LSTM to use a batch size of 12. If training and testing were performed in a one-shot manner (one function call), then it is possible that internal state of the “stateless” LSTM would not be reset and both configurations would produce equivalent results.

We will use the stateful results from the first experiment as a starting point. The forecast_lstm() function is modified to forecast one year of observations in a single step. The experiment() function is modified to truncate the training dataset to 12 months of data, to use a batch size of 12, and to process the batched predictions returned from the forecast_lstm() function. These updated functions are listed below. Results are written to the file “experiment_stateful_batch12.csv“.
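
A sketch of a batched forecast_lstm() along these lines is shown below, assuming X holds all 12 test input patterns already shaped [12, 1, features].

# forecast the entire test period in a single batched call and return a list of predictions
def forecast_lstm(model, batch_size, X):
    yhat = model.predict(X, batch_size=batch_size)
    return [float(value) for value in yhat[:, 0]]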

We will use the stateless LSTM configuration from the previous experiment with training pattern shuffling turned off as the starting point. The experiment uses the same forecast_lstm() and experiment() functions listed above. Results are written to the file “experiment_stateless_batch12.csv“.

After running this experiment, you will have two result files:

  • experiment_stateful_batch12.csv
  • experiment_stateless_batch12.csv

We can now compare the results from these experiments.

Running the comparison script first calculates and prints the descriptive statistics for each experiment.

The average results for each experiment suggest equivalent results between the stateless and stateful configurations with the same batch size. This confirms our expectations.

If this result is robust, it suggests that there are no further implementation-specific differences between stateless and stateful LSTM networks in Keras beyond when the internal state is reset.

A box and whisker plot is also created to compare the distributions.

The plot confirms the story in the descriptive statistics, perhaps just highlighting variability in the experimental design.

Box and Whisker Plot of Test RMSE of Stateful vs Stateless with Large Batch Size LSTM Results

Stateful Resetting vs Stateless

Another question regarding stateful LSTMs is the best regime for resetting state.

Generally, we would expect that resetting the state after each presentation of the sequence would be a good idea.

  • Expectation 4: Resetting state after each training epoch results in better test performance.

This raises the question as to the best way to manage state when making predictions. For example, should the network be seeded with state from making predictions on the training dataset first?

  • Expectation 5: Seeding state in the LSTM by making predictions on the training dataset results in better test performance.

We would also expect that not resetting LSTM state between one-step predictions on the test set would be a good idea.

  • Expectation 6: Not resetting state between one-step predictions on the test set results in better test set performance.

There is also the question of whether or not resetting state at all is a good idea. In this section, we attempt to tease out answers to these questions.

We will again use all of the available data and a batch size of 1 for one-step forecasts.

In summary, we are going to compare the following experimental setups:

No Seeding:

  • noseed_1: Reset state after each training epoch and not during testing (the stateful results from the first experiment in experiment_stateful.csv).
  • noseed_2: Reset state after each training epoch and after each one-step prediction (experiment_stateful_reset_test.csv).
  • noseed_3: No resets after training or making one-step predictions (experiment_stateful_noreset.csv).

Seeding:

  • seed_1: Reset state after each training epoch, seed state with one-step predictions on training dataset before making one-step predictions on the test dataset (experiment_stateful_seed_train.csv).
  • seed_2: Reset state after each training epoch, seed state with one-step predictions on training dataset before making one-step predictions on the test dataset and reset state after each one-step prediction on train and test sets (experiment_stateful_seed_train_resets.csv).
  • seed_3: Seed on the training dataset before making one-step predictions, with no resets during training or prediction (experiment_stateful_seed_train_no_resets.csv).

The stateful experiment code from the first “A vs A” experiment is used as a base.

The modifications needed for the various resetting/no-resetting and seeding/no-seeding are listed below.

We can update the forecast_lstm() function to reset state after each test prediction by adding a call to reset_states() on the model after each prediction is made. The updated forecast_lstm() function is listed below.
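
A sketch of this modified forecast_lstm() is shown below, assuming X is a single scaled input pattern as a 1D array.

# make a one-step forecast, then reset internal state after the prediction
def forecast_lstm(model, batch_size, X):
    X = X.reshape(1, 1, len(X))
    yhat = model.predict(X, batch_size=batch_size)
    model.reset_states()
    return yhat[0, 0]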

We can update the fit_lstm() function to not reset after each epoch by removing the call to reset_states(). The complete function is listed below.
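
A sketch of the no-reset fit_lstm() is shown below; it is identical to the stateful version except that reset_states() is no longer called at the end of each epoch.

from keras.models import Sequential
from keras.layers import Dense, LSTM

# fit a stateful LSTM with a manual epoch loop but without any state resets
def fit_lstm(X, y, batch_size=1, nb_epoch=1000, neurons=1):
    model = Sequential()
    model.add(LSTM(neurons, batch_input_shape=(batch_size, X.shape[1], X.shape[2]), stateful=True))
    model.add(Dense(1))
    model.compile(loss='mean_squared_error', optimizer='adam')
    for _ in range(nb_epoch):
        model.fit(X, y, epochs=1, batch_size=batch_size, verbose=0, shuffle=False)
    return model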

We can seed the state of the LSTM after training by looping through the training dataset and making one-step forecasts. This is added to the run() function before making one-step forecasts on the test dataset.
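
A condensed sketch of just the seeding loop is shown below, assuming train_scaled rows hold scaled [X, y] pairs and forecast_lstm() is the one-step forecast function used elsewhere in the experiment.

# seed LSTM state by walking the training data with one-step forecasts,
# discarding the predictions, before forecasting the test set
def seed_state(model, batch_size, train_scaled):
    for i in range(len(train_scaled)):
        X = train_scaled[i, 0:-1]
        forecast_lstm(model, batch_size, X)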

This concludes all of the piecewise modifications needed to create the code for these 6 experiments.

After running these experiments you will have the following results files:

  • experiment_stateful.csv
  • experiment_stateful_reset_test.csv
  • experiment_stateful_noreset.csv
  • experiment_stateful_seed_train.csv
  • experiment_stateful_seed_train_resets.csv
  • experiment_stateful_seed_train_no_resets.csv

We can now compare the results, using the script below.
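
A sketch of the comparison is shown below, again assuming each results file stores a single column of RMSE scores named 'results'.

from pandas import DataFrame, read_csv
from matplotlib import pyplot

# load all six result files into one DataFrame and compare the distributions
filenames = ['experiment_stateful.csv', 'experiment_stateful_reset_test.csv',
    'experiment_stateful_noreset.csv', 'experiment_stateful_seed_train.csv',
    'experiment_stateful_seed_train_resets.csv', 'experiment_stateful_seed_train_no_resets.csv']
results = DataFrame()
for name in filenames:
    results[name[len('experiment_'):-len('.csv')]] = read_csv(name, header=0)['results']
print(results.describe())
results.boxplot()
pyplot.show()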

Running the comparison prints descriptive statistics for each set of results.

The results for no seeding suggest perhaps little difference between resetting after each prediction on the test dataset and not. This suggests any state built up from prediction to prediction is not adding value, or that this state is implicitly cleared by the Keras API. This was a surprising result.

The results on the no-seed case also suggest that having no resets during training results in worse performance on average, with larger variance, than resetting the state at the end of each epoch. This confirms the expectation that resetting the state at the end of each training epoch is a good practice.

The average results from the seed experiments suggest that seeding LSTM state with predictions on the training dataset before making predictions on the test dataset is neutral, if not resulting in slightly worse performance.

Resetting state after each prediction on the train and test sets seems to result in slightly better performance, whereas not resetting state during training or testing seems to result in the best performance.

These results regarding seeding are surprising, but we should note that the mean values are all within a test RMSE of 5 monthly shampoo sales and could be statistical noise.

A box and whisker plot is also created to compare the distributions.

The plot tells the same story as the descriptive statistics. It highlights the increased spread when no resets are used on the stateful LSTM without seeding. It also highlights the generally tight spread on the experiments that seed the state of the LSTM with predictions on the training dataset.

Box and Whisker Plot of Test RMSE of Reset Regimes in Stateful LSTMs

Review of Findings

In this section, we recap the findings throughout this tutorial.

  • 10 repeats of an experiment with the chosen configuration results in some variation in the mean and standard deviation of the test RMSE of about 3 monthly shampoo sales. More repeats would be expected to tighten this up.
  • The stateless LSTM with the same configuration may perform better on this problem than the stateful version.
  • Not shuffling training patterns with the stateless LSTM may result in slightly better performance.
  • When a large batch size is used, a stateful LSTM can be simulated with a stateless LSTM.
  • Resetting state when making one-step predictions with a stateful LSTM may improve performance on the test set.
  • Seeding state in a stateful LSTM by making predictions on the training dataset before making predictions on the test set does not result in an obvious improvement in performance on the test set.
  • Fitting a stateful LSTM and seeding it on the training dataset and not performing any resetting of state during training or prediction may result in better performance on the test set.

It must be noted that these findings should be made more robust by increasing the number of repeats of each experiment and confirming the differences are significant using statistical significance tests.

It should also be noted that these results apply to this specific problem, the way it was framed, and the chosen LSTM configuration parameters including topology, batch size, and training epochs.

Summary

In this tutorial, you discovered how to investigate the impact of using stateful vs stateless LSTM networks for time series forecasting in Python with Keras.

Specifically, you learned:

  • How to compare stateless vs stateful LSTM networks for time series forecasting.
  • How to confirm the equivalence of stateless LSTMs and stateful LSTMs with a large batch size.
  • How to evaluate the impact of when LSTM state is reset during training and making predictions with LSTM networks for time series forecasting.

Do you have any questions? Ask your questions in the comments and I will do my best to answer.


49 Responses to Stateful and Stateless LSTM for Time Series Forecasting with Python

  1. Sam Taha April 27, 2017 at 3:59 am #

    Hi Jason,

    Great subject and article.

    How do we deal with the case where the data has multiple features/labels per time series that we suspect have strong correlation (assumed to be strong)? For example:

    YR Month Production Sales
    1999 1 Shampoo $400
    1999 1 Conditioner $300
    1999 2 Shampoo $410
    1999 2 Conditioner $305

    And so in this case we would like to build one single model to predict Shampoo/Conditioner sales.

    Is this configured by setting the batch size to 2 for stateful (or 24 in the case of stateless), or am I looking at this all wrong?

    • Jason Brownlee April 27, 2017 at 8:47 am #

      Same as any other algorithm.

      Explore using all features in the model, explore removing highly correlated features, and see how that affects the model.

  2. chris May 23, 2017 at 3:42 am #

    Hi Jason,
    I have been following your blog entries regarding LSTMs and time series for quite a while and I like them very much. I have a question about something that does not fit 100% with the blog posts, but maybe you would like to share your ideas with me as I am relatively new to this area. I have a record consisting of 61 multivariate time series. To each I have assigned a label in the preprocessing (0, 1 or 2).
    I would like to make a multi-class classification with lstm’s.
    As the starting point, I used the following:

    model.add(LSTM(61, input_shape=(1, 61)))
    model.add(Dense(3, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

    and learn with a batch size of 1 over 150 epochs. The dataset is divided into a training and test set.
    I already have an accuracy of 97%. I think this is partly due to the data distribution, because label one occurs at about 94% and the other two about equally often. Nevertheless, it’s better than always predicting label one.

    In the next steps, I will first let the batch size vary, and manage the memory cell.

    Do you have other suggestions to increase the accuracy or any other things to do differently based on your experience? Or is there a reason to change the problem rather to a forecast problem?

    Generally I have 3 problems:
    1) The predicted labels are not 100% overlapping with the true labels, which I think is not so bad
    2) I have something like jumping / oscillating values between two labels. I think that this will be improved with other parameter settings. But generally, is there a recommended standard technique to post-process these labels, for example something like a moving average?
    3)Completely wrongly labeled areas

    Would you recommend adjusting the distribution of the data, so that each label occurs equally often?
    Many greetings,
    chris

  3. Chris May 23, 2017 at 3:25 pm #

    Good morning, thanks for your quick reply. I started using the confusion matrix yesterday 😆. I hadn’t thought about the log loss.
    The feature engineering idea sounds really good, since I always have corridors of the same size for the 0 and 2 labels. I think I’ll look at the rebalancing later and first try the other approaches. Thanks for the advice! I will keep you up to date.

    • Jason Brownlee May 24, 2017 at 4:52 am #

      Nice work, let me know how you go.

      • chris June 5, 2017 at 5:34 am #

        Hi Jason,
        thanks again for the last tips.
        Currently I get quite good results. The AUC for each of my three classes is already between 96-99 and also the F-Measure is better than in all models used so far.
        I still have a few jumps in the labels, but I think I can certainly reduce them.

        I now have the problem that I currently use 37 or even 62 different features.
        I would like to perform feature selection. Unfortunately, I have found nothing for Keras / neural networks. Do you already know of something that can be used for neural networks, or have experience with it? Or is that not standard practice for NNs? Thank you

  4. George Heitzer May 30, 2017 at 8:53 am #

    Hi Jason, when running the second (large) piece of code using PyCharm I get “UnicodeDecodeError: ‘utf-8’ codec can’t decode byte 0xf6 in position 22: invalid start byte”; it seems to work using Spyder, however.
    best regards George

    • Jason Brownlee June 2, 2017 at 12:31 pm #

      I recommend running all code from the command line.

  5. Kris June 22, 2017 at 12:11 am #

    Hi Jason,

    Great post! Maybe you would be able to help me with my problem?

    I have a sequential data representing moving targets recorded by a radar. Sequences of some targets are longer than the others.

    For example,

    I have labeled data of cars and their velocities, accelerations etc.

    ‘c’ represents a car with a 3-dimensional feature vector, age is the number of times a target was recorded by the radar. Label means different types of cars such as truck etc.

    c1 = [2,3,5], label = 0, age = 1
    c2 = [2,4,7], label = 1, age = 1
    c3 = [5,6,3], label = 2, age = 1
    c1 = [4,5,7], label = 0, age = 2
    c1 = [5,7,8], label = 0, age = 3
    c2 = [6,7,4], label = 1, age = 2
    c1 = [1,3,8], label = 0, age = 4
    c3 = [5,6,3], label = 2, age = 2

    As you can see, the sequences of some targets are longer than the others.

    My question is how could I account for it while creating an LSTM model?

    For example, by choosing window size 2, I would get [c1, c2, c3], [c2, c3, c1] and so on…

    What happens to the labels in this case?

    This is a classification problem so would stateful or stateless network be more appropriate?

    Thank you,
    Kris

    • Jason Brownlee June 22, 2017 at 6:08 am #

      Sorry, I’m not sure I follow.

      Consider providing the entire sequences as input time steps.

      Also consider padding sequences to make the same length if the number of time steps differ.

  6. Tryfon August 8, 2017 at 6:28 am #

    Hi Jason! I have some second thoughts about the stateless lstm.

    The main purpose of the LSTM is to utilize its memory property. Based on that what is the point of a stateless LSTM to exist? Don’t we “convert” it into a simple NN by doing that?

    In other words, does the stateless use of the LSTM aim to model the sequences (window) in the input data – if we apply shuffle=False in the fit call in Keras – (e.g. for a window of 10 time steps, capture any pattern between 10-character words)? If yes, why don’t we convert the initial input data to match the form of the sequences under inspection and then use a plain NN (by adding extra columns to the original dataset that are shifted)?

    If we choose to have shuffle = True then we are losing any information that could be found in our data (e.g. time series data – sequences), don’t we? In that case I would expect it to behave similarly to a plain NN and get the same results between the two by setting the same random seed.

    Am I missing something in my thinking?

    • Jason Brownlee August 8, 2017 at 7:56 am #

      The “stateless” LSTM just means that internal state is reset at the end of each batch, which works well in practice on many problems.

      Really, maintaining state is part of the trade-off in backprop through time and input sequence length.

      Shuffle applies to samples within a batch. BPTT really looks at time steps within a sample, then averages the gradient across the batch.

      Does that help?

  7. Tryfon August 8, 2017 at 10:41 pm #

    Regarding the shuffling according to the documentation “shuffle: boolean or str (for ‘batch’). Whether to shuffle the samples at each epoch”. So it first resamples the data (i.e. changes the original order) and then on the new order creates the batches. Do I get it right?

    Let me restate my previous question because I might have confused you. Suppose the dataset has only one variable X and one label Y. I actually want to know whether a stateless LSTM with batch size say 5 and timestep 1 is equivalent to a NN that will get as input X and X.shift(1) (so in total 2 inputs / 2 columns, although they point to the same original X column of my dataset) and batch size also 5.

    Thanks in advance and congrats on your helpful website!

    • Jason Brownlee August 9, 2017 at 6:34 am #

      Even if you frame the sequence problem the same way for LSTMs and MLPs, the units inside each network are different (e.g. LSTMs have memory and gates). In turn, results will probably differ.

      I would encourage you to test both types of networks on your problem, and most importantly, brainstorm many different ways to frame your sequence prediction problem to see which works best.

  8. YJ November 20, 2017 at 6:28 pm #

    My question doesn’t have much to do with LSTM.

    “Transform the observations to have a specific scale. Specifically, to rescale the data to values between -1 and 1 to meet the default hyperbolic tangent activation function of the LSTM model.”

    I have often run into this comment where data should be prepared to fit the Y range of the activation function. Consequently, I’ve been trying to find the intuition or theoretical reason behind such a transformation, but could not find any besides discussions on SO.

    I would much appreciate if you could provide your wisdom regarding following questions.
    a) why such a transformation is necessary
    b) is the transformation applicable to all other activation functions (e.g. [0,1] for sigmoid and so on)?

    Sincerely,
    YJ

    • Jason Brownlee November 22, 2017 at 10:39 am #

      Generally, normalizing or standardizing data does help with neural nets. I would recommend testing with and without it and seeing how it impacts model skill.

  9. Stefano December 20, 2017 at 1:13 pm #

    Hi Jason,
    what do you think of using a callback to reset states so that it is possible to use model.fit() on the entire set? Is there any reason to not do this?

    Best regards

    • Jason Brownlee December 20, 2017 at 3:50 pm #

      If it’s a good fit for your model/setup, go for it.

  10. arun December 27, 2017 at 6:14 pm #

    Hello,
    Thanks for the wonderful post .
    I have a couple of questions on the way these long sequences have to be handled when training an LSTM network:
    Suppose I have a sequence classification task for which I have a set of 100 sequences, each of varying length in the range 1000 – 2000 (samples). I would need the sequence classification task to identify sequences at regular intervals, say every 10 or 20 samples within a sequence

    Input Sequences :
    Sequence 1 : s1_1,s1_2,s1_3………………….s1_1000
    Sequence 2 : s2_1,s2_2,s2_3………………….s2_1500
    Sequence 3 : s3_1,s3_2,s3_3………………….s3_2000
    .
    .
    .
    Sequence 100 : s100_1,s100_2,s100_3………………….s100_1100

    a. How do I preprocess the data, i.e. break down the data into subsequences for training? Is the sub-sequence length based on the dependency of the output over the number of input samples?

    b. If my output of Sequence classification depends on say last 20 samples within a sequence, how do I Split the input data sequence for training ?

    i.Should it be this way : Overlapping sub-sequences and Stateless – LSTM
    s1_1,s1_2,…….s1_20
    s1_2,s1_3……..s1_21
    s1_3,s1_4……..s1_22

    ii.Or Should it be this way : Non-overlapping sub-sequences and Stateful – LSTM
    s1_1,s1_2,…….s1_20
    s1_21,s1_22……..s1_40
    s1_41,s1_42……..s1_60

    Which among the above-mentioned options (i and ii) is right? And why?

    Does approach ii. learn dependencies longer than 20 samples, as the state is carried forward after each sub-sequence? If yes, to what extent (number of samples … say 60 or 100)?

    c. If my output of sequence classification depends on, say, the last 900 samples within a sequence, can the LSTM solve/address this problem? If yes, what would the split of a single training sequence be in such a situation, and what would the LSTM implementation be (stateful or stateless)?

  11. Nat March 8, 2018 at 12:48 am #

    Hi Dr. Jason,

    Thanks a lot for your tutorials, they are very helpful.

    I am trying to implement the stateless LSTM without shuffle. I basically used your code as it is with only the changes that you suggested, but unfortunately I am getting the following error:

    GPU sync failed

    By any chance do you have any idea why am I getting this error?

    Thank you
    Nat

    • Jason Brownlee March 8, 2018 at 6:32 am #

      Looks like an issue with your Python environment. Perhaps try posting the error to stackoverflow?

  12. Max May 4, 2018 at 12:36 am #

    Hi Jason,

    I am working on an industry problem in which we are trying to predict scrap rates in a manufacturing line based on large datasets (machine data, sensor data, …). One approach is to model the problem as a time series (sequence) regression problem with RNNs. I am frequently using your blog as an incredibly helpful resource (thanks!).

    The prototyping is done in Keras and therefore, I have the following question:

    Two parameters appear to influence sequence learning problems:
    – batch_size in model.fit(batch_size)
    – time_steps in layers.LSTM(input_shape=(samples, time_steps, obs))

    -> If batch_size < time_steps, doesn’t the internal state get reset too frequently and cause a problem with the BPTT?

    As an example, suppose we have a sequence of length 50 (time_steps=50) and a training batch size of 25 e.g. for stochastic gradient descent (batch_size=25). Even though TBPTT(50,50) is set up to learn sequence patterns from 50 time steps, can the internal state keep the information?

    Thanks much and regards from Germany
    Max

    • Jason Brownlee May 4, 2018 at 7:47 am #

      Time steps and batch size are not related. Batch size covers the number of samples, whereas time steps refers to the length of one sample.

      Does that help?

  13. Ramzy December 1, 2018 at 10:25 am #

    Hi Jason,
    What about variable batch size for LSTM stateful
    https://stackoverflow.com/questions/53489606/keras-variable-batch-size-for-stateful-lstm

    • Jason Brownlee December 2, 2018 at 6:14 am #

      Perhaps you can summarize the content of the link for me?

      • Ramzy December 6, 2018 at 4:50 am #

        Oh, I will try to summarize here, so if something is not clear please tell me.
        First, I have solved my problem in the link thanks to your tutorial on saving weights for prediction with a different batch size LSTM, yet I still have an issue with accuracy.
        Second, I have 4 features, and I am using the last one in training and also as the target. Thanks to your tutorials I was able to reshape the data and make the stateful architecture below; note it uses a window size of 10.

        n_batch = X_train[0].shape[0]
        n_epoch = 25
        n_neurons = 256

        model = Sequential()
        model.add(LSTM(n_neurons, batch_input_shape=(n_batch, X_train[0].shape[1], X_train[0].shape[2]), stateful=True))
        model.add(Dense(1))
        model.compile(loss='mae', optimizer='adam', metrics=['accuracy'])

        # fit network
        for i in range(len(X_train)):
            if X_train[i].shape[0] == 250:
                model.fit(X_train[i], y_train[i], epochs=n_epoch, batch_size=n_batch, verbose=1, shuffle=False)
                #model.reset_states()

        note that commenting out the reset_states call or leaving it in doesn’t affect the accuracy

        Third, I am not sure if I am calculating the accuracy and loss right, whether I am using the proper optimizer, or what activation functions to use; should I stack more LSTM layers, and why doesn’t stateful affect the learning?

        Also, the last feature has large positive and negative values, not more than 1000, so what is the proper way to normalize that?

        • Ramzy December 6, 2018 at 6:01 am #

            I tried to normalize using the line below, but nothing changed, same 0% accuracy!!
            m15 = m15.assign(NormDirHeight=(m15['DirHeight']-m15['DirHeight'].mean())/m15['DirHeight'].std())

        • Jason Brownlee December 6, 2018 at 6:02 am #

          Perhaps normalize the data using the MinMaxScaler from the scikit-learn library?

          • Ramzy December 6, 2018 at 8:35 am #

            First, Thank you so much and your tutorials are super awesome and super fun, this is something i wanted to say.
            Sorry for dumping the log here
            Okay, so using mae for metrics, i got the below, without MinMaxScaler

            Epoch 1/25
            250/250 [==============================] – 1s 3ms/step – loss: 140.9831 – mean_absolute_error: 140.9831
            Epoch 2/25
            250/250 [==============================] – 0s 440us/step – loss: 140.9362 – mean_absolute_error: 140.9362
            Epoch 3/25

            250/250 [==============================] – 0s 464us/step – loss: 140.8762 – mean_absolute_error: 140.8762
            Epoch 4/25
            250/250 [==============================] – 0s 456us/step – loss: 140.8182 – mean_absolute_error: 140.8182
            Epoch 5/25
            250/250 [==============================] – 0s 464us/step – loss: 140.7660 – mean_absolute_error: 140.7660
            Epoch 6/25

            250/250 [==============================] – 0s 440us/step – loss: 140.7117 – mean_absolute_error: 140.7117
            Epoch 7/25
            250/250 [==============================] – 0s 456us/step – loss: 140.6606 – mean_absolute_error: 140.6606
            Epoch 8/25
            250/250 [==============================] – 0s 440us/step – loss: 140.6121 – mean_absolute_error: 140.6121
            Epoch 9/25

            250/250 [==============================] – 0s 512us/step – loss: 140.5622 – mean_absolute_error: 140.5622
            Epoch 10/25

            Okay, so using mae for accuracy metrics, i got the below, with MinMaxScaler (-1,1)

            Epoch 1/25
            250/250 [==============================] – 1s 3ms/step – loss: 0.0601 – mean_absolute_error: 0.0601
            Epoch 2/25
            250/250 [==============================] – 0s 444us/step – loss: 0.1010 – mean_absolute_error: 0.1010
            Epoch 3/25
            250/250 [==============================] – 0s 456us/step – loss: 0.0610 – mean_absolute_error: 0.0610
            Epoch 4/25

            250/250 [==============================] – 0s 456us/step – loss: 0.0732 – mean_absolute_error: 0.0732
            Epoch 5/25
            250/250 [==============================] – 0s 480us/step – loss: 0.0759 – mean_absolute_error: 0.0759
            Epoch 6/25
            250/250 [==============================] – 0s 524us/step – loss: 0.0619 – mean_absolute_error: 0.0619
            Epoch 7/25

            250/250 [==============================] – 0s 480us/step – loss: 0.0597 – mean_absolute_error: 0.0597
            Epoch 8/25
            250/250 [==============================] – 0s 484us/step – loss: 0.0613 – mean_absolute_error: 0.0613
            Epoch 9/25

            But if I am not calculating accuracy, how could I know whether this is good or bad?
            Also, why is it always the same either with reset_states() or without it?
            The stateful setup is very confusing; nevertheless, going from here to predicting multiple steps using TimeDistributed is a whole new fun journey xD

          • Ramzy December 6, 2018 at 10:07 am #

            I was able to show the actual and predicted values after applying the inverse transform; clearly it’s a mess. Is it the data or the architecture, and how could I know?

            >Expected=182.0, Predicted=-3.9
            >Expected=-73.0, Predicted=-31.3
            >Expected=-49.0, Predicted=-10.6
            >Expected=48.0, Predicted=-7.8
            >Expected=46.0, Predicted=-12.8
            >Expected=-41.0, Predicted=-19.9
            >Expected=-66.0, Predicted=-13.6
            >Expected=22.0, Predicted=-1.8
            >Expected=87.0, Predicted=-14.1
            >Expected=47.0, Predicted=-33.3
            >Expected=31.0, Predicted=-30.5

          • Ramzy December 7, 2018 at 12:16 am #

            Okay, I have one last question: how could I know that the problem is not in my data?
            I mean that I always get the same bad accuracy, is it the data? Is there something that checks the consistency of the data, so that I would know it is valid to fit a regression function to it with an LSTM?

          • Jason Brownlee December 7, 2018 at 5:22 am #

            Start with a naive baseline, then evaluate models against that baseline to see if they are skillful. I explain this process here:
            https://machinelearningmastery.com/how-to-develop-a-skilful-time-series-forecasting-model/

          • Ramzy December 7, 2018 at 1:48 pm #

            Thank you so much, I have read the article carefully, analyzed how much of it I did and which steps I skipped, and put together an action plan. Really great blog, and thanks for your nice and fast replies.

          • Jason Brownlee December 8, 2018 at 6:57 am #

            Thanks.

  14. Somayyeh January 8, 2019 at 6:27 am #

    Thanks for a great post!
    I have a question. I am trying to train an LSTM autoencoder on signals. I wonder what the difference is between

    1) I use a stateless network and send whole signal as input and set batch size to 1

    2) I use a stateful network and, in a for loop, reset state after train_on_batch on each input signal
    ??

  15. Devakar Kumar Verma January 25, 2019 at 5:40 pm #

    Hi Jason,
    In a stateful model, Keras callbacks (EarlyStopping, ReduceLROnPlateau, and ModelCheckpoint) are not working.
    As we are resetting the states after each iteration and between iterations there is only one epoch, Keras is not able to find logs of previous epochs and hence is not able to apply the above-mentioned callbacks.
    So, How can I implement EarlyStopping, ReduceLROnPlateau, and ModelCheckpoint?

    • Jason Brownlee January 26, 2019 at 6:09 am #

      If you are driving the epochs manually, then perhaps callbacks are not needed (just an idea?); you can run their operations manually as well, e.g. evaluate the model and see if the next iteration is required or not.

  16. nan April 5, 2019 at 10:37 pm #

    Hello Brownlee,

    Have you ever read any paper that discusses the stateful and stateless implementations?

    Thanks

  17. jessy April 20, 2019 at 11:25 am #

    sir,
    are stateful and stateless LSTMs possible for a multivariate time series dataset?

  18. jessy April 23, 2019 at 4:22 pm #

    Thanks..useful blog ..useful post…great
