How to Use Timesteps in LSTM Networks for Time Series Forecasting


The Long Short-Term Memory (LSTM) network in Keras supports time steps.

This raises the question as to whether lag observations for a univariate time series can be used as time steps for an LSTM and whether or not this improves forecast performance.

In this tutorial, we will investigate the use of lag observations as time steps in LSTM models in Python.

After completing this tutorial, you will know:

  • How to develop a test harness to systematically evaluate LSTM time steps for time series forecasting.
  • The impact of using a varied number of lagged observations as input time steps for LSTM models.
  • The impact of using a varied number of lagged observations and matching numbers of neurons for LSTM models.

Discover how to build models for multivariate and multi-step time series forecasting with LSTMs and more in my new book, with 25 step-by-step tutorials and full source code.

Let’s get started.

  • Updated Apr/2019: Updated the link to dataset.
How to Use Timesteps in LSTM Networks for Time Series Forecasting
Photo by YoTuT, some rights reserved.

Tutorial Overview

This tutorial is divided into 4 parts. They are:

  1. Shampoo Sales Dataset
  2. Experimental Test Harness
  3. Experiments with Time Steps
  4. Experiments with Time Steps and Neurons

Environment

This tutorial assumes you have a Python SciPy environment installed. You can use either Python 2 or 3 with this example.

This tutorial assumes you have Keras v2.0 or higher installed with either the TensorFlow or Theano backend.

This tutorial also assumes you have scikit-learn, Pandas, NumPy, and Matplotlib installed.

If you need help setting up your Python environment, see this post:


Shampoo Sales Dataset

This dataset describes the monthly number of sales of shampoo over a 3-year period.

The units are a sales count and there are 36 observations. The original dataset is credited to Makridakis, Wheelwright, and Hyndman (1998).

The example below loads and creates a plot of the loaded dataset.
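
The listing below is a minimal sketch of loading and plotting the series. It assumes the dataset has been saved locally as shampoo-sales.csv with columns Month and Sales, month values such as '1-01' for January of year 1, and any footer line removed from the file.

from datetime import datetime
from matplotlib import pyplot
from pandas import read_csv

# the file encodes the year as 1-3; prefix with '190' to build a full date
def parser(x):
    return datetime.strptime('190' + x, '%Y-%m')

series = read_csv('shampoo-sales.csv', header=0, index_col=0,
                  parse_dates=[0], date_parser=parser)['Sales']
# print the first 5 rows
print(series.head())
# line plot of the series
series.plot()
pyplot.show()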

Running the example loads the dataset as a Pandas Series and prints the first 5 rows.

A line plot of the series is then created showing a clear increasing trend.

Line Plot of Shampoo Sales Dataset

Next, we will take a look at the LSTM configuration and test harness used in the experiment.

Experimental Test Harness

This section describes the test harness used in this tutorial.

Data Split

We will split the Shampoo Sales dataset into two parts: a training and a test set.

The first two years of data will be taken for the training dataset and the remaining one year of data will be used for the test set.

Models will be developed using the training dataset and will make predictions on the test dataset.

The persistence forecast (naive forecast) on the test dataset achieves an error of 136.761 monthly shampoo sales. This provides a baseline of performance on the test set that a skillful model must beat.
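
The persistence baseline can be reproduced with a short walk-forward loop; the sketch below assumes series is the Series loaded in the listing above.

from math import sqrt
from sklearn.metrics import mean_squared_error

values = series.values
train, test = values[:-12], values[-12:]
history = list(train)
predictions = list()
for i in range(len(test)):
    # the forecast is simply the last observed value
    predictions.append(history[-1])
    # walk forward: make the real observation available for the next forecast
    history.append(test[i])
rmse = sqrt(mean_squared_error(test, predictions))
print('Persistence RMSE: %.3f' % rmse)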

Model Evaluation

A rolling-forecast scenario will be used, also called walk-forward model validation.

The test dataset will be stepped through one time step at a time. A model will be used to make a forecast for the time step, then the actual value from the test set will be taken and made available to the model for the forecast of the next time step.

This mimics a real-world scenario where new Shampoo Sales observations would be available each month and used in the forecasting of the following month.

This will be simulated by the structure of the train and test datasets.

All forecasts on the test dataset will be collected and an error score calculated to summarize the skill of the model. The root mean squared error (RMSE) will be used as it punishes large errors and results in a score that is in the same units as the forecast data, namely monthly shampoo sales.

Data Preparation

Before we can fit an LSTM model to the dataset, we must transform the data.

The following three data transforms are performed on the dataset prior to fitting a model and making a forecast.

  1. Transform the time series data so that it is stationary. Specifically, a lag=1 differencing is applied to remove the increasing trend in the data.
  2. Transform the time series into a supervised learning problem. Specifically, the data is organized into input and output patterns where observations at previous time steps are used as inputs to forecast the observation at the current time step.
  3. Transform the observations to have a specific scale. Specifically, to rescale the data to values between -1 and 1 to meet the default hyperbolic tangent activation function of the LSTM model.

These transforms are inverted on forecasts to return them to their original scale before calculating an error score.
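
A condensed sketch of the three transforms is shown below; the function names (difference, timeseries_to_supervised, scale) are chosen here for illustration and may differ from the original listing.

from pandas import DataFrame, Series, concat
from sklearn.preprocessing import MinMaxScaler

# 1. make the series stationary with lag-1 differencing
def difference(dataset, interval=1):
    return Series([dataset[i] - dataset[i - interval] for i in range(interval, len(dataset))])

# 2. frame the series as supervised learning: lagged values as inputs (oldest first), current value as output
def timeseries_to_supervised(data, lag=1):
    df = DataFrame(data)
    columns = [df.shift(i) for i in range(lag, 0, -1)]
    columns.append(df)
    supervised = concat(columns, axis=1)
    return supervised.dropna()

# 3. rescale the data to [-1, 1] using a scaler fit on the training data only
def scale(train, test):
    scaler = MinMaxScaler(feature_range=(-1, 1))
    scaler = scaler.fit(train)
    return scaler, scaler.transform(train), scaler.transform(test)

Forecasts are returned to the original scale by applying scaler.inverse_transform and then adding back the prior raw observation to undo the differencing, as shown in the experiment harness later in the tutorial.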

LSTM Model

We will use a base stateful LSTM model with 1 neuron fit for 500 epochs.

A batch size of 1 is required as we will be using walk-forward validation and making one-step forecasts for each of the final 12 months of test data.

A batch size of 1 means that the model will be fit using online training (as opposed to batch training or mini-batch training). As a result, it is expected that the model fit will have some variance.

Ideally, more training epochs would be used (such as 1000 or 1500), but this was truncated to 500 to keep run times reasonable.

The model will be fit using the efficient ADAM optimization algorithm and the mean squared error loss function.
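
A minimal sketch of such a model is shown below; the fit_lstm name and signature are this sketch's convention rather than necessarily the original listing's.

from keras.layers import Dense, LSTM
from keras.models import Sequential

def fit_lstm(train, n_batch, nb_epoch, n_neurons, timesteps):
    # split the supervised rows into inputs and output
    X, y = train[:, 0:-1], train[:, -1]
    # reshape inputs to the 3D format expected by the LSTM: [samples, time steps, features]
    X = X.reshape(X.shape[0], timesteps, 1)
    model = Sequential()
    model.add(LSTM(n_neurons, batch_input_shape=(n_batch, X.shape[1], X.shape[2]), stateful=True))
    model.add(Dense(1))
    model.compile(loss='mean_squared_error', optimizer='adam')
    # run one epoch at a time so the internal state can be reset manually between passes
    for _ in range(nb_epoch):
        model.fit(X, y, epochs=1, batch_size=n_batch, verbose=0, shuffle=False)
        model.reset_states()
    return model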

Experimental Runs

Each experimental scenario will be run 10 times.

The reason for this is that the random initial conditions for an LSTM network can result in very different results each time a given configuration is trained.

Let’s dive into the experiments.

Experiments with Time Steps

We will perform 5 experiments; each will use a different number of lag observations as time steps, from 1 to 5.

A representation with 1 time step would be the default representation when using a stateful LSTM. Using 2 to 5 timesteps is contrived. The hope would be that the additional context from the lagged observations may improve the performance of the predictive model.

The univariate time series is converted to a supervised learning problem before training the model. The specified number of time steps defines the number of input variables (X) used to predict the next time step (y). As such, for each time step used in the representation, that many rows must be removed from the beginning of the dataset. This is because there are no prior observations to use as time steps for the first values in the dataset.

The code for testing 1 time step is outlined below.
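
The sketch below condenses the harness into an experiment() and run() pair. It reuses the difference, timeseries_to_supervised, scale, and fit_lstm helpers sketched in the sections above, so those names and signatures are assumptions of this sketch rather than the original listing.

from math import sqrt
import numpy
from pandas import DataFrame, read_csv
from sklearn.metrics import mean_squared_error

# make a one-step forecast from a single row of scaled lag inputs
def forecast_lstm(model, X, timesteps):
    X = X.reshape(1, timesteps, 1)
    yhat = model.predict(X, batch_size=1)
    return yhat[0, 0]

def experiment(repeats, series, timesteps):
    # transform the data: difference, supervised framing, scaling
    raw_values = series.values
    diff_values = difference(raw_values, 1).values
    supervised_values = timeseries_to_supervised(diff_values, timesteps).values
    train, test = supervised_values[:-12], supervised_values[-12:]
    scaler, train_scaled, test_scaled = scale(train, test)
    error_scores = list()
    for r in range(repeats):
        lstm_model = fit_lstm(train_scaled, 1, 500, 1, timesteps)
        # walk-forward validation over the final 12 months
        predictions = list()
        for i in range(len(test_scaled)):
            X = test_scaled[i, 0:-1]
            yhat = forecast_lstm(lstm_model, X, timesteps)
            # invert scaling, then invert differencing by adding the prior raw observation
            row = numpy.concatenate((X, [yhat])).reshape(1, -1)
            yhat = scaler.inverse_transform(row)[0, -1]
            yhat = yhat + raw_values[-12 + i - 1]
            predictions.append(yhat)
        rmse = sqrt(mean_squared_error(raw_values[-12:], predictions))
        print('%d) Test RMSE: %.3f' % (r + 1, rmse))
        error_scores.append(rmse)
    return error_scores

def run():
    series = read_csv('shampoo-sales.csv', header=0, index_col=0)['Sales']
    timesteps = 1  # change to 2, 3, 4, 5 for the other experiments
    results = DataFrame()
    results['results'] = experiment(10, series, timesteps)
    print(results.describe())
    # change the filename to match the timesteps value, e.g. experiment_timesteps_2.csv
    results.to_csv('experiment_timesteps_1.csv', index=False)

run()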

The time steps parameter in the run() function is varied from 1 to 5 for each of the 5 experiments. In addition, the results are saved to file at the end of the experiment and this filename must also be changed for each different experimental run; e.g.: experiment_timesteps_1.csv, experiment_timesteps_2.csv, etc.

Run the 5 different experiments for the 5 different numbers of time steps.

You can run them in parallel if you have sufficient memory and CPU resources. GPU resources are not required for these experiments and experiments should be complete in minutes to tens of minutes.

After running the experiments, you should have 5 files containing the results: experiment_timesteps_1.csv, experiment_timesteps_2.csv, experiment_timesteps_3.csv, experiment_timesteps_4.csv, and experiment_timesteps_5.csv.

We can write some code to load and summarize these results.

Specifically, it is useful to review both descriptive statistics from each run and compare the results for each run using a box and whisker plot.

Code to summarize the results is listed below.
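
A minimal sketch of that summary code follows; it assumes the five result files produced above are in the current working directory.

from matplotlib import pyplot
from pandas import DataFrame, read_csv

results = DataFrame()
for timesteps in range(1, 6):
    filename = 'experiment_timesteps_%d.csv' % timesteps
    results[str(timesteps)] = read_csv(filename, header=0).values[:, 0]
# descriptive statistics for each configuration
print(results.describe())
# box and whisker plot comparing the distributions of test RMSE
results.boxplot()
pyplot.show()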

Running the code first prints descriptive statistics for each set of results.

We can see from the average performance alone that the default of using a single time step resulted in the best performance. This is also shown when reviewing the median test RMSE (50th percentile).

A box and whisker plot comparing the distributions of results is also created.

The plot tells the same story as the descriptive statistics. There is a general trend of increasing test RMSE as the number of time steps is increased.

Box and Whisker Plot of Timesteps vs RMSE

The expectation of increased performance with the increase of time steps was not observed, at least with the dataset and LSTM configuration used.

This raises the question as to whether the capacity of the network is a limiting factor. We will look at this in the next section.

Experiments with Time Steps and Neurons

The number of neurons (also called blocks) in the LSTM network defines its learning capacity.

It is possible that in the previous experiments the use of one neuron limited the learning capacity of the network such that it was not capable of making effective use of the lagged observations as time steps.

We can repeat the above experiments and increase the number of neurons in the LSTM with the increase in time steps and see if it results in an increase in performance.

This can be achieved by changing the call in the experiment function so that the number of neurons passed to the LSTM matches the number of time steps, rather than being fixed at 1.

In addition, we can keep the results written to file separate from the results created in the first experiment by adding a “_neurons” suffix to the filenames; for example, experiment_timesteps_1.csv becomes experiment_timesteps_1_neurons.csv.
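
In terms of the harness sketch above (whose fit_lstm signature and filename conventions are assumptions of this tutorial's sketch, not the original listing), the two edits would look like this:

# in experiment(): scale the number of neurons with the number of time steps
lstm_model = fit_lstm(train_scaled, 1, 500, timesteps, timesteps)

# in run(): write this second set of results to separate files
results.to_csv('experiment_timesteps_1_neurons.csv', index=False)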

Repeat the same 5 experiments with these changes.

After running these experiments, you should have 5 result files.

As in the previous experiment, we can load the results, calculate descriptive statistics, and create a box and whisker plot. The code mirrors the summary listing above, with the loaded filenames updated to include the “_neurons” suffix.

Running the code first prints descriptive statistics from each of the 5 experiments.

The results tell a similar story to the first set of experiments with a one-neuron LSTM. The average test RMSE appears lowest when the number of neurons and the number of time steps are both set to one.

A box and whisker plot is created to compare the distributions.

The trend in spread and median performance shows an almost linear increase in test RMSE as the number of neurons and time steps are increased.

The linear trend may suggest that the increase in network capacity is not given sufficient time to fit the data. Perhaps an increase in the number of epochs would be required as well.

Box and Whisker Plot of Timesteps and Neurons vs RMSE

Extensions

This section lists some areas for further investigation that you may consider exploring.

  • Lags as Features. The use of lagged observations as time steps also raises the question as to whether lagged observations can be used as input features. It is not clear whether time steps and features are treated the same way internally by the Keras LSTM implementation.
  • Diagnostic Run Plots. It may be helpful to review plots of train and test RMSE over epochs for multiple runs for a given experiment. This might help tease out whether overfitting or underfitting is taking place, and in turn, methods to address it.
  • Increase Training Epochs. An increase in neurons in the LSTM in the second set of experiments may benefit from an increase in the number of training epochs. This could be explored with some follow-up experiments.
  • Increase Repeats. Using 10 repeats results in a relatively small population of test RMSE results. It is possible that increasing repeats to 30 or 100 (or even higher) may result in a more stable outcome.

Did you explore any of these extensions?
Share your findings in the comments below; I’d love to hear what you found.

Summary

In this tutorial, you discovered how to investigate using lagged observations as input time steps in an LSTM network.

Specifically, you learned:

  • How to develop a robust test harness for experimenting with input representation with LSTMs.
  • How to use lagged observations as input time steps for time series forecasting with LSTMs.
  • How to increase the learning capacity of the network with the increase of time steps.

You discovered that, contrary to expectation, the use of lagged observations as input time steps did not decrease the test RMSE on the chosen problem and LSTM configuration.

Do you have any questions?
Ask your questions in the comments below and I will do my best to answer them.


89 Responses to How to Use Timesteps in LSTM Networks for Time Series Forecasting

  1. Hasan April 17, 2017 at 12:29 pm #

    The problem with using lagged values as predictors is that the model misses out the subtle time dependencies which are usually captured by the time series models.

    • Jason Brownlee April 18, 2017 at 8:29 am #

      Agreed. The promise of LSTMS is to learn the temporal dependence.

      • Hasan April 19, 2017 at 7:52 pm #

        So LSTM will work for all kinds of time series?

        • Jason Brownlee April 20, 2017 at 9:24 am #

          Yes, but test other methods and double down on what works best on your problem.

      • Will March 15, 2018 at 4:06 am #

        Just wanted to let you know that this is the most lucid explanation of how these LSTM’s work under the hood. Thank you for that. I purchased your Practical Machine Learning Book with the Excel Samples and that was great too! I will certainly try to spread the word.

  2. Kunpeng Zhang April 18, 2017 at 1:02 pm #

    Hi Jason,
    Your posts are always helpful.
    Now, I get two similar data sets. I’d like to train this data using a multitask model in keras. To be precise, I have two input data sets and I want to get two outputs separately in one trained model.
    Is it possible in keras? I get some content. https://keras.io/getting-started/functional-api-guide/
    But I still do not figure it out how. Could you give me some advice?

    • Jason Brownlee April 19, 2017 at 7:49 am #

      Almost all neural nets can have multiple output values.

      Just frame your dataset and set the number of outputs you require in the output layer of the network.

  3. Kunpeng Zhang April 18, 2017 at 1:06 pm #

    Another question. Compared with tensorflow, a fine-tuned keras model will get a better result or a worse one? Is it comparable?

    • Jason Brownlee April 19, 2017 at 7:50 am #

      Keras is built on top of TensorFlow. Comparing results from the two does not make sense (at least to me).

      • Kunpeng Zhang April 20, 2017 at 10:25 am #

        Thank you for your reply.
        Have a good day.

  4. Jack Brown April 18, 2017 at 9:06 pm #

    Hi Jason,
    could you elaborate this line

    train = train.reshape(train.shape[0], train.shape[1])

    isn’t this the same?

    • Jason Brownlee April 19, 2017 at 7:52 am #

      It does look that way, I may have been too excited with all the resizing. Try removing it and see if all is well.

  5. Jay Reynolds May 26, 2017 at 11:26 am #

    “Lags as Features. The use of lagged observations as time steps also raises the question as to whether lagged observations can be used as input features. It is not clear whether time steps and features are treated the same way internally by the Keras LSTM implementation.”

    Any further thoughts on this?
    I’m a little confused on how to use timesteps when some input features are lagged and some are not. (really, I’m fundamentally confused as to why timesteps exists at all, given that it would seem any lagged input should just be treated as features). There’s surprisingly little clear information on the matter of LSTM timesteps on the internet… I don’t recall ever coming across the concept of timesteps in any of Schmidhuber, et al papers, either (perhaps I wasn’t paying attention!)

    Thanks for the great resource you’ve put together and continue to share, btw.

    • Jason Brownlee June 2, 2017 at 11:52 am #

      Yes, I was wrong.

      Features are weighted inputs. Timesteps are discrete inputs of features over time. (does that make sense, it reads poorly…)

      The key to understanding timesteps is the BPTT algorithm. I have a post on this scheduled.

    • John Jaro July 2, 2017 at 1:27 am #

      “I’m a little confused on how to use timesteps when some input features are lagged and some are not. (really, I’m fundamentally confused as to why timesteps exists at all, given that it would seem any lagged input should just be treated as features). There’s surprisingly little clear information on the matter of LSTM timesteps on the internet…”

      This is 100% my question, I’ve done so much Googling (and read multiple of Jason’s posts) and I still don’t understand this at all. Cannot figure out how to prep lagged time steps + features for LSTM.

      • Jason Brownlee July 2, 2017 at 6:33 am #

        Lagged obs are time steps in LSTMs.

        LSTM input is 3d: [samples, time steps, features]. If your series is univariate, you have many time steps and one feature. If you want to classify one day of data, you have one sample, 25 hours of time steps and one feature.

        Does that help?

  6. lawrance May 27, 2017 at 6:50 pm #

    In your previous blog(http://machinelearningmastery.com/time-series-prediction-lstm-recurrent-neural-networks-python-keras/),
    you use “trainX = numpy.reshape(trainX, (trainX.shape[0], 1, trainX.shape[1]))”(1),
    and now you use “X = X.reshape(X.shape[0], timesteps, 1)” (2).
    If the second parameter means timestamp. then in (1), you may use “look_back” in that article instead of 1
    If the third parameter means one var, then in (1), you may use 1 instead of trainX.shape[1], because trainX.shape[1] means look_back or timesteps in this article.

    • Jason Brownlee June 2, 2017 at 12:01 pm #

      I would recommend using past observations as timesteps when inputting to the model.

  7. Birkey June 2, 2017 at 2:22 pm #

    Could it be overfitting with more neurons? since more neurons means more degrees of freedom, so the model can (over) fit the training data well, while generalize poorly.

    If that’s the case, more epochs won’t help though, we need more training data.

  8. Roger July 9, 2017 at 1:16 am #

    Hi Jason – thank you for the great content. Really enjoyed your ML Python recipes. I am having some trouble understanding the structure of the input data for the LSTM, since everywhere I look seems to suggest something different.

    I understand that the input X has the shape (samples, timesteps, features). My use case is I have about 100 time series, and I’m trying to use them as features to forecast another time series 5 steps ahead at a time (updating as new information in the rolling window method you detailed in a different post). What will the structure of X look like in my case? I currently have something like this:

    X Y
    [[t0, t1, t2], [[t3, t4]
    [t1, t2, t3], [t4, t5]
    … …

    for each feature, which I’ve then stacked together into a 3D shape using np.stack( ). But it seems like this is incorrect, since the timesteps should be 2, not 3? Am I coming at this the right way? The timestep/feature/lag confusion seems to be prevalent on the Internet. Also each feature might have greater predictive power at different lag/leads, will this LSTM setup potentially bottleneck my accuracy, and is there a better approach to this? Thanks!!

    • Jason Brownlee July 9, 2017 at 10:55 am #

      If you have 5 series then that would be 5 features.

      I would recommend loading the data as a 2d matrix then using reshape, perhaps with 1 sample.

      Does that help?

  9. Nihit August 8, 2017 at 8:35 pm #

    Hi Jason, great post.
    I have been trying to implement Keras LSTM using R. How can I reshape my univariate data frame to the input shape required by LSTM in R.

    • Jason Brownlee August 9, 2017 at 6:28 am #

      Sorry, I don’t have material on using Keras in R.

      • Nihit August 9, 2017 at 3:48 pm #

        Ohh that’s unfortunate. Although I did find reshape layer in keras, but I am not sure if it is same as numpy.reshape.
        Also when i used it to train a model, it converted the train set into 3D array but now i cannot evaluate the model since I am stuck on trying to convert test set in to 3D array. Thanks.

  10. Nihit August 10, 2017 at 5:05 pm #

    I was able to fix the problem by reshaping the train set to 3D array with timesteps = 1 and including lagged values as input.
    But I cannot set timesteps more than 1.
    E.g. I have a dataset with time interval of every 15 mins. If I set timestep to 96(1 Day) and built a LSTM model then I cannot forecast on test(1 Month) set since I get only (2880/96 = ) 30 values and not 2880 values.

  11. Pablios September 22, 2017 at 6:38 pm #

    hey thank you very much for this posts !!
    they are really useful

  12. Jussi October 20, 2017 at 1:32 am #

    Hi Jason, great post.

    I have a bit of a problem understanding the timeseries_to_supervised function however. It seems to shift the data so that the dataframe columns are in the following order [t, t-1, t-2, t-3, t-4, t+1]. T+1 will be used as y, but the rest of the columns are to my understanding in a wrong order (i.e, newest data point is first and vice versa)… Or did I miss something important.

    • bjuthjliu December 22, 2017 at 9:00 pm #

      maybe some bugs are here

      • bjuthjliu December 22, 2017 at 9:05 pm #

        I think the function is so

  13. Chris November 13, 2017 at 7:08 pm #

    I have used a few prints to better understand my question.
    First 6 rows:
    0 -120.1
    1 37.2
    2 -63.8
    3 61.0
    4 -11.8
    5 63.3

    First 6 rows after supervised:
    0 0 0
    0 NaN NaN -120.1
    1 -120.1 NaN 37.2
    2 37.2 -120.1 -63.8
    3 -63.8 37.2 61.0
    4 61.0 -63.8 -11.8
    5 -11.8 61.0 63.3

    And then dropping the NAN’s:
    [[ 37.2 -120.1 -63.8]
    [ -63.8 37.2 61. ]
    [ 61. -63.8 -11.8]
    [ -11.8 61. 63.3]
    [ 63.3 -11.8 -7.3]
    [ -7.3 63.3 -31.7]
    [ -31.7 -7.3 -69.9]
    …..]

    Then scaling:
    [[ 0.04828702 -0.83250961 -0.496628 ]
    [-0.496628 0.03130148 0.17669274]
    [ 0.17669274 -0.52333882 -0.21607769]
    [-0.21607769 0.1619989 0.1891017 ]
    [ 0.1891017 -0.23778144 -0.1917993 ]
    [-0.1917993 0.17462932 -0.32344214]
    …… ]

    Then dividing into X and y:
    X: [[ 0.04828702 -0.83250961]
    [-0.496628 0.03130148]
    [ 0.17669274 -0.52333882]
    [-0.21607769 0.1619989 ]
    [ 0.1891017 -0.23778144]
    [-0.1917993 0.17462932]
    …..]

    y: [-0.496628 0.17669274 -0.21607769 0.1891017 -0.1917993 -0.32344214…]

    Then reshaping:
    [[[ 0.04828702]
    [-0.83250961]]

    [[-0.496628 ]
    [ 0.03130148]]

    [[ 0.17669274]
    [-0.52333882]]

    [[-0.21607769]
    [ 0.1619989 ]]

    [[ 0.1891017 ]
    [-0.23778144]]

    [[-0.1917993 ]
    [ 0.17462932]]
    ….]

    Now here is my problem and I think it’s the same as in Jussi’s post.

    I’m not sure if I’m right, but if I take for example, the first one, I would use:
    [[[ 0.04828702]
    [-0.83250961]]]
    for X and
    [-0.496628]
    for y, so that i have t2,t1 -> t3
    Then I think that I would learn the wrong order, if the timesteps are learned from top to bottom.
    Should the order not be as follows:
    [[[-0.83250961]
    [ 0.04828702]]]
    for X and
    [-0.496628]
    for y, so that i have t1,t2 -> t3

    Am I right, that this is wrong? Thank you

  14. joseph January 18, 2018 at 4:26 pm #

    Hi jason,

    i’m getting this error ValueError: time data ‘1901-Jan’ does not match format ‘%Y-%m’. is there something wrong with it?

    • Jason Brownlee January 19, 2018 at 6:28 am #

      Ensure that you remove the footer from the data file.

  15. Swapnil Rai February 2, 2018 at 5:06 pm #

    I am bit confused with case timesteps=1. Isn’t using timestep=1 same as using traditional Neural Network?

  16. ryan February 24, 2018 at 1:05 pm #

    Hi Jason, thanks for your great post,i have learned much.

    I have a question about the difference, is it necessary to do difference(to make stationary)? In time series analysis it is necessary, but in here Neural Network why should we do that?

    • Jason Brownlee February 25, 2018 at 7:40 am #

      Generally, making the problem simpler makes it easier to model which in turn makes the forecast more accurate.

  17. David March 14, 2018 at 11:07 am #

    Hi Jason, thanks for sharing this post with us. I am doing similar forecasting analysis and really enjoy reading it.

    I have question on the preprocessing part. Please correct if I missed something: I noticed that there is a time shift for all the features including the predicted value(y), and this method also is applied on the test set. From what I saw is that you first do time shift, then do train-test split and finally generate input and output, which will result in a y(t-1) feature for the last “real-value” output.

    If this is exact what I thought, I think there should not be a time shift in the test set because the output value doesn’t even exist before you make a prediction on this time stamp. And this may cause that the test set already contains the real value for prediction in its feature.

    What I thought is that you only can make predictions step by step in the test set. First generate the prediction of the first time stamp, and use that prediction in computing the output of next time stamp.

    Please correct me if I missed something. And this problem has been lingering in my head for several days.

    Thanks!

  18. George Kibirige March 26, 2018 at 1:06 am #

    Hi Jason,
    Thanks for your tutorial really helpful. Do you have other tutorials in the same domain but in Convolutional LSTM in Keras?

  19. Steve A. April 16, 2018 at 9:24 pm #

    Jason,

    First of all, thanks for all your work here – I simply could not have made the progress I have without this site.

    With your help I’ve got MLPs, LSTMs and Bidirectional LSTMs up and running. However my LSTMs are all single timestep and it is the multi-timestep step I now want to crack.

    I’ve been looking at this code at each stage to see how you build the data and you’ve got me scratching my head. In short, it looks to me as if the time sequences are wrongly ordered:

    (borrowing from Chris’ post further up:

    First 6 rows:
    0 -120.1
    1 37.2
    2 -63.8
    3 61.0
    4 -11.8
    5 63.3

    First 6 rows after supervised:
    0 0 0
    0 NaN NaN -120.1
    1 -120.1 NaN 37.2
    2 37.2 -120.1 -63.8
    3 -63.8 37.2 61.0
    4 61.0 -63.8 -11.8
    5 -11.8 61.0 63.3

    And then dropping the NAN’s:
    [[ 37.2 -120.1 -63.8]
    [ -63.8 37.2 61. ]
    [ 61. -63.8 -11.8]
    [ -11.8 61. 63.3]
    [ 63.3 -11.8 -7.3]
    [ -7.3 63.3 -31.7]
    [ -31.7 -7.3 -69.9]
    …..]

    My issue is that the first three timestep values are: -120.1, 37.2 and -63.8 and yet the first timestep sequence is: [ 37.2 -120.1 -63.8 ] when I would expect it to be: [-120.1 37.2 -63.8 ].

    All the other timestep sequences follow the same pattern (of course). Am I completely misunderstanding this (which is perfectly possible of course) in which case what rule should be followed, especially as the no. of timesteps is increased?

    Once again, thanks for a great site and looking forward to your response

    Regards,

    Steve

  20. Steve A. April 16, 2018 at 10:01 pm #

    Further to my previous post. If I’m right and I’m quite prepared to be wrong ….

    A simple tweak to timeseries_to_supervised changing:

    columns = [df.shift(i) for i in range(1, lag+1)]

    to …

    columns = [df.shift(i) for i in range(lag, 0, -1)]

    … produces the result:

    [-120.1, 37.2, -63.8],
    [ 37.2, -63.8, 61. ],
    [ -63.8, 61. , -11.8],
    [ 61. , -11.8, 63.3],
    [ -11.8, 63.3, -7.3],
    [ 63.3, -7.3, -31.7],

    … which makes more sense to me in terms of presenting the steps in the original order of the input data.

    One other observation: this result is produced using a timesteps value of 2. A more ‘understandable’ approach might be to tweak the code so that in this case the output is produced with a timesteps value of 3 but that’s just me and my OCD 🙂

    I look forward to your response whether I’m right or wrong – I just want to learn the correct way to pre-process my data in order to get what I hope will be the best performance from my LSTM models.

    Regards,

    Steve

  21. hadeer May 21, 2018 at 6:47 pm #

    what is the benefits of using timestep for example (N=3) in rnn.. and does N=3 is better than using a timestep of N=1??

  22. aaron May 23, 2018 at 8:49 pm #

    What is difference between timesteps and unrolling an lstm network ? When you see the classical picture of an unrolled lstm does this something has to do with timesteps?

  23. Toly June 1, 2018 at 1:57 am #

    What do you recommend now? Should the time steps be as lag variable as n additional features or would prefer the internal time step functionality of the LSTM network in keras? I hope I have not overlooked such a recommendation, but you may be able to give me the clarity.

    • Jason Brownlee June 1, 2018 at 8:23 am #

      Try both and see what works best for your problem.

      Also, start with an MLP and only use LSTM if it outperforms the MLP.

  24. Ansh July 18, 2018 at 6:49 am #

    I have been following your posts for the past 2 months. Thank you for writing such an amazing blog.

    I was just trying to understand how lags work in a LSTM model, which you explained quite well.

    A question came to my mind when I read https://machinelearningmastery.com/multi-class-classification-tutorial-keras-deep-learning-library/ post with regards to what you explained above.

    If I want to create lags to predict a multi-class classification time series problem and I have three classes in my predictor variable. Do I need to do One hot encoding first and then create time lags on all three variables or do I need to create time lags first and then do one hot encoding over the predictor variable.

  25. Nii Anyetei August 25, 2018 at 8:18 pm #

    Trying to do waste generation prediction for a time series dataset using lstms. Any ideas of datasets that could help and codes will be grateful

  26. keras_tf September 17, 2018 at 2:55 am #

    lets say i have a data set with 1000 rows and 6 features out of which i want to make prediction.I want to find the regressive value ‘y’ at each row.But i want the network to remember 10 previous observations to make the next observation.How should i change the data

    [1000,1,5]
    or
    [100,10,5]

    • Jason Brownlee September 17, 2018 at 6:31 am #

      Sounds like you want 10 time steps. The second example.

  27. tmartin September 23, 2018 at 10:14 pm #

    Hi Janson,

    Thanks for those tuto on LSTM !

    There are still a few points that remain unclear for me :

    Taking a univariate toy example where you try to predict the next Temperature, if you decide to include lag observations :

    1. is it better to include them as features part of a unique timestep or to consider them as several timesteps with one features ?
    2. what is the impact on the network training process ?

    Thanks in advance for you help on that,
    Regards

    • Jason Brownlee September 24, 2018 at 6:11 am #

      LSTMs can read multiple timesteps as input.

      Therefore, we can provide multiple lag observations to the model and have it predict the next step.

  28. Fredy October 12, 2018 at 2:41 am #

    Hi Jason,
    Thank you very much for this tutorial. I have one question about lag observations. Is it possible to apply lag observations for sequence classification in LSTM? I mean, given an X=[X1, X2,…, Xn] input sequence, I want to classify current Xt refer also to past observations.

    Thanks in advance

    Best regards

    • Jason Brownlee October 12, 2018 at 6:43 am #

      Yes, lag observations are input as timesteps to the LSTM.

  29. dy October 17, 2018 at 7:14 pm #

    hi jason, can you explain briefly what is time-step?

    • Jason Brownlee October 18, 2018 at 6:26 am #

      A sample is a sequence.
      A timestep is a point at which observations can be made.
      A feature is an observation at a time step.

  30. Jaime October 20, 2018 at 11:22 pm #

    Hi, I’m working on a time-series prediction system lately. And I wondered the timestep used for the inputs could change to the output, for example enter 10 timesteps and optimize the prediction for 3 timesteps.
    Is that possible?
    Now if I enter 10 timesteps I can only get 10 at the output, and I can only optimize the last one (return sequence = false) or optimize all of them (return sequence = true).
    There would be some way to optimize only for 3 timesteps

  31. Abderrahim November 16, 2018 at 9:46 pm #

    Hi Jason,
    I thank you so much for this tutorial, I have a question that I posted in SO if you don’t mind passing by, https://datascience.stackexchange.com/q/41305/33279 , my specific case is that for each train data for each time point, there are many entries. each is a learning entry, so I can’t sample (group) and sum, mean, max or min any entry. Knowing that time index was construct from two columns: year and month, and I want learn from other features along with these two columns.
    Many thanks !

    • Jason Brownlee November 17, 2018 at 5:47 am #

      Perhaps you can summarize your problem in a sentence or two?

  32. Kim November 25, 2018 at 7:13 pm #

    Hi, Jason.
    The LSTM posts you have posted are very useful.
    However, since the accuracy of the model cannot be printed, it is questionable in reliability.
    Data is only output in late 500 to late 600 values in the range of 500 to 10000.
    Can you tell me how to increase the scope of data in these areas and how to add the output of accuracy?

    Thanks, Jason.

  33. Elton December 20, 2018 at 1:58 pm #

    Hi Jason,

    Thank you for another great post and for sharing your knowledge with the community.

    I’m having some difficulty to understand the difference between an LSTM with 1 timestep and an MLP. Although everything I read so far indicates that they should have the same behavior, in my experimental results the LSTM with 1 timestep performs significantly better than the MLP. Also, increasing the timestep to values greater than 1 does not improve performance.

    I’m training the networks with batches of 128 samples with 126 features each. Is it possible that the LSTM training is behaving as if the number of timesteps was equal to the batch size?

    Thank you for your help!

    • Jason Brownlee December 20, 2018 at 2:03 pm #

      The main difference is the internals of the nodes, and the shared internal state of the units over the samples in the batch.

  34. Yarong March 14, 2019 at 7:58 am #

    Hi Jason, what if my time series stamp is not unique. For example, at one specific time stamp, I have different combination of features and the corresponding outputs. Thanks!

    • Jason Brownlee March 14, 2019 at 9:30 am #

      Perhaps normalize the dataset so you have the same features at all time steps, even if some are 0 values?

  35. Quentin May 24, 2019 at 12:28 am #

    Hi Jason,

    First of all, thank you so much for this amazing website. Although this is the first time I am posting a comment, I have been using its many resources for a while now !

    I have a question about LSTM in time series prediction tasks. It is my understanding that one of the advantage of LSTMs is the capacity to remember past examples and control what is stored in memory with the gated units. Knowing this, how come that we need to include past features and why can’t we limit ourselves to using only one feature corresponding to the last timestep that we have? I know that using larger timesteps increases the performance, my question is why?

    Thank you

    • Jason Brownlee May 24, 2019 at 7:54 am #

      The time steps define the past you are providing to the model that it might remember for future predictions.

      Perhaps I don’t understand your question?

      • Quentin May 25, 2019 at 4:54 am #

        My conception of how LSTMs and RNNs process data was flawed, but I now understand why it is necessary. The only concept of time in a LSTM is with regard to one sequence. How all the different sequences fit together is not relevant.

        There is still something that bothers me though: how can we process sequences of variable length if the LSTM expects a fixed size sequence? Of course, we can use padding but I see this more as a workaround than as a real solution… Any idea?

        • Jason Brownlee May 25, 2019 at 7:52 am #

          LSTMs can be used dynamically, e.g. one time step at a time, but it very slow.

          So, instead we vectorize the data into fixed length sequences, use padding and use a masking layer to ignore the padded values.

  36. Guyomard June 26, 2019 at 8:29 pm #

    Hi !
    I can’t understand your “timeseries_to_supervise function” .
    Why time-steps are like for instance, ( t-1,t-2,t-3,t) and not (t-3,t-2,t-1,t) ?
    Our sequences are in this way : in X : (t-1,t-2,t-3) and y : (t) and it seems weird to me.
    Thanks

  37. Brunette July 3, 2019 at 1:54 am #

    Hi Jason,
    Thank you for this post, I have a question, I have an LSTM for predicting energy consumption using time series, I thought I had a good model because I had an RMSE of 0.005 but when I tried to calculate the R2 score I had 0.44. Do you think that this is a bad model and I should change it? or should I just try to modify the architecture of my LSTM and tune my model.

    • Jason Brownlee July 3, 2019 at 8:37 am #

      I recommend comparing the skill of the model to a baseline model, such as a naive persistence forecast. Skill is relative to the baseline.

  38. tong August 7, 2019 at 11:36 am #

    when we test with a trained lstm model, do we need to make the test timestep is the same as train

  39. Thiago August 20, 2019 at 5:03 am #

    Hi Jason.
    Thanks for the post.
    I didn’t understant why we need to run one epoch 500 times on a loop instead of 500 epochs.
    If i change the looped activation function to tanh (i think the default is sigmoid, is it?), could I use 500 epochs?
    Thanks.

    • Thiago August 20, 2019 at 5:06 am #

      When I wrote looped activations function, I meant ‘recurrent_activation’ parameter from keras LSTM layer, which is set to default ‘hard_sigmoid’.
      Thanks.

    • Jason Brownlee August 20, 2019 at 6:28 am #

      They are equivalent.

Leave a Reply