
How to Use Features in LSTM Networks for Time Series Forecasting

The Long Short-Term Memory (LSTM) network in Keras supports multiple input features.

This raises the question of whether lag observations for a univariate time series can be used as input features for an LSTM, and whether or not this improves forecast performance.

In this tutorial, we will investigate the use of lag observations as features in LSTM models in Python.

After completing this tutorial, you will know:

  • How to develop a test harness to systematically evaluate LSTM features for time series forecasting.
  • The impact of using a varied number of lagged observations as input features for LSTM models.
  • The impact of using a varied number of lagged observations and matching numbers of neurons for LSTM models.

Kick-start your project with my new book Deep Learning for Time Series Forecasting, including step-by-step tutorials and the Python source code files for all examples.

Let’s get started.

  • Updated Apr/2019: Updated the link to the dataset.
How to Use Features in LSTM Networks for Time Series Forecasting
Photo by Tom Hodgkinson, some rights reserved.

Tutorial Overview

This tutorial is divided into 5 parts. They are:

  1. Shampoo Sales Dataset
  2. Experimental Test Harness
  3. Experiments with Features
  4. Experiments with Features and Neurons
  5. Experiments with Features, Neurons, and More Epochs

Environment

This tutorial assumes you have a Python SciPy environment installed. You can use either Python 2 or 3 with this example.

This tutorial assumes you have Keras v2.0 or higher installed with either the TensorFlow or Theano backend.

This tutorial also assumes you have scikit-learn, Pandas, NumPy, and Matplotlib installed.

If you need help setting up your Python environment, see this post.


Shampoo Sales Dataset

This dataset describes the monthly number of sales of shampoo over a 3-year period.

The units are a sales count and there are 36 observations. The original dataset is credited to Makridakis, Wheelwright, and Hyndman (1998).

The example below loads and creates a plot of the loaded dataset.
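
The listing below is a minimal sketch, assuming the dataset has been downloaded and saved locally as shampoo-sales.csv:

# load and plot the Shampoo Sales dataset
from pandas import read_csv
from matplotlib import pyplot

# load the CSV as a Pandas Series (a single column of monthly sales)
series = read_csv('shampoo-sales.csv', header=0, index_col=0).squeeze('columns')
# print the first 5 rows
print(series.head())
# line plot of the series
series.plot()
pyplot.show()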

Running the example loads the dataset as a Pandas Series and prints the first 5 rows.

A line plot of the series is then created showing a clear increasing trend.

Line Plot of Shampoo Sales Dataset

Next, we will take a look at the LSTM configuration and test harness used in the experiment.

Experimental Test Harness

This section describes the test harness used in this tutorial.

Data Split

We will split the Shampoo Sales dataset into two parts: a training and a test set.

The first two years of data will be taken for the training dataset and the remaining one year of data will be used for the test set.

Models will be developed using the training dataset and will make predictions on the test dataset.

The persistence forecast (naive forecast) on the test dataset achieves an error of 136.761 monthly shampoo sales. This provides a baseline of performance on the problem; a skillful model must achieve a lower test error.
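
The baseline can be reproduced with a short walk-forward persistence sketch (again assuming the local shampoo-sales.csv file):

# persistence (naive) forecast baseline on the final 12 months
from math import sqrt
from pandas import read_csv
from sklearn.metrics import mean_squared_error

series = read_csv('shampoo-sales.csv', header=0, index_col=0).squeeze('columns')
values = series.values
train, test = values[0:-12], values[-12:]
history = [x for x in train]
predictions = list()
for t in range(len(test)):
	predictions.append(history[-1])  # forecast next month = last observed month
	history.append(test[t])          # walk forward: reveal the true observation
rmse = sqrt(mean_squared_error(test, predictions))
print('Persistence RMSE: %.3f' % rmse)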

Model Evaluation

A rolling-forecast scenario will be used, also called walk-forward model validation.

Each time step of the test dataset will be walked one at a time. A model will be used to make a forecast for the time step, then the actual expected value from the test set will be taken and made available to the model for the forecast on the next time step.

This mimics a real-world scenario where new Shampoo Sales observations would be available each month and used in the forecasting of the following month.

This will be simulated by the structure of the train and test datasets.

All forecasts on the test dataset will be collected and an error score calculated to summarize the skill of the model. The root mean squared error (RMSE) will be used as it punishes large errors and results in a score that is in the same units as the forecast data, namely monthly shampoo sales.

Data Preparation

Before we can fit an LSTM model to the dataset, we must transform the data.

The following three data transforms are performed on the dataset prior to fitting a model and making a forecast.

  1. Transform the time series data so that it is stationary. Specifically, a lag=1 differencing to remove the increasing trend in the data.
  2. Transform the time series into a supervised learning problem. Specifically, the organization of data into input and output patterns where the observation at the previous time step is used as an input to forecast the observation at the current time step.
  3. Transform the observations to have a specific scale. Specifically, to rescale the data to values between -1 and 1 to meet the default hyperbolic tangent activation function of the LSTM model.

These transforms are inverted on forecasts to return them to their original scale before calculating an error score.
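
A minimal sketch of these transforms and their inverses, in the style used throughout this series (the helper names are illustrative, not a fixed API):

import numpy
from pandas import DataFrame, Series, concat
from sklearn.preprocessing import MinMaxScaler

# 1. create a differenced series to remove the trend
def difference(dataset, interval=1):
	return Series([dataset[i] - dataset[i - interval] for i in range(interval, len(dataset))])

# invert differencing for one forecast by adding back a past observation
def inverse_difference(history, yhat, interval=1):
	return yhat + history[-interval]

# 2. frame a sequence as supervised learning with `lag` input features;
# the first `lag` rows contain NaNs and must be removed by the caller
def timeseries_to_supervised(data, lag=1):
	df = DataFrame(data)
	columns = [df.shift(i) for i in range(1, lag + 1)]
	columns.append(df)
	return concat(columns, axis=1)

# 3. rescale train and test to [-1, 1] to suit the LSTM's default tanh activation
def scale(train, test):
	scaler = MinMaxScaler(feature_range=(-1, 1)).fit(train)
	return scaler, scaler.transform(train), scaler.transform(test)

# invert scaling for a single forecasted value
def invert_scale(scaler, X, value):
	new_row = [x for x in X] + [value]
	array = numpy.array(new_row).reshape(1, len(new_row))
	return scaler.inverse_transform(array)[0, -1]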

LSTM Model

We will use a base stateful LSTM model with 1 neuron fit for 500 epochs.

A batch size of 1 is required as we will be using walk-forward validation and making one-step forecasts for each of the final 12 months of test data.

A batch size of 1 means that the model will be fit using online training (as opposed to batch training or mini-batch training). As a result, it is expected that the model fit will have some variance.

Ideally, more training epochs would be used (such as 1000 or 1500), but this was truncated to 500 to keep run times reasonable.

The model will be fit using the efficient ADAM optimization algorithm and the mean squared error loss function.
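
A sketch of the model-fitting and forecasting functions in the style of this series (fit_lstm() and forecast_lstm() are illustrative names; train is a 2D array whose last column is the target):

from keras.models import Sequential
from keras.layers import Dense, LSTM

# fit a stateful LSTM; each lag observation is treated as a feature
def fit_lstm(train, batch_size, nb_epoch, neurons):
	X, y = train[:, 0:-1], train[:, -1]
	# reshape to [samples, timesteps, features] with a single timestep
	X = X.reshape(X.shape[0], 1, X.shape[1])
	model = Sequential()
	model.add(LSTM(neurons, batch_input_shape=(batch_size, X.shape[1], X.shape[2]), stateful=True))
	model.add(Dense(1))
	model.compile(loss='mean_squared_error', optimizer='adam')
	for i in range(nb_epoch):
		# one pass over the data, then manually reset state between epochs
		model.fit(X, y, epochs=1, batch_size=batch_size, verbose=0, shuffle=False)
		model.reset_states()
	return model

# make a one-step forecast with a fit model
def forecast_lstm(model, batch_size, X):
	X = X.reshape(1, 1, len(X))
	yhat = model.predict(X, batch_size=batch_size)
	return yhat[0, 0]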

Experimental Runs

Each experimental scenario will be run 10 times.

The reason for this is that the random initial conditions for an LSTM network can result in very different results each time a given configuration is trained.

Let’s dive into the experiments.

Experiments with Features

We will perform 5 experiments; each will use a different number of lag observations as features from 1 to 5.

A representation with 1 input feature is the default when using a stateful LSTM. Using 2 to 5 features is contrived. The hope is that the additional context from the lagged observations may improve the performance of the predictive model.

The univariate time series is converted to a supervised learning problem before training the model. The specified number of features defines the number of input variables (X) used to predict the next observation (y). As such, for each feature used in the representation, that many rows must be removed from the beginning of the dataset. This is because there are no prior observations to use as features for the first values in the dataset.
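
As a quick worked example (using the timeseries_to_supervised() helper sketched earlier), framing a 5-value series with 3 lag features leaves only 2 usable rows:

from pandas import Series

series = Series([1, 2, 3, 4, 5])
framed = timeseries_to_supervised(series.values, lag=3)
# drop the first 3 rows, which have no prior observations to use as features
print(framed.values[3:, :])
# [[3. 2. 1. 4.]
#  [4. 3. 2. 5.]]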

The complete code listing for testing 1 input feature is provided below.
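
The condensed sketch below reconstructs that listing from the helpers outlined earlier (difference(), inverse_difference(), timeseries_to_supervised(), scale(), invert_scale(), fit_lstm(), and forecast_lstm()); treat it as an approximation of the original listing rather than a verbatim copy:

from math import sqrt
from pandas import DataFrame, read_csv
from sklearn.metrics import mean_squared_error

# run a repeated experiment for a given number of lag features
def experiment(repeats, series, features):
	raw_values = series.values
	# transform data to be stationary
	diff_values = difference(raw_values, 1)
	# frame as supervised learning and drop rows with no prior observations
	supervised = timeseries_to_supervised(diff_values, features)
	supervised_values = supervised.values[features:, :]
	# split into train (first 2 years) and test (final year)
	train, test = supervised_values[0:-12, :], supervised_values[-12:, :]
	# rescale to [-1, 1]
	scaler, train_scaled, test_scaled = scale(train, test)
	error_scores = list()
	for r in range(repeats):
		# base model: 1 neuron, 500 epochs, batch size of 1
		lstm_model = fit_lstm(train_scaled, 1, 500, 1)
		# walk-forward validation over the 12 test months
		predictions = list()
		for i in range(len(test_scaled)):
			X, y = test_scaled[i, 0:-1], test_scaled[i, -1]
			yhat = forecast_lstm(lstm_model, 1, X)
			# invert scaling and differencing to return to original units
			yhat = invert_scale(scaler, X, yhat)
			yhat = inverse_difference(raw_values, yhat, len(test_scaled) + 1 - i)
			predictions.append(yhat)
		rmse = sqrt(mean_squared_error(raw_values[-12:], predictions))
		print('%d) Test RMSE: %.3f' % (r + 1, rmse))
		error_scores.append(rmse)
	return error_scores

# run the experiment for a given number of features and save the results
def run(features):
	series = read_csv('shampoo-sales.csv', header=0, index_col=0).squeeze('columns')
	results = DataFrame()
	results['results'] = experiment(10, series, features)
	print(results.describe())
	results.to_csv('experiment_features_%d.csv' % features, index=False)

run(1)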

The features parameter in the run() function is varied from 1 to 5 for each of the 5 experiments. In addition, the results are saved to file at the end of each experiment, so the filename must also be changed for each experimental run, e.g. experiment_features_1.csv, experiment_features_2.csv, etc.

Run the 5 different experiments for the 5 different numbers of features.

You can run them in parallel if you have sufficient memory and CPU resources. GPU resources are not required for these experiments, and each run should complete in minutes to tens of minutes.

After running the experiments, you should have 5 files containing the results, as follows:

  • experiment_features_1.csv
  • experiment_features_2.csv
  • experiment_features_3.csv
  • experiment_features_4.csv
  • experiment_features_5.csv

We can write some code to load and summarize these results.

Specifically, it is useful to review both descriptive statistics from each run and compare the results for each run using a box and whisker plot.

Code to summarize the results is listed below.
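
The sketch below assumes each results file holds a header row and a single column of test RMSE scores, as written by the run() function above:

from pandas import DataFrame, read_csv
from matplotlib import pyplot

# load result files, print descriptive stats, and compare distributions
def summarize_results(filenames):
	results = DataFrame()
	for name in filenames:
		results[name.replace('.csv', '')] = read_csv(name, header=0).values[:, 0]
	# descriptive stats: count, mean, std, min, percentiles, max per experiment
	print(results.describe())
	# box and whisker plot of test RMSE per number of features
	results.boxplot()
	pyplot.show()

filenames = ['experiment_features_%d.csv' % f for f in range(1, 6)]
summarize_results(filenames)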

Running the code first prints descriptive statistics for each set of results.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

We can see from the average performance alone that the default of using a single feature resulted in the best performance. This is also shown when reviewing the median test RMSE (50th percentile).

A box and whisker plot comparing the distributions of results is also created.

The plot tells the same story as the descriptive statistics. The test RMSE seems to leap up with 2 features and trend upward as the number of features is increased.

Box and Whisker Plot of Test RMSE vs The Number of Input Features

The expectation of decreased error with the increase of features was not observed, at least with the dataset and LSTM configuration used.

This raises the question as to whether the capacity of the network is a limiting factor. We will look at this in the next section.

Experiments with Features and Neurons

The number of neurons (also called units) in the LSTM network defines its learning capacity.

It is possible that in the previous experiments the use of one neuron limited the learning capacity of the network such that it was not capable of making effective use of the lagged observations as features.

We can repeat the above experiments and increase the number of neurons in the LSTM with the increase in features and see if it results in an increase in performance.

This can be achieved by changing the number of neurons passed to the model-fitting call in the experiment function so that it matches the number of input features.
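
In terms of the sketch above, the change is confined to the fit_lstm() call inside experiment():

# before: 1 neuron regardless of the number of input features
lstm_model = fit_lstm(train_scaled, 1, 500, 1)
# after: match the number of neurons to the number of input features
lstm_model = fit_lstm(train_scaled, 1, 500, features)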

In addition, we can keep the results written to file separate from the results of the first experiment by adding a “_neurons” suffix to the filenames, for example, changing experiment_features_1.csv to experiment_features_1_neurons.csv.

Repeat the same 5 experiments with these changes.

After running these experiments, you should have 5 result files.

  • experiment_features_1_neurons.csv
  • experiment_features_2_neurons.csv
  • experiment_features_3_neurons.csv
  • experiment_features_4_neurons.csv
  • experiment_features_5_neurons.csv

As in the previous experiment, we can load the results, calculate descriptive statistics, and create a box and whisker plot; only the result filenames change.
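
With the summarize_results() helper sketched earlier, only the filename pattern changes:

# point the summary helper at the '_neurons' result files
filenames = ['experiment_features_%d_neurons.csv' % f for f in range(1, 6)]
summarize_results(filenames)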

Running the code first prints descriptive statistics from each of the 5 experiments.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

The results tell a different story to the first set of experiments with a one-neuron LSTM. The average test RMSE appears lowest when the number of neurons and the number of features are both set to one, with error increasing as neurons and features are increased.

A box and whisker plot is created to compare the distributions.

The trend in spread and median performance shows an almost linear increase in test RMSE as the number of neurons and input features is increased.

The linear trend may suggest that the increased network capacity is not given sufficient time to fit the data. Perhaps an increase in the number of epochs would be required as well.

Box and Whisker Plot of Test RMSE vs The Number of Neurons and Input Features

Experiments with Features, Neurons, and More Epochs

In this section, we repeat the above experiment, increasing the number of neurons with the number of features, but double the number of training epochs from 500 to 1000.

This can be achieved by changing the number of training epochs used when fitting the model in the experiment function from 500 to 1000.
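
In terms of the sketch above, this is again a one-line change to the fit_lstm() call:

# before: 500 training epochs
lstm_model = fit_lstm(train_scaled, 1, 500, features)
# after: 1000 training epochs
lstm_model = fit_lstm(train_scaled, 1, 1000, features)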

In addition, we can keep the results written to file separate from the results of the previous experiment by adding a “1000” suffix to the filenames, for example, changing experiment_features_1_neurons.csv to experiment_features_1_neurons1000.csv.

Repeat the same 5 experiments with these changes.

After running these experiments, you should have 5 result files.

  • experiment_features_1_neurons1000.csv
  • experiment_features_2_neurons1000.csv
  • experiment_features_3_neurons1000.csv
  • experiment_features_4_neurons1000.csv
  • experiment_features_5_neurons1000.csv

As in the previous experiment, we can load the results, calculate descriptive statistics, and create a box and whisker plot; again, only the result filenames change.
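
Once more, with the summarize_results() helper, only the filename pattern changes:

# point the summary helper at the '_neurons1000' result files
filenames = ['experiment_features_%d_neurons1000.csv' % f for f in range(1, 6)]
summarize_results(filenames)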

Running the code first prints descriptive statistics from each of the 5 experiments.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

The results tell a very similar story to the previous experiment with half the number of training epochs. On average, a model with 1 input feature and 1 neuron outperformed the other configurations.

A box and whisker plot was also created to compare the distributions. In the plot, we see the same trend as was clear in the descriptive statistics.

At least on this problem and with the chosen LSTM configuration, we do not see any clear benefit in increasing the number of input features.

Box and Whisker Plot of Test RMSE vs The Number of Neurons and Input Features and 1000 Epochs

Extensions

This section lists some areas for further investigation that you may consider exploring.

  • Diagnostic Run Plots. It may be helpful to review plots of train and test RMSE over epochs for multiple runs for a given experiment. This might help tease out whether overfitting or underfitting is taking place, and in turn, methods to address it.
  • Increase Repeats. Using 10 repeats results in a relatively small population of test RMSE results. It is possible that increasing repeats to 30 or 100 (or even higher) may result in a more stable outcome.

Did you explore any of these extensions?
Share your findings in the comments below; I’d love to hear what you found.

Summary

In this tutorial, you discovered how to investigate using lagged observations as input features in an LSTM network.

Specifically, you learned:

  • How to develop a robust test harness for experimenting with input representation with LSTMs.
  • How to use lagged observations as input features for time series forecasting with LSTMs.
  • How to increase the learning capacity of the network with the increase of input features.

You discovered that, contrary to the expectation that using lagged observations as input features improves model skill, doing so did not decrease the test RMSE on the chosen problem and LSTM configuration.

Do you have any questions?
Ask your questions in the comments below and I will do my best to answer them.


49 Responses to How to Use Features in LSTM Networks for Time Series Forecasting

  1. Zach May 5, 2017 at 7:40 am

    When using multiple features, is a stateful model the same as a stateless one with timesteps? Can you use timesteps in a stateful model with multiple features?

  2. Jason Ho May 5, 2017 at 1:30 pm

    Hi, Dr. Jason Brownlee. I am confused about the difference between time_steps and features in an LSTM input in Keras. Can you explain with an example?

    • Jason Brownlee May 6, 2017 at 7:32 am

      Great question.

      Time steps are the past observations that the network will learn from (e.g. backpropagation through time). I have a few posts on BPTT coming out on the blog soon.

      Features are different measurements taken at a given time step (e.g. pressure and temperature).

      Does that help?

      • Jason Ho May 11, 2017 at 12:34 pm

        Thanks for Dr. Jason Brownlee's reply.

        So can we take time steps and features as just two types of LSTM input?

        And for multi-dimensional input, does just one neuron in the input layer really work? And does the number of neurons in the LSTM's input layer need to be the same as the input's dimension (time steps' dimension + features' dimension), as in a traditional BP neural network?

        • Jason Brownlee May 12, 2017 at 7:33 am

          No, the number of memory cells in the input layer does not have to match the number of input features or time steps.

      • Marco Aprea May 4, 2020 at 8:47 pm

        Reading the code, it seems that you shape the input data (the X variable) such that:
        X = X.reshape(X.shape[0], 1, X.shape[1])
        But if you give more than one month of sales as input, shouldn't you increase the number of time steps and keep the number of features equal to 1 (only sales are analysed, not other features)?
        Where am I wrong?

  3. Jason Ho May 11, 2017 at 12:52 pm

    Compared to traditional methods like AR, MA, and ARIMA, what are the LSTM's advantages in time series forecasting?

    • Jason Brownlee May 12, 2017 at 7:35 am

      Great question.

      The promise of LSTMs (I have a scheduled post on this) is that they can learn the temporal dependence. That is, you just feed in sequences and do not have to specify the required window/lag (e.g. the outcome of an ACF/PACF analysis).

      • Jason Ho May 12, 2017 at 1:59 pm

        And aside from the LSTM not needing a specified window/lag, is the LSTM's prediction accuracy better than ARIMA's, or can the LSTM outperform other traditional methods thanks to its special model structure and deep layers?

        • Jason Brownlee May 13, 2017 at 6:11 am

          It may be, that is problem specific (e.g. no free lunch).

  4. Siddharth May 31, 2017 at 10:51 pm

    Thanks Jason, yet again a very well structured blog post. I am learning a lot from this set of posts on time series forecasting using sequence models. I have two questions for you:

    1) What effect does adding more layers (going deeper) to the network have? More specifically, how do you decide when to increase the number of neurons and when to make the network deeper (more layers)?

    2) This might be a broad question, but specifically for time series forecasting, what “size” of data do you think is “good enough”? If I have less time series data for a problem, is it possible to transfer learn from another LSTM?

    Looking forward to your response!

    • Jason Brownlee June 2, 2017 at 12:50 pm

      More layers mean more hierarchical representational capacity, but slower to train and perhaps more likely to overfit. Try different configurations on your problem.

      As much data as you can get.

      You may be able to do transfer learning; I have not read anything about it for time series with LSTMs.

  5. Ukesh June 14, 2017 at 6:59 am

    Hi Jason,
    Thank you for your wonderful post.

    I am new to deep learning and LSTM. I have a very simple question. I have taken a sample of demands for 50 time steps and I am trying to forecast the demand value for the next 10 time steps using the sample to train the model.

    But unfortunately, the closest I have come is splitting the sample into 67% training and 33% testing, and my model only forecasts for the 33%. Can anybody help me with this issue?

    I have borrowed your code and put it in a notebook on GitHub explaining this issue.

    Thank you in advance.

    https://github.com/ukeshchawal/hello-world/blob/master/trial.ipynb

  6. Rachel January 2, 2018 at 9:43 am

    Hi Jason,

    Thank you so much for these posts; they are so helpful!

    I have a question about the rolling forecast methodology. Why would you not re-fit the LSTM model after each time step? The advantage of the walk-forward validation seems to be that the model can ‘work’ in the way reality works – when making a prediction, the model has all the observations to date up until that particular time.

    The reason I would expect one to not only use the new time step's most recent x data at the point of prediction, but also re-train the model to include the latest x,y information available up to that point, would be if there were significant instability over time in the weights/parameters themselves; that would warrant a model that itself 'rolls forward', not just one given the most up-to-date x data.

    There would also be computational time costs.

    Am I thinking through this question correctly? That it would not really be advisable to roll forward the training set as well, re-fitting a lot of models, simply due to the fact that 1. we hope the parameters/weights are not so unstable that it would make a material difference, and 2. time costs?

    Thank you!

  7. Ayush February 27, 2018 at 11:17 pm

    Hi Jason, I don’t usually comment on these but I absolutely love your code structure! No doubt I will be using this functional approach in the future – it seems debugging would be far less painful.

    I’m a newbie who’s just built his first LSTM model for stock predicting as an exercise and have found my way here trying to fix a problem. I’m using multiple features including Open price, High, Low, and Volume to predict Close price. I find that regardless of what hyperparameters I fiddle with, my prediction is always ‘lagging’ behind the real value, similar to sinusoids being out of phase with one another. Is this a problem that you have seen before with LSTM time series predictions? Perhaps something to do with the memory of the cell?

    • Jason Brownlee February 28, 2018 at 6:05 am

      Thanks Ayush!

      Yes, LSTMs are not great at this type of problem. I’d recommend an MLP instead.

      • Annie June 10, 2019 at 11:59 pm

        Hi Jason, I am wondering why you recommend MLP for his model instead of LSTM?

        • Jason Brownlee June 11, 2019 at 7:55 am

          Generally LSTMs perform poorly for time series. I find MLPs and CNNs often perform better.

  8. Adam March 13, 2018 at 10:26 pm

    Hi Jason,

    Very helpful article, thanks! I have a question regarding the number of inputs in the fit_lstm function. In the main body of the code this function takes 4 values as inputs, but in the section 'Experiments with Features, Neurons, and More Epochs' there is a change to the number of inputs (from 4 to 5) in both lines of the fit_lstm function. Is it possible to provide some details regarding this change?

    • Jason Brownlee March 14, 2018 at 6:22 am

      I don’t recall. Can you point to the specific code change you’re referring to?

  9. Thong Bui March 23, 2018 at 9:19 am

    Hi Jason,

    Thank you for the great article. I am trying to find out if there is a way to determine feature importance (which features are most important?) in the LSTM model, but can't find that in Keras' LSTM model. Is it even possible?

    • Jason Brownlee March 24, 2018 at 6:17 am

      I have not seen feature importance measures for LSTMs, sorry. Perhaps try searching on scholar.google.com?

      • Jay K. August 23, 2020 at 9:11 pm

        Hi Jason, can’t we use methods like Correlation Feature Selection and Mutual Information Feature Selection as you describe in your post on “Feature Selection for Regression Data” for LSTM?

  10. Ada April 4, 2018 at 10:06 am

    Hello, Jason.

    I'm new to time series with LSTMs and I have a specific problem I'm trying to solve, but I can't find good material about it.

    I have many instances of specific bearing temperatures (about 500 bearings). The goal is to find a trend when a bearing has a problem. So, is it possible to train my LSTM with many instances?

    The temperature variable is almost the same across bearings, but when a bearing has a problem, the temperature shows a trend.

    Do you know any good article?

  11. Iva April 25, 2018 at 8:26 pm

    Hello Jason. I have run a similar experiment:
    – I have daily data for 2 years of amount spent, which I log transform
    – I also use the walk-forward validation but predict several days ahead
    – I run 2 scenarios:
    1. A simple one: I used as input just 200 lags of the variable I am predicting (amount);
    2. A more complicated one: I add to the input 1 numeric feature and 6 categorical ones such as holiday (1/0), weekend (1/0), year, month, day, etc. I transform each original categorical feature into n-1 dummies and end up feeding the network with a total of 47 features of length 200 (the number of lags).
    – My training set contains 300 samples (rows), while the validation set contains 60 samples.

    In the complicated scenario, the input is huge and slows down training, so I could hardly experiment with different parameters (epochs, layers, input nodes) and come up with an optimal model. I was wondering what you would advise me to experiment with or change right away; for example, adding more input nodes (currently set to 1000), adding more layers, or decreasing the number of lags and thus increasing the number of samples. I am basically troubled by the ratio of the number of features to the number of samples, and whether adding the additional features would be redundant with so few samples.

  12. matt April 30, 2018 at 2:30 pm

    Hi! I hope this finds you well! I really appreciate your post!

    I have seen your post about walk-forward CV: https://machinelearningmastery.com/backtest-machine-learning-models-time-series-forecasting/

    and I think I understood.

    1. Expanding the train size by 1, and comparing it with the next one.

    You mentioned that we need to fix the minimum number of observations, and you fixed it at 500 for the first train size at that link. So the train size would be 500, 501, 502, 503, ..., len(data)-1.

    Then, as I understood it, while rolling (increasing the train size), or for each roll, are we updating the model or just recording the error?

    However, I do not see any part here where you use walk-forward CV with a minimum number of observations. Or am I missing something?

    Where is the part where you fix the 'minimum number of observations' and update the model here?

    Thank you so much !

    • Jason Brownlee May 1, 2018 at 5:31 am

      In many of the neural net cases, I do not update the model each walk forward; it's too computationally expensive.

  13. yo May 30, 2018 at 6:47 am

    Hi Jason. I was wondering what is the difference between formulating lags as extra features (as is used in this tutorial) vs using time steps to add lags.
    Basically, for every sample there are #lags time steps which contain 1 feature: the lagged y value (at x, x-1, x-2, etc.).
    I’m pretty sure I’ve seen examples using the second method but I can’t really find any explanation/comparison about this anywhere.
    I hope you can answer this!

    • Jason Brownlee May 30, 2018 at 3:05 pm

      The key is that BPTT operates over time steps.

      It’s the difference between learning from the sequence and learning a static input/output mapping.

      This post might help:
      https://machinelearningmastery.com/gentle-introduction-backpropagation-time/

      • Jay K. August 23, 2020 at 9:32 pm

        Hi Jason, with lags as extra features, would it be correct to also use lag time steps, say each sample being L x F, where L is the lagged time steps (t, t-1, …, t-L) in each of the F input features? All but one of the features are lagged versions of the univariate input series.

  14. shiva May 6, 2019 at 7:05 am

    Hi Jason,

    Given a dataset with high temporal variation and seasonality:
    say, months 3, 6, 7, 8, and 9 have very high values for about a week,
    while the other months have low values.

    I was able to capture the low values but not the high peak values.

    To be specific, I am trying to build a model for river flows. The river flows are high when rainfall is high; the other times they stay low. I am not sure how to capture the high peak signals.

    • Jason Brownlee May 6, 2019 at 2:31 pm

      Perhaps you can classify observations as high/low river flow, then develop separate models for those cases?

  15. Babak December 19, 2019 at 8:09 pm

    Thanks for sharing this. I wonder if you have any tutorial on the unsupervised approach – an autoencoder on data with a similar format?

  16. Elena January 23, 2020 at 4:08 pm

    I am working with a grid workload archive dataset. While predicting the incoming job waiting time, how can I choose the input variables to predict the output variable? I have requested time, average CPU used, and a few other attributes. Can an LSTM be trained on these input variables to produce the output variable?

  17. Harry April 15, 2020 at 9:20 am

    Hi Dr. Brownlee,

    Thank you for another detailed blog; your blogs have been tremendously helpful.

    I have a question regarding the theoretical usefulness of increasing the input time steps for univariate data prediction using LSTMs: do all past time steps not automatically get carried forward through the structure of the LSTM?

    Background:
    I'm comparing several methods for hourly water demand prediction. Unlike the shampoo data, the hourly demand data is much longer and there is strong periodicity within the data itself.
    Neural networks and random forests have both performed well when the input time step count is equal to or greater than the data period (24 hours). I initially assumed the same for LSTMs but, from my limited understanding, past observations are automatically carried forward through all blocks, limiting the need for repeated features; however, my work has found that increasing the input time step count to 24 massively increases prediction accuracy, similar to the neural network and random forest.
    I started with LSTMs from another one of your blog posts from 2016 (Time Series Prediction with LSTM Recurrent Neural Networks in Python with Keras); I've modified its structure to support multiple inputs as well as outputs, and I'm currently reaffirming my previous results with the code from this blog.

    • Jason Brownlee April 15, 2020 at 1:20 pm

      You’re welcome.

      Theory is not going to help – I recommend designing controlled experiments in order to discover what works best for your specific dataset and model.

  18. amin August 8, 2020 at 5:35 pm

    Hi Dr. Brownlee,

    Thank you for your blog. I am confused about your second plot: I do not understand how to recognize that increasing the features does not decrease the error (in test RMSE).

    • Jason Brownlee August 9, 2020 at 5:36 am

      We can see that the RMSE does not decrease because the distributions and medians skew higher with the increase in the number of features.

  19. Israt February 28, 2021 at 10:10 pm

    Hi Jason, I am new to LSTMs and I was reading your other blog post
    Feature Selection for Time Series Forecasting with Python
    https://machinelearningmastery.com/feature-selection-time-series-forecasting-python/

    where you mentioned “Generally we would not select lag obs as the time steps for the LSTM; instead we would provide all of the time steps and let the LSTM learn what to use to make good predictions” and next “It can be useful for linear models, and when developing static ML models (not LSTM).”

    So before I try to fully understand this blog, I would really appreciate your comment to help me reconcile these two posts on how to use lagged observations as input features for time series forecasting with LSTMs. Thank you.
