How to Make Predictions with Long Short-Term Memory Models in Keras

The goal of developing an LSTM model is to arrive at a final model that you can use on your sequence prediction problem.

In this post, you will discover how to finalize your model and use it to make predictions on new data.

After completing this post, you will know:

  • How to train a final LSTM model.
  • How to save your final LSTM model, and later load it again.
  • How to make predictions on new data.

Kick-start your project with my new book Long Short-Term Memory Networks With Python, including step-by-step tutorials and the Python source code files for all examples.

Let’s get started.

How to Make Predictions with Long Short-Term Memory Models with Keras
Photo by damon jah, some rights reserved.

Step 1. Train a Final Model

What Is a Final LSTM Model?

A final LSTM model is one that you use to make predictions on new data.

That is, given new examples of input data, you want to use the model to predict the expected output. This may be a classification (assigning a class label) or a regression (predicting a real value).

The goal of your sequence prediction project is to arrive at a final model that performs the best, where “best” is defined by:

  • Data: the historical data that you have available.
  • Time: the time you have to spend on the project.
  • Procedure: the data preparation steps, algorithm or algorithms, and the chosen algorithm configurations.

In your project, you gather the data, spend the time you have, and discover the data preparation procedures, algorithm to use, and how to configure it.

The final model is the pinnacle of this process, the end you seek in order to start actually making predictions.

There is no such thing as a perfect model. There is only the best model that you were able to discover.

How to Finalize an LSTM Model?

You finalize a model by applying the chosen LSTM architecture and configuration on all of your data.

There is no train and test split and no cross-validation folds. Put all of the data back together into one large training dataset and fit your model.

That’s it.
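For example, a minimal sketch of refitting a finalized model on all available data; the tiny contrived train/test splits, architecture, and training settings below are illustrative only:

from numpy import array, concatenate
from keras.models import Sequential
from keras.layers import LSTM, Dense

# toy train/test splits used during model development (illustrative)
X_train = array([0.0, 0.1, 0.2]).reshape((3, 1, 1))
y_train = array([0.1, 0.2, 0.3])
X_test = array([0.3, 0.4]).reshape((2, 1, 1))
y_test = array([0.4, 0.5])

# put all of the data back together into one training dataset
X = concatenate((X_train, X_test))
y = concatenate((y_train, y_test))

# fit the chosen architecture and configuration on all of the data
model = Sequential()
model.add(LSTM(10, input_shape=(1, 1)))
model.add(Dense(1, activation='linear'))
model.compile(loss='mse', optimizer='adam')
model.fit(X, y, epochs=300, shuffle=False, verbose=0)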

With the finalized model, you can:

  • Save the model for later or operational use.
  • Load the model and make predictions on new data.

For more on training a final model, see the post:

Need help with LSTMs for Sequence Prediction?

Take my free 7-day email course and discover 6 different LSTM architectures (with code).

Click to sign-up and also get a free PDF Ebook version of the course.

Step 2. Save Your Final Model

Keras provides an API to allow you to save your model to file.

The model is saved in the HDF5 file format, which efficiently stores large arrays of numbers on disk. You will need to confirm that you have the h5py Python library installed. It can be installed as follows:
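For example, using pip from the command line (prefix with sudo if your environment requires it):

pip install h5py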

You can save a fit Keras model to file using the save() function on the model.

For example:
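A minimal sketch, assuming a fit Keras model in the variable model and the file name lstm_model.h5:

# save a fit model to a single HDF5 file
model.save('lstm_model.h5')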

This single file will contain the model architecture and weights. It also includes the specification of the chosen loss and optimization algorithm so that you can resume training.

The model can be loaded again (from a different script in a different Python session) using the load_model() function.
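A minimal sketch, assuming the file name lstm_model.h5 from the save step above:

from keras.models import load_model

# load a previously saved model from file
model = load_model('lstm_model.h5')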

Below is a complete example of fitting an LSTM model, saving it to a single file and later loading it again. Although the loading of the model is in the same script, this section may be run from another script in another Python session. Running the example saves the model to the file lstm_model.h5.
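A sketch of such a script is below; the small contrived sequence, architecture, and training settings are illustrative only:

from numpy import array
from keras.models import Sequential, load_model
from keras.layers import LSTM, Dense

# return a small contrived training dataset
def get_train():
    seq = [[0.0, 0.1], [0.1, 0.2], [0.2, 0.3], [0.3, 0.4], [0.4, 0.5]]
    seq = array(seq)
    X, y = seq[:, 0], seq[:, 1]
    X = X.reshape((len(X), 1, 1))
    return X, y

# define model
model = Sequential()
model.add(LSTM(10, input_shape=(1, 1)))
model.add(Dense(1, activation='linear'))
# compile model
model.compile(loss='mse', optimizer='adam')
# fit model
X, y = get_train()
model.fit(X, y, epochs=300, shuffle=False, verbose=0)
# save model to a single file
model.save('lstm_model.h5')

# snip...
# later, perhaps run from another script

# load model from the single file
model = load_model('lstm_model.h5')
# make predictions on the training data to confirm the model loaded correctly
yhat = model.predict(X, verbose=0)
print(yhat)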

For more on saving and loading your Keras model, see the post:

Step 3. Make Predictions on New Data

After you have finalized your model and saved it to file, you can load it and use it to make predictions.

For example:

  • On a sequence regression problem, this may be the prediction of the real value at the next time step.
  • On a sequence classification problem, this may be a class outcome for a given input sequence.

Or it may be any other variation based on the specifics of your sequence prediction problem. You would like an outcome from your model (yhat) given an input sequence (X) where the true outcome for the sequence (y) is currently unknown.

You may be interested in making predictions in a production environment, as the backend to an interface, or manually. It really depends on the goals of your project.

Any data preparation performed on your training data prior to fitting your final model must also be applied to any new data prior to making predictions.
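For example, a minimal sketch of applying the same scaling to new data, assuming a MinMaxScaler from scikit-learn was fit on the training data (the values are illustrative):

from numpy import array
from sklearn.preprocessing import MinMaxScaler

# fit the scaler on the training data only
train = array([10.0, 20.0, 30.0, 40.0, 50.0]).reshape((-1, 1))
scaler = MinMaxScaler(feature_range=(0, 1))
scaler.fit(train)

# apply the same transform to new data before making a prediction
new_data = array([35.0]).reshape((-1, 1))
new_scaled = scaler.transform(new_data)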

Predicting is the easy part.

It involves taking the prepared input data (X) and calling one of the Keras prediction methods on the loaded model.

Remember that the input for making a prediction (X) comprises only the input sequence data required to make a prediction, not all prior training data. In the case of predicting the next value in one sequence, the input sequence would be 1 sample with the fixed number of time steps and features used when you defined and fit your model.
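For example, continuing the contrived model above (which was defined with 1 time step and 1 feature), a single new input sequence could be prepared as follows:

from numpy import array

# one new sample with 1 time step and 1 feature, matching the model definition
X = array([0.5])
X = X.reshape((1, 1, 1))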

For example, a raw prediction in the shape and scale of the activation function of the output layer can be made by calling the predict() function on the model:
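A minimal sketch, assuming the finalized model saved above and the prepared input X:

from keras.models import load_model

# load the finalized model and make a raw prediction
model = load_model('lstm_model.h5')
yhat = model.predict(X, verbose=0)
print(yhat)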

On a classification problem, the prediction of a class index can be made by calling the predict_classes() function on the model, and the prediction of class probabilities can be made by calling the predict_proba() function.

Note that these two functions are only available on Sequential models and have been removed in recent versions of Keras; the equivalent is to call predict() and derive the class labels from the returned probabilities, as shown below.
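A sketch for a Sequential classification model (not the regression example above), where model and X are assumed to be a trained classifier and its prepared input:

from numpy import argmax

# older Keras API, available on Sequential models only:
# yhat_classes = model.predict_classes(X, verbose=0)
# yhat_probs = model.predict_proba(X, verbose=0)

# equivalent using predict() in recent Keras versions
yhat_probs = model.predict(X, verbose=0)
yhat_classes = argmax(yhat_probs, axis=-1)  # class index for a softmax output layer
# for a single sigmoid output: yhat_classes = (yhat_probs > 0.5).astype('int32')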

For more on the life-cycle of your Keras model, see the post:

Further Reading

This section provides more resources on the topic if you are looking to go deeper.

Posts

API

Summary

In this post, you discovered how to finalize your model and use it to make predictions on new data.

Specifically, you learned:

  • How to train a final LSTM model.
  • How to save your final LSTM model, and later load it again.
  • How to make predictions on new data.

Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.

Develop LSTMs for Sequence Prediction Today!

Long Short-Term Memory Networks with Python

Develop Your Own LSTM models in Minutes

...with just a few lines of python code

Discover how in my new Ebook:
Long Short-Term Memory Networks with Python

It provides self-study tutorials on topics like:
CNN LSTMs, Encoder-Decoder LSTMs, generative models, data preparation, making predictions and much more...

Finally Bring LSTM Recurrent Neural Networks to
Your Sequence Predictions Projects

Skip the Academics. Just Results.

See What's Inside

172 Responses to How to Make Predictions with Long Short-Term Memory Models in Keras

  1. Avatar
    Klaas Brau August 30, 2017 at 2:03 am #

    Thanks Jason

    One question. Why should we finalize the model on the whole data. We would change the weights again right? The model which was trained on the training data is the one we tested on unseen data (test set). The new model (trained on all data) could be worse, overfitted,… not?

  2. Avatar
    ketan September 14, 2017 at 7:02 pm #

    I tried both keras and tensorflow. Tensorflow has more features.

  3. Avatar
    tieliu October 13, 2017 at 6:53 pm #

    thanks, Jason.

    I ran your sample code, but found the result as below, which seems not expected.

    … …
    Epoch 290/300
    6/6 [==============================] – 0s – loss: 0.0155
    Epoch 295/300
    6/6 [==============================] – 0s – loss: 0.0153
    Epoch 296/300
    6/6 [==============================] – 0s – loss: 0.0153
    Epoch 297/300
    6/6 [==============================] – 0s – loss: 0.0152
    Epoch 298/300
    6/6 [==============================] – 0s – loss: 0.0152
    Epoch 299/300
    6/6 [==============================] – 0s – loss: 0.0152
    Epoch 300/300
    6/6 [==============================] – 0s – loss: 0.0151
    [[ 0.28978038]
    [ 0.31878966]
    [ 0.3477335 ]
    [ 0.37631655]
    [ 0.4042924 ]
    [ 0.43146992]]

    Suppose input of X are 0 0.1 0.2 0.3 0.4 0.5, then predict value of y should be similar to 0.1 0.2… 0.6. but the result is such value as 0.28978038… … 0.43146992.

    can you check more about it?

    • Avatar
      Jason Brownlee October 14, 2017 at 5:43 am #

      What is the problem exactly?

      • Avatar
        Kingsley Udeh June 29, 2018 at 10:31 am #

        Hi Dr. Jason,

        Thanks so much for your tutorials.

        I will like to clarify what Tieliu’s question was:

        He was making a reference to the example cited on this page where you demonstrated making predictions with LSTM as follows:

        # return training data
        def get_train():
        seq = [[0.0, 0.1], [0.1, 0.2], [0.2, 0.3], [0.3, 0.4], [0.4, 0.5]]
        seq = array(seq)
        X, y = seq[:, 0], seq[:, 1]
        X = X.reshape((len(X), 1, 1))
        return X, y

        # define model
        model = Sequential()
        model.add(LSTM(10, input_shape=(1,1)))
        model.add(Dense(1, activation=’linear’))
        # compile model
        model.compile(loss=’mse’, optimizer=’adam’)
        # fit model
        X,y = get_train()
        model.fit(X, y, epochs=300, shuffle=False, verbose=0)
        # save model to single file
        model.save(‘lstm_model.h5’)

        # snip…
        # later, perhaps run from another script

        # load model from single file
        model = load_model(‘lstm_model.h5’)
        # make predictions
        yhat = model.predict(X, verbose=0)
        print(yhat)

        When you run the code, the trained model did not make good prediction of the actual response variable y. It has the following predicted yhat values:

        [[0.24346247]
        [0.27623463]
        [0.30942053]
        [0.34286284]
        [0.37640885]]

        Rather than the actual y values:

        [[0.1]
        [0.2]
        [0.3]
        [0.4]
        [0.5]]

        In other words, if we approximate the predicted values to 2 decimal places, the model predicted 0.2, 0.3, and 0.4, correctly, but failed in predicting 0.1 and 0.5. In this situation, could we say our final model should be discarded and we then decide to train another model with a different set of procedures and configurations?

        Sorry, this question is a bit longer than expected, but I wanted to clarify the initial question as there was no further conversation on this.

        • Avatar
          Jason Brownlee June 29, 2018 at 3:26 pm #

          Sorry, I don’t follow, why would we discard the model?

          • Avatar
            Kingsley Udeh June 29, 2018 at 7:25 pm #

            Because not all the predicted values are equal to the actual values as shown between yhat and y, or am I missing some important concept of your tutorial?

          • Avatar
            Jason Brownlee June 30, 2018 at 6:06 am #

            No model is perfect. If perfection was possible we would not need machine learning.

          • Avatar
            Kingsley Udeh June 30, 2018 at 11:12 am #

            Got it! Thanks so much.

            BWT:How do we show or tell that a model is good? Do we just only care about the scores, for example, a regression problem?

          • Avatar
            Jason Brownlee July 1, 2018 at 6:22 am #

            Correct.

  4. Avatar
    Fawad October 16, 2017 at 9:27 pm #

    Hi, I want to predict for a whole record of shape like (160, 72) for single time step. How would I shape my numpy array of features for test. For more clear understanding, I have trained my model on trainX with shape (235, 1, 72) and trainY with shape (235,). Now I want to predict a single timestep but for 160 rows. How to do that?

    • Avatar
      Jason Brownlee October 17, 2017 at 5:45 am #

      See this post on how to reshape data for LSTMs:
      https://machinelearningmastery.com/reshape-input-data-long-short-term-memory-networks-keras/

      • Avatar
        Ryan January 7, 2019 at 9:20 pm #

        Hello Jason,

        Thanks for the fantastic tutorial. It helped me trained a model for predicting energy patterns.

        That said, I am interested in putting this into a real-time prediction production site and am wondering the following:

        1. I am planning to use Flask to publish said trained model as an API to produce prediction in a live website. Is that an appropriate choice of tool for deployment?

        2. I have reshaped and scaled my raw data to produce the model… does it mean that I will need to reshape and scale all the future data prior to be fitted into the trained model for it to output relevant predictions?

        Thanks and appreciate your kind reply for my questions above!

        • Avatar
          Jason Brownlee January 8, 2019 at 6:49 am #

          Choices of tools and frameworks for production are really a decision you and your stakeholders should be making. I cannot comment in any meaningful way as I won’t be responsible for the decision.

          Any data preparation applied to training data must be applied to new data fed to the model in the future.

  5. Avatar
    joseph February 26, 2018 at 2:59 pm #

    Hi Jason,

    When performing model.predict, i do get some inconsistent outputs. Does that mean my model is wrong?as far as i know, possible scenario for inconsistent outputs is if i try to re-fit the model and not during prediction. Am i missing something? hope to get some comments from you. thank you very much

  6. Avatar
    Maryam March 18, 2018 at 8:36 am #

    Hi Jason,
    I am so grateful for the post U shared with us.
    I just want to load 3 finalized model namely RNN, CNN, LSTM in one script concurrently which have already saved as a finalized model in keras to using them in an ensemble model to gain an average result for predicting. Is it necessary to use dask data frame to load multi finalized (saved model) models?? or loading multi finalized model has the same commands as loading one finalized model?
    Thank U in advanced for taking your time to replying.

    • Avatar
      Jason Brownlee March 19, 2018 at 6:02 am #

      A Pandas DataFrame is not required. Each model can be saved and loaded to and from separate files and used in an ensemble.

  7. Avatar
    Delaram April 2, 2018 at 7:25 am #

    Hi Jason,
    I am really appreciated about this helpful tutorial.
    I train and then save a finalized cnn model in a script after that I load the finalized cnn model in another different script just used this command :(load_model(‘cnn_model.h5’) .
    In fact I have a test dataset which does not have any label and I wanna gain the proability of belonging of each sample to each class by this command : (model_cnn_final.predict_proba) but gave me this error:((AttributeError: ‘Model’ object has no attribute ‘predict_proba’) and also when I applied this command:[yhat=model.predict_classes(X) ] it gave me this error :(‘Model’ object has no attribute ‘predict_classes’).
    I have used the command :(yhat = model.predict(X)) and it worked fine.

    what is the problem with these commands which cuase error??
    how can I fixe the errors?

    • Avatar
      Jason Brownlee April 2, 2018 at 2:47 pm #

      I believe these methods are only supported on Sequential models, you may be using the functional Model API. In that case, you may be limited to the predict() function alone, which will return probabilities in the case of a softmax activation function in the output layer.

  8. Avatar
    Fredrik Nilsson April 4, 2018 at 6:27 am #

    Hi

    This is super good Jason!

    Thanks your writings 🙂

  9. Avatar
    Bastien April 19, 2018 at 2:24 am #

    Hi Jason,

    Thank you for this really helpful tutorial.

    I have a question. I think I am missing something to make predictions. I don’t understand what should be the input on model.predict(X) to predict new data. Let’s say I have one year of data (sampled every hour) and I want to predict the following week. What should my X be ?

  10. Avatar
    ata May 2, 2018 at 7:26 am #

    Hello, it is realy nice explanation, but I want to ask what verbose does ? why we assign it to 0 ?

    • Avatar
      Jason Brownlee May 3, 2018 at 6:27 am #

      Verbose gives output. We can turn off this output by setting it to 0.

  11. Avatar
    Francisco June 6, 2018 at 11:46 pm #

    Hi Jason, thank you for your tutorial. I’m very grateful for what I have learnt from you.

    I have a question. Let’s say I have my LSTM model and it is working properly with the train and test data, so the model is ready to be used in production. The data that I have has the following features: timestamp, price and capitalization. If I want to predict tomorrow’s price, must
    I provide timestamp and capitalization values? or only timestamp?

    • Avatar
      Jason Brownlee June 7, 2018 at 6:29 am #

      To make one prediction, it must be provided with one input sequence, as defined by your model during training.

  12. Avatar
    SM June 12, 2018 at 12:42 am #

    Hi Jason, all your blogposts are super insightful. Cannot wait to read more articles.

    Earlier, I had not preprocessed X and y correctly. Now I have used the “Stacked LSTM for sequence classification” referenced in keras homepage https://keras.io/getting-started/sequential-model-guide/#examples.

    This is how my result looks. https://github.com/sagar-m/character-classification/blob/master/SAP.ipynb Unfortunately, I do not have ytest.txt to verify my results. Please have a look and let me know if you have any comments.

    My number of classes to predict is 12, however the predicted output range in classes was 1 to 10. Not sure why and if it incorrect.

    Thanks a lot again.

  13. Avatar
    SM June 12, 2018 at 12:48 am #

    Hi Jason, one question I have : “validation schemes are supposed to be used to estimate quality of the model. When you found the right hyper-parameters and want to get test predictions don’t forget to retrain your model using all training data.”

    Once the model is trained with validation set, should I retrain the model using all training data?

    Thank you.

  14. Avatar
    Hamied June 19, 2018 at 11:17 pm #

    Hi Jason,
    I have a concern related RealTime validation if you have input test data (60 frame per ms ) and you would like to do prediction in Realtime . How you could ensure the prediction will be done. ?

    On the other hand, lets give an example that, we ‘re getting input test data ( 100 x 162 ) , time stamp x features . We are fitting all the information in one sample array (1,100,162,1), then you would like to do prediction for each time instance when you receive the data. The problem the streaming dataset is too fast to be catch by model:
    y_predict = model.predict_classes(test_input)
    I would like to know if you have any suggestions regarfing such a problem. In term of how we could make the prediction possible in realtime streaming data for each time the input change only. ?

    If you run it with this speed ( the prediction will be going on frem old dataset . Cant catch the new samples )

    Thanks in advance

    • Avatar
      Jason Brownlee June 20, 2018 at 6:27 am #

      Making predictions is very fast.

      Only training the model is slow, which is only done once before it is used.

  15. Avatar
    Maryam June 23, 2018 at 6:25 am #

    Hi Jason,
    Thank you for the awesome and also practical tutorial as ever been.
    I face a question as that is if I wanna use predict(x_dataset) function, is it necessary to padding “x_dataset” or not??
    I will be grateful if you answer the question.
    Best Regard
    Maryam

    • Avatar
      Jason Brownlee June 24, 2018 at 7:25 am #

      To make a prediction, the input data must be prepared in the same way as the training data, including lengths and transforms.

  16. Avatar
    Matteo August 9, 2018 at 10:07 pm #

    Hi.
    I have one question.
    I have a training set with the labels, let’s say that I play cheess and I have the historical matches with label of the winner [0 = player1, 1 = player 2]
    And I want to predict if after 10/15 moves I’ll have more probability to win or lose.
    How can I write the model that predict a number between 0 and 1 ( close to 0 means that I’ll win and close to 1 means i’ll lose )
    Thanks !

    • Avatar
      Jason Brownlee August 10, 2018 at 6:14 am #

      A good approach might be to use rating systems to estimate the skill of each player and feed this into a predictive model.

  17. Avatar
    Noe August 24, 2018 at 7:35 pm #

    Hello Jason, and many thanks for this awesome tutorial.

    My preoccupation is about using the trained and tested model to predict the future. This means values after the test set.

    Thank you.

  18. Avatar
    Alireza September 27, 2018 at 10:06 pm #

    Hi Jason,
    I tried to make predictions just for one Row input data. But I like to Know should my new data be scaled or not? if yes I tried with my scaled model but I got “0” for each feature!! which way is correct? please help me.

    Thank you.

    • Avatar
      Jason Brownlee September 28, 2018 at 6:15 am #

      Your input data must be prepared in the same way as the training data.

      If the training data was scaled, then new data must be scaled using the same coefficients.

  19. Avatar
    bedorlan October 11, 2018 at 1:48 am #

    Finally an easy to understand gist on how to implement an LSTM. Thank you!

  20. Avatar
    Saurabh Swaroop October 11, 2018 at 10:59 am #

    Hello Jason,

    I tried 6.7 code example from Long Short Term Memory Networks with Python. But its giving error.

    from random import randint
    from numpy import array
    from numpy import argmax
    from keras.models import Sequential
    from keras.layers import LSTM
    from keras.layers import Dense

    # generate a sequence of random numbers in [0, n_features)
    def generate_sequence(length, n_features):
    return [randint(0, n_features-1) for _ in range(length)]

    # one hot encode sequence
    def one_hot_encode(sequence, n_features):
    encoding = list()
    for value in sequence:
    vector = [0 for _ in range(n_features)]
    vector[value] = 1
    encoding.append(vector)
    return array(encoding)

    # decode a one hot encoded string
    def one_hot_decode(encoded_seq):
    return [argmax(vector) for vector in encoded_seq]

    # generate one example for an lstm
    def generate_example(length, n_features, out_index):
    # generate sequence
    sequence = generate_sequence(length, n_features)
    # one hot encode
    encoded = one_hot_encode(sequence, n_features)
    print(“Shape of encoded is:”, encoded.shape)
    # reshape sequence to be 3D
    X = encoded.reshape((1, length, n_features))
    # select output
    y = encoded[out_index].reshape(1, n_features)
    return X, y

    # define model
    length = 5
    n_features = 10
    out_index = 2
    model = Sequential()
    model.add(LSTM(25, input_shape=(length, n_features)))
    model.add(Dense(n_features, activation=’softmax’))
    model.compile(loss=’categorical_crossentropy’, optimizer=’adam’, metrics=[‘acc’])
    model.summary()

    # fit model
    for i in range(10000):
    X, y = generate_example(length, n_features, out_index)
    model.fit(X, y, epochs=1, verbose=2)

    Shape of encoded is: (1, 10)
    —————————————————————————
    ValueError Traceback (most recent call last)
    in ()
    1 # fit model
    2 for i in range(10000):
    —-> 3 X, y = generate_example(length, n_features, out_index)
    4 model.fit(X, y, epochs=1, verbose=2)

    in generate_example(length, n_features, out_index)
    7 print(“Shape of encoded is:”, encoded.shape)
    8 # reshape sequence to be 3D
    —-> 9 X = encoded.reshape((1, length, n_features))
    10 # select output
    11 y = encoded[out_index].reshape(1, n_features)

    ValueError: cannot reshape array of size 10 into shape (1,5,10)

    • Avatar
      Jason Brownlee October 11, 2018 at 4:13 pm #

      It suggest that the shape of your data does not match the expectation of your model.

      You can change the shape of the data or change the expectation of the model.

  21. Avatar
    Shooter November 1, 2018 at 2:13 pm #

    Hello Jason,
    Thanks for the great tutorial. I just wanted to know how can i calculate computational complexity of LSTM?

    Thanks in advance.

    • Avatar
      Jason Brownlee November 1, 2018 at 2:34 pm #

      Sorry, I don’t have material on calculating the computational complexity of neural networks.

  22. Avatar
    nandini January 10, 2019 at 10:07 pm #

    I would like to predict the category of sentence using classification ,but here i need to know while doing model predicting ,if a sentence is already given to the model for predictions even after first time i need to get you have already predicted this sentence.

    Please suggest on this issue,this is useful for chatbot application,if a question was already asked in conservation , again user had asked same question i need to get yo uhave already this question to previoustime.

    • Avatar
      Jason Brownlee January 11, 2019 at 7:46 am #

      Perhaps add an if-statement to look up the sentence in a hashtable before passing a sentence to the model for prediction.

      Sounds like engineering, not machine learning.

      • Avatar
        nandini January 11, 2019 at 6:18 pm #

        Thanks for your answer

  23. Avatar
    Tom January 20, 2019 at 12:58 pm #

    X=[x =21, x1=13, ‘grassMinTemp’=15]

    yhat = model.predict(X, verbose=0)
    print(yhat)

    would i do like this?

    • Avatar
      Jason Brownlee January 21, 2019 at 5:30 am #

      No, input is an array of numbers, just like training data, e.g. X = [21, 13, 15]

  24. Avatar
    Jessie January 20, 2019 at 1:20 pm #

    if multiple dependent variables in LSTM model , what should i do ?

    X = [x1=12,x2=1234]
    model =the path of the model
    yhat = model.predict(X)
    thx

  25. Avatar
    Jeeva T February 25, 2019 at 3:04 pm #

    your model show’s like this error how do i solve it

    Using TensorFlow backend.
    Traceback (most recent call last):
    File “C:\Python36\lib\site-packages\tensorflow\python\pywrap_tensorflow.py”, line 58, in
    from tensorflow.python.pywrap_tensorflow_internal import *
    File “C:\Python36\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py”, line 28, in
    _pywrap_tensorflow_internal = swig_import_helper()
    File “C:\Python36\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py”, line 24, in swig_import_helper
    _mod = imp.load_module(‘_pywrap_tensorflow_internal’, fp, pathname, description)
    File “C:\Python36\lib\imp.py”, line 242, in load_module
    return load_dynamic(name, filename, file)
    File “C:\Python36\lib\imp.py”, line 342, in load_dynamic
    return _load(spec)
    ImportError: DLL load failed: The specified module could not be found.

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
    File “D:/lstm_model.py”, line 1, in
    from keras.models import Sequential
    File “C:\Python36\lib\site-packages\keras\__init__.py”, line 3, in
    from . import utils
    File “C:\Python36\lib\site-packages\keras\utils\__init__.py”, line 6, in
    from . import conv_utils
    File “C:\Python36\lib\site-packages\keras\utils\conv_utils.py”, line 9, in
    from .. import backend as K
    File “C:\Python36\lib\site-packages\keras\backend\__init__.py”, line 89, in
    from .tensorflow_backend import *
    File “C:\Python36\lib\site-packages\keras\backend\tensorflow_backend.py”, line 5, in
    import tensorflow as tf
    File “C:\Python36\lib\site-packages\tensorflow\__init__.py”, line 24, in
    from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import
    File “C:\Python36\lib\site-packages\tensorflow\python\__init__.py”, line 49, in
    from tensorflow.python import pywrap_tensorflow
    File “C:\Python36\lib\site-packages\tensorflow\python\pywrap_tensorflow.py”, line 74, in
    raise ImportError(msg)
    ImportError: Traceback (most recent call last):
    File “C:\Python36\lib\site-packages\tensorflow\python\pywrap_tensorflow.py”, line 58, in
    from tensorflow.python.pywrap_tensorflow_internal import *
    File “C:\Python36\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py”, line 28, in
    _pywrap_tensorflow_internal = swig_import_helper()
    File “C:\Python36\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py”, line 24, in swig_import_helper
    _mod = imp.load_module(‘_pywrap_tensorflow_internal’, fp, pathname, description)
    File “C:\Python36\lib\imp.py”, line 242, in load_module
    return load_dynamic(name, filename, file)
    File “C:\Python36\lib\imp.py”, line 342, in load_dynamic
    return _load(spec)
    ImportError: DLL load failed: The specified module could not be found.

    Failed to load the native TensorFlow runtime.

    See https://www.tensorflow.org/install/errors

    for some common reasons and solutions. Include the entire stack trace
    above this error message when asking for help.

    • Avatar
      Jason Brownlee February 26, 2019 at 6:13 am #

      Sorry to hear that, perhaps tensorflow is not installed correctly?

      Perhaps try re-installing?
      Perhaps try theano instead?

  26. Avatar
    sai March 14, 2019 at 11:39 pm #

    I couldn’t found clarity about prediction in this model. My doubt is how can one predict for future values without having test data. suppose we have a univariate data set(2012 Jan-2015 Dec) how can we get values for (2016 Jan to 2016 Dec).plz help me.Thank you in advance

    • Avatar
      Jason Brownlee March 15, 2019 at 5:34 am #

      yhat = model.predict(X)

      • Avatar
        lala smith April 29, 2019 at 7:16 pm #

        what will be the X?

        • Avatar
          Jason Brownlee April 30, 2019 at 6:50 am #

          It will be whatever your model expects as input, e.g. an array of samples.

          • Avatar
            harish February 9, 2021 at 6:51 pm #

            hello Jason,
            same as the above question, I still have question that, what values i need to give for input x for prediction. Here uh mentioned ‘will be whatever your model expects as input, e.g. an array of samples’,
            i given the train data which is having length 2708 as input and it given same length as predicted values with length 2798.Here i want a confirmation that that predicted values are upcoming are what??
            how i need to assume it??

          • Avatar
            Jason Brownlee February 10, 2021 at 8:02 am #

            The inputs provided to the model depend on how you have prepared your data and defined your model.

            If you have trained your model to take 7 days of input and predict one day, then you need to provide the last 7 days of data to get the next day as output.

  27. Avatar
    SAEED April 10, 2019 at 5:07 pm #

    Hello Jason,
    suppose time series is 1,2,3,4,5,6,7,8…..
    If we want to predict next (e.g 9th, 10th, 11th…) data points in time series using LSTM.
    how to do that?
    model.predict(X) What would be X?

    • Avatar
      Jason Brownlee April 11, 2019 at 6:32 am #

      It depends on how you have framed your problem.

      If the model expects 3 inputs to predict 1 output and those 3 inputs are the 3 prior obs, then:

      Does that help?

      • Avatar
        Al October 2, 2019 at 10:04 pm #

        Hi Jason, sorry to middle in uninvited. I am on the same spot and to me it looks like the model indeed works as long as there is data in X without projecting any future predictions. In your example above a prediction over X = asarray([[[6],[7],[8]]]) will plot a curve following along those values, and not a projection of following ones. The model.predict(X) I look for is a prediction of values that do not exist yet and for which there is no line plot of true data. If today is October the 2nd and I want to predict the open stock price of the future 7 days (when there is no existing true value yet), how can this be done? populating X with the last x steps of the full dataset will only return a nice prediction over those last x steps which I already know… Thanks!

  28. Avatar
    ask April 21, 2019 at 6:05 am #

    Hi jason ,
    For exemple if i want to predict the value of a currency then my seq will be dates ?

  29. Avatar
    mbelahcen April 25, 2019 at 7:53 pm #

    Hello Jason!

    To reduce variance, I used the average of 3 predictions using 3 different models created with your “fit_lstm” function in another tutorial and I get better results.
    forecast1 = model1.predict(test_reshaped, batch_size=batch_size)
    forecast2 = model2.predict(test_reshaped, batch_size=batch_size)
    forecast3 = model3.predict(test_reshaped, batch_size=batch_size)

    However, can I save this into ONE model for future use?

  30. Avatar
    lala smith April 29, 2019 at 7:14 pm #

    hey jason i need to ask how to predict the next 1000 data from the model?

    • Avatar
      lala smith April 29, 2019 at 7:19 pm #

      what i meant is if i already save the model with 8000 window size and total of data is 12000 and when i want to use the model with 8000 data it has to have 8000 window size and how do i predict the 4000 if i can’t create the shape?
      def create_dataset(dataset, look_back = 1):
      data_X, data_Y = [], []
      for i in range(len(dataset) – look_back – 1):
      a = dataset[i:(i + look_back), 0]
      data_X.append(a)
      data_Y.append(dataset[i + look_back, 0])

      return(np.array(data_X), np.array(data_Y))

      can you please help me how can i predict the next 4000 cause i keep getting error on the shape.
      thank you

      • Avatar
        Jason Brownlee April 30, 2019 at 6:52 am #

        Sorry, I don’t follow.

        Perhaps you can elaborate?

    • Avatar
      Jason Brownlee April 30, 2019 at 6:50 am #

      You can develop a model to predict 1000 steps at once, or use a recursive strategy.

      I list some approaches here:
      https://machinelearningmastery.com/faq/single-faq/how-do-you-use-lstms-for-multi-step-time-series-forecasting

  31. Avatar
    mbelahcen May 24, 2019 at 12:34 am #

    Hello Jason,

    The model I developed performs very well on the test set. However for prediction its performance drops. Is it an overfitting problem? Could you go please tell me how to counter this problem?

  32. Avatar
    Zach June 20, 2019 at 11:47 pm #

    Hi Jason,

    I am trying to forecast into the future using my LSTM and my training data contains features that are unavailable for future X inputs (e.g. Stock price). What are some ways to deal with this issue and still be able to make predictions for the future?

    Thanks.

    • Avatar
      Jason Brownlee June 21, 2019 at 6:38 am #

      Frame the prediction problem with the data that you do have. E.g. how you intend to use the model should dictate how the problem is framed.

  33. Avatar
    Lopa June 27, 2019 at 12:14 am #

    Hi Jason,

    I have tried to implement this tutorial in a real life case & predicted 100 future time steps (in a univariate scenario).

    As a next step I have included some seasonal dummies & other predictors. In order to predict 100 future time steps I have used the following code:

    #future unknown predictions: in this case, test_set doesn’t exist

    future_pred_count = 100 #let’s predict 100 new steps

    model.reset_states() #always reset states when inputting a new sequence

    #first, let set the model’s states (it’s important for it to know the previous trends)
    predictions = model.predict(previous_inputs) #this creates states

    #future predictions
    future = []
    currentStep = predictions[:,-1:,:] #last step from the previous prediction

    for i in range(future_pred_count):
    currentStep = model.predict(currentStep) #get the next step
    future.append(currentStep) #store the future steps

    #after processing a sequence, reset the states for safety
    model.reset_states()

    But I am getting an error IndexError: too many indices for array. It would be great if you can help.

  34. Avatar
    Lopa June 27, 2019 at 12:25 am #

    Please ignore my last question

  35. Avatar
    Srijan Sah July 4, 2019 at 4:15 pm #

    Can you please help me in predicting the sales of future dates using the LSTM model trained in the given article. It would be very helpful.

    https://towardsdatascience.com/predicting-sales-611cb5a252de

    In this article, prediction has been done on the time period given in the dataset with all the features available but i want to predict it for future dates.

    Looking towards your reply.

  36. Avatar
    guddu August 19, 2019 at 5:55 pm #

    Hi Jason,

    If I have 3 files in my train data and 1 file in my testing data.
    and in the file, there is supervised data which has viscosity and temperature at time t and t-1 timestamp. Can I predict the temp and log viscosity on time for the test set also?

    My data is being made with respect to time and I have to predict at this time step this will be the viscosity and that will be the temperature. I am able to do it in one file by diving the data into 75 perc training and 25 perc in testing. But what If I want to train my model on 3 files like this and test on 1 file. Is it possible with time-series data`?

    • Avatar
      Jason Brownlee August 20, 2019 at 6:23 am #

      I believe so, perhaps experiment and see what is viable?

  37. Avatar
    Rohith September 4, 2019 at 7:21 pm #

    Hello,
    How do i split time series data set to predict for next 7 days

  38. Avatar
    Lopa October 7, 2019 at 5:30 am #

    Hi Jason,

    An LSTM model gives different output for different runs unless you assign a seed. In that case how should one decide on which model to be used (basically which weights) ?

    • Avatar
      Jason Brownlee October 7, 2019 at 8:32 am #

      Correct.

      Use the average performance of the model over multiple training runs.

  39. Avatar
    Lopa October 7, 2019 at 8:36 am #

    Alright the way you have demonstrated in one of the articles. Thanks Jason.

  40. Avatar
    vee9587 October 9, 2019 at 3:37 pm #

    hi Jason,
    this is the code i used to make a prediction out of my saved lstm model.
    the dataset is one row of inputs with the header and index column which is:
    0 0 0 0 0 0 0 0 0 26.1 5.201
    i want to predict the last column upto 2 time steps. (t and t+1) i wrote the lstm model code accordingly.

    prediction code:

    dataset = read_csv(‘predict.csv’, header=0, index_col=0)
    dataset.columns = [‘Ambewela’, ‘Annfield’, ‘Campion’,’Helboda’,’Holmwood’,’Hatton Police Station’, ‘Labukelle’,’Sandringham’,’Watawala’, ‘El – Nino’, ‘Inflow’]
    dataset.index.name = ‘Date’

    print(“#################test1###############”)
    print(dataset.head())

    values = dataset.values
    groups = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

    # normalize features
    scaler = MinMaxScaler(feature_range=(0, 1))
    scaled = scaler.fit_transform(values)
    print(“#################test6###############”)
    print(scaled)

    days=1
    test_X = scaled[:days, :]
    print(“#################test9###############”)
    print(test_X)

    test_X = test_X.reshape((test_X.shape[0], 1, test_X.shape[1]))

    # load model from single file
    model = load_model(‘modelTest.h5’)
    yhat = model.predict(test_X)
    print(“#################test10###############”)
    print(yhat)

    #test_X = test_X.reshape((test_X.shape[0], test_X.shape[2]))
    inv_yhat = concatenate((yhat, test_X), axis=1)
    inv_yhat = scaler.inverse_transform(inv_yhat)
    inv_yhat = inv_yhat[:,0]
    print(“#################test11###############”)
    print(inv_yhat)

    however i noticed that the 0,1 min max scaler values of the dataset is all zeros.

    this is the output i got:
    [1 rows x 11 columns]
    #################test6###############
    [[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]]
    #################test9###############
    [[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]]
    #################test10###############
    [[0.24231862]]
    Traceback (most recent call last):

    File “”, line 1, in
    runfile(‘F:/documents/Final Year Project/2 python test code/load model.py’, wdir=’F:/documents/Final Year Project/2 python test code’)

    File “F:\software\anaconda 3\lib\site-packages\spyder_kernels\customize\spydercustomize.py”, line 786, in runfile
    execfile(filename, namespace)

    File “F:\software\anaconda 3\lib\site-packages\spyder_kernels\customize\spydercustomize.py”, line 110, in execfile
    exec(compile(f.read(), filename, ‘exec’), namespace)

    File “F:/documents/Final Year Project/2 python test code/load model.py”, line 60, in
    inv_yhat = concatenate((yhat, test_X), axis=1)

    ValueError: all the input arrays must have same number of dimensions

    i wanna know whats wrong with the minmax scaler and why i am only getting one output?

  41. Avatar
    Gadelhag October 17, 2019 at 1:28 am #

    Hi Dr. Jason

    Thank you for the tutorial, it is really interested. I am just wondering is it possible to have the output of the prediction using LSTM not as a crisp value? In other words, if I have to predict 5 different classes, I want to have the results as degree of belonging such as 0.6 for class one, 0.2 for class two and so on.

    Thanks in advance

  42. Avatar
    usama adam February 24, 2020 at 8:49 pm #

    hi Dr.. sorry i am very new

    where did you define ythat exactly i mean i have the model i want to predict with one sentence
    my model is seq2seq model

    • Avatar
      Jason Brownlee February 25, 2020 at 7:45 am #

      yhat is the out put of the model when you call predict()

      The output is whatever you trained your model to do.

  43. Avatar
    usama adam March 4, 2020 at 12:01 am #

    hi Dr. jason
    i am still searching though the internet and trying to find the solution but i can not find it

    i have a machine translation seq2seq build with keras and now i want to make a prediction
    but i am getting a weird error and i do not know how and exactly what is happening with the shape of my model .

    code:

    model = load_model(‘model.h5’)

    single_x_test = [‘how are you’]

    model.predict(np.array(single_x_test, ndmin=0))

    error:
    model = load_model(‘model.h5’)

    single_x_test = [‘how are you’]

    model.predict(np.array(single_x_test, ndmin=0))

    • Avatar
      Jason Brownlee March 4, 2020 at 5:56 am #

      I don’t see an error. Perhaps try posting your code to stackoverflow.

  44. Avatar
    Laiba March 26, 2020 at 9:21 pm #

    Hi Jason,
    I appreciate your work. Its quite helpful. I have trained and tested a CNN-LSTM model but I am having problem in using the saved .h5 model file for real time predictions with the help of webcam. If you could please guide about that.
    Regards

  45. Avatar
    Abdel April 10, 2020 at 6:53 pm #

    Hey Jason, your tutorials are very impressive!!
    i have a simple question, do you have any example on Predicting the future on a timeseries problem but beyond the dataset (i mean generating new dates that we dont have in the dataset)
    Thanks again for all your explanations!!

    • Avatar
      Jason Brownlee April 11, 2020 at 6:13 am #

      The above tutorial explores this exactly.

      • Avatar
        Michael Nguyen April 15, 2020 at 9:52 pm #

        Hi Jason,

        1, You used time_steps = 1 for above example. Now, i have new data point (for example
        0.5) i have to reshape to [1,1,1] and predict. Is that correct?

        2, One more question: above example, you don’t tranform any data. I mean with some train
        data like stock prices, we have to scale data such Minmax (rang 0 to 1) or Standard. Now
        we have present price (ex: 118,12 not in range(0,1)). How can we predict tomorow price?

        Thanks !

        • Avatar
          Jason Brownlee April 16, 2020 at 6:00 am #

          No, the input shape should match the expectation of the model.

          Yes, it can be a good idea to scale data prior to modeling, and any scaling applied to the training dataset should be applied to new data.

          • Avatar
            Michael Nguyen April 16, 2020 at 11:59 am #

            Yes, but with time step equal 1 mean that we predict next day, and sample has shape [1, 1, 1]. So how can we scale data with only one point?

          • Avatar
            Jason Brownlee April 16, 2020 at 1:23 pm #

            I don’t understand, sorry. Perhaps you can rephrase your question?

          • Avatar
            Michael Nguyen April 23, 2020 at 8:48 pm #

            Sorry Jason,

            I mean If the model expects 1 input to predict 1 output, so:
            1. Is it impossible?
            2. For example X_train = [ [6], [7], [8] ] we want result Predict Y_train = [ [7], [8], [9] ], but at step of processing data, we scale data X_train = scaler.tranform(X_train). Now, we have new data a = [9], how can we predict?

            Thanks!

          • Avatar
            Jason Brownlee April 24, 2020 at 5:41 am #

            If the model expects 3 inputs, then during training and during inference, you must provide 3 inputs.

          • Avatar
            Michael Nguyen April 24, 2020 at 12:30 pm #

            No, i mean X_train has shape [n,1,1]. I scale X_train by standard tranform and want to predict output with [n,1,1] shape. Now with new data [1,1,1], how to predict?

          • Avatar
            Jason Brownlee April 24, 2020 at 1:22 pm #

            Call it directly, e.g.

  46. Avatar
    jay April 17, 2020 at 3:03 pm #

    PLease how do I use the model to forecast future dates

  47. Avatar
    Daniel S. April 17, 2020 at 7:58 pm #

    Hey Jason,

    very nice tutorial and explanation.
    I’m using a multivariate LSTM approach with multiple LSTM layers including selfattention for time series data. I have 60 time stamps, 12 variables and about 30k samples. I’m using the data to predict the gain for time stamp 65. The gain is defined as the percentage difference between the closing value at time stamp 60 and the closing value at time stamp 65. This is defined as the target variable. In this case LSTM is not really used to predict the future rather then a specific target variable. Therefore, I think I don’t use the full potential of LSTM.
    Lets say I would like to tell the model that 11 of these variables influence lets say the closing value and then use LSTM to predict the closing value for time stamp 65. How can I implement that? In short, what would be the synthax to tell the model that 11 specific variables influence the closing value and based on that, what would be the prediction of the closing value for time stamp 65?

    Thanks in advance,

    Daniel

  48. Avatar
    Rajrudra April 22, 2020 at 1:52 am #

    Good

  49. Avatar
    Onur May 3, 2020 at 10:39 pm #

    Hi Jason ,

    Thanks for great your post. How can I see the future when the true outcome for the sequence is currently unknown.I couldn’t find anything about this question .

    For example ; I have a dataset between on 2018-2019 but ı want to see the predicts on 2020.

  50. Avatar
    Mezgebe abebe May 25, 2020 at 2:10 pm #

    Thnks for your post, but how to trian the model with GUI?

  51. Avatar
    Ani May 27, 2020 at 1:10 pm #

    Hi,I have a dataset.
    At time 0- 3 rows 4 columns
    At time 1-3 rows 4 columns
    I want to predict
    At time 2
    The value of last row,last column element

  52. Avatar
    Johnny Liu June 19, 2020 at 12:13 pm #

    Hi Jason,
    Thanks for your post. Your teaching of deep learning is really useful!

    I am implementing a RNN model for predicting the output based on the previous output (1 time step) and 3 input.

    In your post “Multivariate Time Series Forecasting with LSTMs in Keras”, you taught us to use the previous output as one of the input. However, our output “variable 1” is blank at the beginning. We would not know the output before the prediction.

    In this case, how can we prepare the input and output for prediction?
    For example, var1 is output, var2, var3 and var4 are input.
    var1[t-1], var2[t-1], var3[t-1] and var4[t-1] are used for predicting var1[t]
    However, I have only the initial value of var1 which is 8*10^6.

    How can I update the value of var1[t-1] with the predicted output during the prediction?
    Without update, all the rows of var1 will be NAN (not a number) except the first row with initial value. My model can predict nothing with NAN.

    For scaling (0~1), it is also a problem to have only the initial value.

    My RNN_save_model.py is simply like your code in “Multivariate Time Series Forecasting with LSTMs in Keras”. There is no problem for training and testing as var1 (output) is already known in the training data.

    Thanks for your time!

    • Avatar
      Jason Brownlee June 19, 2020 at 1:14 pm #

      The first row must be removed as their is no prior observation.

      • Avatar
        Johnny Liu June 19, 2020 at 1:26 pm #

        Thanks for response.
        My question may be not clear enough.

        I am confused that how can I feedback output to input using RNN.

        I need var1[t-1], var2[t-1], var3[t-1] and var4[t-1] to predict var1[t]
        However, var1[t-1] is not known until var1[t-1] is predicted using var1[t-2].

        I only have the initial value var1[0]. var1[1], var1[2], var[3], ……. are unknown until the prediction is done.

        var2, var3 and var4 are already known.

        My goal is to predict var1 using LSTM.

        • Avatar
          Jason Brownlee June 20, 2020 at 6:04 am #

          You can do it manually one sample at a time. E.g. predict then construct the next sample, then predict. This is called a recursive model.

          • Avatar
            Johnny Liu June 20, 2020 at 2:37 pm #

            Thanks for response!

            If I am going to use a recursive model, can I keep the same method of training and testing before saving the model?

            Should I train and test the model using another method?

            I am using the same method to train and test my model like this post: https://machinelearningmastery.com/multivariate-time-series-forecasting-lstms-keras/#comment-540267

            You can assume that I am using the same code in the above link to make prediction using the save and load method in this post.

          • Avatar
            Jason Brownlee June 21, 2020 at 6:18 am #

            I would recommend evaluating the model in an identical manner in which you intend to you use.

          • Avatar
            Johnny Liu June 23, 2020 at 6:21 pm #

            Hi Jason, thanks for your suggestion

            I have tried to make prediction with LSTM model and feedback the prior predicted output to current input.

            However, I cannot invert scaling for forecast as I have only the initial value (8*10^6) for the output. I used it to predict the next output and use the next output to predict the further output. There is other known input data like temperature and humidity.

            Let’s call the output “stiffness”. stiffness(t) is the output and stiffness(t-1) is one of the input to predict stiffness(t).

            The initial value of stiffness is scale to 0 in the scaling between 0 and 1 as only the first sample exists among 10000 samples at the beginning.

            At a result, after the prediction and invert scaling, it always give me a horizontal line of 8*10^6.

            How can I turn the scale between 0 and 1 back to the original scaling in this case?

          • Avatar
            Jason Brownlee June 24, 2020 at 6:25 am #

            One thought is run the transform manually to understand what is going on and ensure you are applying it on the data in the same form both times.

  53. Avatar
    Gopal Netrakanti August 9, 2020 at 3:46 am #

    Hi Mr.Brownlee,

    Thank you for the excellent article. I have tried following the steps mentioned here and a few others which i have come across in other places and built a model but I have encountered some issues which i have mentioned in the following question on stack overflow:

    https://stackoverflow.com/questions/63318474/training-output-drops-to-0-after-fixed-timesteps-
    and-again-retrains-in-lstm-mode

    I’d appreciate it if you could have a look and help me out.

    Thanks and Regards,
    Gopal.

  54. Avatar
    Konrad December 14, 2020 at 4:48 am #

    Hi Jason! Awesome tutorial. I have a question, how can I try to reuse KERAS if my training data set is in this format :

    [
    [2, 5, 6, 15, 22],
    [12, 8, 2, 33, 44],

    ]

    in general 5 numbers, 1-49 each of them.

  55. Avatar
    harish February 9, 2021 at 6:38 pm #

    how can i forecast for next upcoming 20 days by using lstm,can u explain it briefly?

  56. Avatar
    farzaneh February 11, 2021 at 6:26 am #

    I have a question
    LSTM will be training with train and test data, how I can change the code to forecast value for the next 10 step

    • Avatar
      Jason Brownlee February 11, 2021 at 7:52 am #

      See the above tutorial on exactly this topic of how to make predictions.

  57. Avatar
    Deepak Verma March 10, 2021 at 4:37 pm #

    Hey,

    I finalise my Two Layer LSTM Model for regression problem but now when I’m making prediction on new inputs model is giving same output values for many inputs. How to rectify it.

    Other doubt is for same input model is giving different outputs at different time.

    • Avatar
      Jason Brownlee March 11, 2021 at 5:09 am #

      Sorry, I don’t understand what the problem could be with your model, perhaps you could elaborate on your problem?

      Yes, the LSTM has internal state and can give different output depending on the internal state. You must be careful to manage/reset the state at the end of sequences or when appropriate.

  58. Avatar
    Milan April 29, 2021 at 1:53 pm #

    Hello could u please tell me how to give a single vector and get the predictions from lstm network mentioned above

  59. Avatar
    Choco May 13, 2021 at 4:25 am #

    Hello, Mr. Brownlee
    I have a question that I can’t find an answer to. If I trained my model with LSTM layers to predict the next value based on, say, 20 previous, will it give me “wrong” results, if I feed the trained model an input that is of the size 7 (so it’s less than it has to be). I tried it and got this:

    WARNING:tensorflow:Model was constructed with shape (None, 20, 1) for input Tensor(“lstm_input:0”, shape=(None, 20, 1), dtype=float32), but it was called on an input with incompatible shape (None, 7, 1)

    So it gave me output, but does my LSTM predicts like it is “supposed to” on those 7 values or maybe there appears some kind of noise to compensate the lack of data? I don’t know if my question makes sence, but I really hope you could answer it. I’m trying to predict the next notes in music piece and it would be great if could input less notes and still get relevant results.

    • Avatar
      Jason Brownlee May 13, 2021 at 6:07 am #

      Yes, you must design your model to take the data size/shape that you will have available at the time a prediction is required.

  60. Avatar
    Choco May 13, 2021 at 7:21 am #

    Thank you for your reply!

  61. Avatar
    Echo Echo July 17, 2022 at 6:44 pm #

    Hello Jason,

    For a seq2seq model (https://machinelearningmastery.com/develop-encoder-decoder-model-sequence-sequence-prediction-keras/), after an inference model created, I notice we’d self-define a predict_sequence function for doing prediction.

    Why can’t we use Keras built-in functions, similar to the predict_classes() used here?

    • Avatar
      James Carmichael July 18, 2022 at 8:29 am #

      Hi Echo…This was done for demonstration purposes. I would encourage you to also investigate using Tensorflow/Keras functions as well.

  62. Avatar
    Ron November 18, 2022 at 9:05 am #

    Hi Jason,

    Thanks for posting this interesting article.

    I used a sliding window to turn my sequence into a supervised learning problem. I input the last 5 minutes of observations and my model predicts the next minute. How many forecasts can I make before I need to retrain my model with new data?

  63. Avatar
    sobhan June 20, 2023 at 9:55 pm #

    Hi Mr. Brownlee. Thank you for the tutorials.
    I load a LSTM model which I trained and saved it before:

    model = tf.keras.models.load_model(‘model_x.h5’, compile=False)

    and I try to predict a dataset:

    y = model.predict(X)

    input is:

    X = [x1(t-n), x2(t-n), y(t-n)
    …
    x1(t-1), x2(t-1), y(t-1)]

    and output (which I want to predict) is:

    y = [y(t-n+1)
    …
    y(t)]

    if real output is y and predicted one is yp, which statement is true:

    a) Network uses first row of X (x1(t-n), x2(t-n), y(t-n)) to predict output (yp(t-n+1)) and then uses second row of X (x1(t-n+1), x2(t-n+1), y(t-n+1)) to predict yp(t-n+2) and so on.

    b) Network uses first row of X (x1(t-n), x2(t-n), y(t-n)) to predict output (yp(t-n+1)) and then uses this output (yp(t-n+1)) to predict yp(t-n+2) and so on.

Leave a Reply