How to use Different Batch Sizes when Training and Predicting with LSTMs

Keras uses fast symbolic mathematical libraries as a backend, such as TensorFlow and Theano.

A downside of using these libraries is that the shape and size of your data must be defined once up front and held constant regardless of whether you are training your network or making predictions.

On sequence prediction problems, it may be desirable to use a large batch size when training the network and a batch size of 1 when making predictions in order to predict the next step in the sequence.

In this tutorial, you will discover how you can address this problem and even use different batch sizes during training and predicting.

After completing this tutorial, you will know:

  • How to design a simple sequence prediction problem and develop an LSTM to learn it.
  • How to vary an LSTM configuration for online and batch-based learning and predicting.
  • How to vary the batch size used for training from that used for predicting.

Let’s get started.

How to use Different Batch Sizes for Training and Predicting in Python with Keras
Photo by steveandtwyla, some rights reserved.

Tutorial Overview

This tutorial is divided into 6 parts, as follows:

  1. On Batch Size
  2. Sequence Prediction Problem Description
  3. LSTM Model and Varied Batch Size
  4. Solution 1: Online Learning (Batch Size = 1)
  5. Solution 2: Batch Forecasting (Batch Size = N)
  6. Solution 3: Copy Weights

Tutorial Environment

A Python 2 or 3 environment is assumed to be installed and working.

This includes SciPy with NumPy and Pandas. Keras version 2.0 or higher must be installed with either the TensorFlow or Theano backend.

For help setting up your Python environment, see the post:

On Batch Size

A benefit of using Keras is that it is built on top of symbolic mathematical libraries such as TensorFlow and Theano for fast and efficient computation. This efficiency is needed when training large neural networks.

A downside of using these efficient libraries is that you must define the shape of your data up front and keep it fixed. Specifically, this includes the batch size.

The batch size limits the number of samples to be shown to the network before a weight update can be performed. This same limitation is then imposed when making predictions with the fit model.

Specifically, the batch size used when fitting your model controls how many predictions you must make at a time.

This is often not a problem when you want to make the same number of predictions at a time as the batch size used during training.

This does become a problem when you wish to make fewer predictions than the batch size. For example, you may get the best results with a large batch size, but are required to make predictions for one observation at a time on something like a time series or sequence problem.

This is why it may be desirable to have a different batch size when fitting the network to training data than when making predictions on test data or new input data.

In this tutorial, we will explore different ways to solve this problem.

Sequence Prediction Problem Description

We will use a simple sequence prediction problem as the context to demonstrate solutions to varying the batch size between training and prediction.

A sequence prediction problem makes a good case for a varied batch size as you may want to have a batch size equal to the training dataset size (batch learning) during training and a batch size of 1 when making predictions for one-step outputs.

The sequence prediction problem involves learning to predict the next step in the following 10-step sequence:
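
0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9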

We can create this sequence in Python as follows:
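
# create a sequence of 10 values from 0.0 to 0.9
length = 10
sequence = [i/float(length) for i in range(length)]
print(sequence)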

Running the example prints our sequence:
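
[0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]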

We must convert the sequence to a supervised learning problem. That means when 0.0 is shown as an input pattern, the network must learn to predict the next step as 0.1.

We can do this in Python using the Pandas shift() function as follows:
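
A minimal sketch (the pairing assumes the input is the current value and the output is the next value in the sequence):

from pandas import DataFrame
from pandas import concat
# create the sequence
length = 10
sequence = [i/float(length) for i in range(length)]
# pair each value (input) with the next value in the sequence (output)
df = DataFrame(sequence)
df = concat([df.shift(1), df], axis=1)
# drop the first row, which has no input value
df.dropna(inplace=True)
print(df)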

Running the example shows all input and output pairs.

We will be using a recurrent neural network called a Long Short-Term Memory (LSTM) network to learn the sequence. As such, we must transform the input patterns from a 2D array (1 column with 9 rows) into a 3D array with the shape [samples, timesteps, features], where timesteps is 1 because we have only one timestep per observation on each row.

We can do this using the NumPy function reshape() as follows:
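
# split into input (X) and output (y) columns
values = df.values
X, y = values[:, 0], values[:, 1]
# reshape the input into the 3D format [samples, timesteps, features]
X = X.reshape(len(X), 1, 1)
print(X.shape)
print(y.shape)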

Running the example creates X and y arrays ready for use with an LSTM and prints their shape.

LSTM Model and Varied Batch Size

In this section, we will design an LSTM network for the problem.

The training batch size will cover the entire training dataset (batch learning) and predictions will be made one at a time (one-step prediction). We will show that although the model learns the problem, one-step predictions result in an error.

We will use an LSTM network fit for 1000 epochs.

The weights will be updated at the end of each training epoch (batch learning) meaning that the batch size will be equal to the number of training observations (9).

For these experiments, we will require fine-grained control over when the internal state of the LSTM is updated. Normally, LSTM state is cleared at the end of each batch in Keras, but we can control it by making the LSTM layer stateful and calling model.reset_states() to manage this state manually. This will be needed in later sections.

The network has one input, a hidden layer with 10 units, and an output layer with 1 unit. The default tanh activation functions are used in the LSTM units and a linear activation function in the output layer.

The mean squared error loss function is used for this regression problem, optimized with the efficient ADAM algorithm.

The example below configures and creates the network.
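
A minimal sketch of the network definition, assuming the data preparation above:

from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
# configure the network
n_batch = len(X)   # batch learning: one batch covers all 9 training samples
n_epoch = 1000
n_neurons = 10
# design the network: a stateful LSTM with a fixed batch size and a single output neuron
model = Sequential()
model.add(LSTM(n_neurons, batch_input_shape=(n_batch, X.shape[1], X.shape[2]), stateful=True))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')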

We will fit the network to all of the examples each epoch and reset the state of the network at the end of each epoch manually.
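
In code, this looks something like:

# fit the network, resetting the internal state manually at the end of each epoch
for i in range(n_epoch):
    model.fit(X, y, epochs=1, batch_size=n_batch, verbose=1, shuffle=False)
    model.reset_states()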

Finally, we will forecast each step in the sequence one at a time.

This requires a batch size of 1, which is different from the batch size of 9 used to fit the network, and will result in an error when the example is run.
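
A sketch of the forecasting loop:

# make one-step forecasts with a batch size of 1 (this will fail for a network built with a batch size of 9)
for i in range(len(X)):
    testX, testy = X[i], y[i]
    testX = testX.reshape(1, 1, 1)
    yhat = model.predict(testX, batch_size=1)
    print('>Expected=%.1f, Predicted=%.1f' % (testy, yhat))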

Below is the complete code example.
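
A minimal sketch of the complete example, assuming the data preparation and network configuration described above:

from pandas import DataFrame
from pandas import concat
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
# create the sequence
length = 10
sequence = [i/float(length) for i in range(length)]
# create X/y pairs: the input is the current value, the output is the next value
df = DataFrame(sequence)
df = concat([df.shift(1), df], axis=1)
df.dropna(inplace=True)
# convert to an LSTM friendly format: [samples, timesteps, features]
values = df.values
X, y = values[:, 0], values[:, 1]
X = X.reshape(len(X), 1, 1)
# configure the network
n_batch = len(X)
n_epoch = 1000
n_neurons = 10
# design the network
model = Sequential()
model.add(LSTM(n_neurons, batch_input_shape=(n_batch, X.shape[1], X.shape[2]), stateful=True))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
# fit the network with batch learning, resetting state at the end of each epoch
for i in range(n_epoch):
    model.fit(X, y, epochs=1, batch_size=n_batch, verbose=1, shuffle=False)
    model.reset_states()
# make one-step forecasts with a batch size of 1 (this raises an error)
for i in range(len(X)):
    testX, testy = X[i], y[i]
    testX = testX.reshape(1, 1, 1)
    yhat = model.predict(testX, batch_size=1)
    print('>Expected=%.1f, Predicted=%.1f' % (testy, yhat))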

Running the example fits the model fine and results in an error when making a prediction.

The error reported is as follows:

Solution 1: Online Learning (Batch Size = 1)

One solution to this problem is to fit the model using online learning.

This is where the batch size is set to a value of 1 and the network weights are updated after each training example.

This can have the effect of faster learning, but also adds instability to the learning process as the weights vary widely with each batch.

Nevertheless, this will allow us to make one-step forecasts on the problem. The only change required is setting n_batch to 1 as follows:
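
# use online learning: the weights are updated after each training sample
n_batch = 1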

The complete code listing is provided below.
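
A minimal sketch of the complete online learning example:

from pandas import DataFrame
from pandas import concat
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
# create the sequence
length = 10
sequence = [i/float(length) for i in range(length)]
# create X/y pairs: the input is the current value, the output is the next value
df = DataFrame(sequence)
df = concat([df.shift(1), df], axis=1)
df.dropna(inplace=True)
# convert to an LSTM friendly format: [samples, timesteps, features]
values = df.values
X, y = values[:, 0], values[:, 1]
X = X.reshape(len(X), 1, 1)
# configure the network for online learning
n_batch = 1
n_epoch = 1000
n_neurons = 10
# design the network
model = Sequential()
model.add(LSTM(n_neurons, batch_input_shape=(n_batch, X.shape[1], X.shape[2]), stateful=True))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
# fit the network one sample at a time, resetting state at the end of each epoch
for i in range(n_epoch):
    model.fit(X, y, epochs=1, batch_size=n_batch, verbose=1, shuffle=False)
    model.reset_states()
# make one-step forecasts
for i in range(len(X)):
    testX, testy = X[i], y[i]
    testX = testX.reshape(1, 1, 1)
    yhat = model.predict(testX, batch_size=n_batch)
    print('>Expected=%.1f, Predicted=%.1f' % (testy, yhat))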

Running the example prints the 9 expected outcomes and the correct predictions.

Solution 2: Batch Forecasting (Batch Size = N)

Another solution is to make all predictions at once in a batch.

This means we could be very limited in how the model is used.

We would have to use all predictions made at once, or only keep the first prediction and discard the rest.

We can adapt the example for batch forecasting by predicting with a batch size equal to the training batch size, then enumerating the batch of predictions, as follows:
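
# forecast the whole sequence in a single batch, then step through the predictions
yhat = model.predict(X, batch_size=n_batch)
for i in range(len(y)):
    print('>Expected=%.1f, Predicted=%.1f' % (y[i], yhat[i]))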

The complete example is listed below.
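
A minimal sketch of the complete batch forecasting example:

from pandas import DataFrame
from pandas import concat
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
# create the sequence
length = 10
sequence = [i/float(length) for i in range(length)]
# create X/y pairs: the input is the current value, the output is the next value
df = DataFrame(sequence)
df = concat([df.shift(1), df], axis=1)
df.dropna(inplace=True)
# convert to an LSTM friendly format: [samples, timesteps, features]
values = df.values
X, y = values[:, 0], values[:, 1]
X = X.reshape(len(X), 1, 1)
# configure the network
n_batch = len(X)
n_epoch = 1000
n_neurons = 10
# design the network
model = Sequential()
model.add(LSTM(n_neurons, batch_input_shape=(n_batch, X.shape[1], X.shape[2]), stateful=True))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
# fit the network with batch learning, resetting state at the end of each epoch
for i in range(n_epoch):
    model.fit(X, y, epochs=1, batch_size=n_batch, verbose=1, shuffle=False)
    model.reset_states()
# make predictions for the entire training dataset in a single batch
yhat = model.predict(X, batch_size=n_batch)
for i in range(len(y)):
    print('>Expected=%.1f, Predicted=%.1f' % (y[i], yhat[i]))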

Running the example prints the expected and correct predicted values.

Solution 3: Copy Weights

A better solution is to use different batch sizes for training and predicting.

The way to do this is to copy the weights from the fit network and to create a new network with the pre-trained weights.

We can do this easily enough using the get_weights() and set_weights() functions in the Keras API, as follows:
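
# re-define the network with a batch size of 1 and copy in the trained weights
n_batch = 1
new_model = Sequential()
new_model.add(LSTM(n_neurons, batch_input_shape=(n_batch, X.shape[1], X.shape[2]), stateful=True))
new_model.add(Dense(1))
# copy the weights from the fit network
old_weights = model.get_weights()
new_model.set_weights(old_weights)
# compile the new network
new_model.compile(loss='mean_squared_error', optimizer='adam')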

This creates a new model that is compiled with a batch size of 1. We can then use this new model to make one-step predictions:
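
# make one-step forecasts with the new model and a batch size of 1
for i in range(len(X)):
    testX, testy = X[i], y[i]
    testX = testX.reshape(1, 1, 1)
    yhat = new_model.predict(testX, batch_size=n_batch)
    print('>Expected=%.1f, Predicted=%.1f' % (testy, yhat))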

The complete example is listed below.
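
A minimal sketch of the complete example, training with a batch size equal to the size of the training data and predicting with a batch size of 1:

from pandas import DataFrame
from pandas import concat
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
# create sequence
length = 10
sequence = [i/float(length) for i in range(length)]
# create X/y pairs
df = DataFrame(sequence)
df = concat([df.shift(1), df], axis=1)
df.dropna(inplace=True)
# convert to LSTM friendly format
values = df.values
X, y = values[:, 0], values[:, 1]
X = X.reshape(len(X), 1, 1)
# configure network
n_batch = len(X)
n_epoch = 1000
n_neurons = 10
# design network
model = Sequential()
model.add(LSTM(n_neurons, batch_input_shape=(n_batch, X.shape[1], X.shape[2]), stateful=True))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
# fit network
for i in range(n_epoch):
    model.fit(X, y, epochs=1, batch_size=n_batch, verbose=1, shuffle=False)
    model.reset_states()
# re-define the batch size
n_batch = 1
# re-define model
new_model = Sequential()
new_model.add(LSTM(n_neurons, batch_input_shape=(n_batch, X.shape[1], X.shape[2]), stateful=True))
new_model.add(Dense(1))
# copy weights
old_weights = model.get_weights()
new_model.set_weights(old_weights)
# compile model
new_model.compile(loss='mean_squared_error', optimizer='adam')
# online forecast
for i in range(len(X)):
    testX, testy = X[i], y[i]
    testX = testX.reshape(1, 1, 1)
    yhat = new_model.predict(testX, batch_size=n_batch)
    print('>Expected=%.1f, Predicted=%.1f' % (testy, yhat))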

Running the example prints the expected, and again correctly predicted, values.

Summary

In this tutorial, you discovered how you can work around the need to vary the batch size used for training and prediction with the same network.

Specifically, you learned:

  • How to design a simple sequence prediction problem and develop an LSTM to learn it.
  • How to vary an LSTM configuration for online and batch-based learning and predicting.
  • How to vary the batch size used for training from that used for predicting.

Do you have any questions about batch size?
Ask your questions in the comments below and I will do my best to answer.

32 Responses to How to use Different Batch Sizes when Training and Predicting with LSTMs

  1. Sam Taha May 16, 2017 at 5:06 am #

    Good tip. It is also useful to create another model just for evaluation of test dataset to compare RMSE between train/test.

  2. Kailash Ahirwar May 17, 2017 at 5:08 am #

    Could you explain the dimensions of the weight matrix for this model? Just curious and want to know. I am trying to understand how Keras stores weights.

    • Jason Brownlee May 17, 2017 at 8:43 am #

      You can print it out after compiling the model as follows:
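
      # a sketch: model.get_weights() returns a list of numpy arrays; print the shape of each
      for w in model.get_weights():
          print(w.shape)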

  3. Logan May 19, 2017 at 3:12 am #

    Hello, could you explain why you redefine the n_batch = 1 to 1? I thought it should be a different value, no?

    • Jason Brownlee May 19, 2017 at 8:19 am #

      In which case Logan?

      • Chris May 24, 2017 at 7:35 pm #

        I think he means lines 18 and 31 in the last complete example. Line 18 should be the following, no? n_batch = len(X)

  4. Jason Ho May 19, 2017 at 11:39 am #

    Hi, Dr. Jason Brownlee. Could you tell me the Keras version you used in this example? I tried to copy the code to run on my Mac, but it doesn't work.

  5. Zhiyu Wang May 27, 2017 at 3:03 pm #

    Hi, Jason Brownlee. Could you explain why you define n_batch=1 on line 18 of the last example? I think n_batch should be assigned a different value.
    I have tried to redefine n_batch=len(X), train the model, and copy the weights to the new model "new_model". But I did not get the right prediction result. Could you please help me find the reason?

    >Expected=0.0, Predicted=0.0
    >Expected=0.1, Predicted=0.1
    >Expected=0.2, Predicted=0.3
    >Expected=0.3, Predicted=0.5
    >Expected=0.4, Predicted=0.8
    >Expected=0.5, Predicted=1.1
    >Expected=0.6, Predicted=1.4
    >Expected=0.7, Predicted=1.7
    >Expected=0.8, Predicted=2.1

    The following is the code I used, which is the same as the last example except for line 18:

    from pandas import DataFrame
    from pandas import concat
    from keras.models import Sequential
    from keras.layers import Dense
    from keras.layers import LSTM
    # create sequence
    length = 10
    sequence = [i/float(length) for i in range(length)]
    # create X/y pairs
    df = DataFrame(sequence)
    df = concat([df, df.shift(1)], axis=1)
    df.dropna(inplace=True)
    # convert to LSTM friendly format
    values = df.values
    X, y = values[:, 0], values[:, 1]
    X = X.reshape(len(X), 1, 1)
    # configure network
    n_batch = len(X)
    n_epoch = 1000
    n_neurons = 10
    # design network
    model = Sequential()
    model.add(LSTM(n_neurons, batch_input_shape=(n_batch, X.shape[1], X.shape[2]), stateful=True))
    model.add(Dense(1))
    model.compile(loss='mean_squared_error', optimizer='adam')
    # fit network
    for i in range(n_epoch):
        model.fit(X, y, epochs=1, batch_size=n_batch, verbose=1, shuffle=False)
        model.reset_states()
    # re-define the batch size
    n_batch = 1
    # re-define model
    new_model = Sequential()
    new_model.add(LSTM(n_neurons, batch_input_shape=(n_batch, X.shape[1], X.shape[2]), stateful=True))
    new_model.add(Dense(1))
    # copy weights
    old_weights = model.get_weights()
    new_model.set_weights(old_weights)
    # compile model
    new_model.compile(loss='mean_squared_error', optimizer='adam')
    # online forecast
    for i in range(len(X)):
        testX, testy = X[i], y[i]
        testX = testX.reshape(1, 1, 1)
        yhat = new_model.predict(testX, batch_size=n_batch)
        print('>Expected=%.1f, Predicted=%.1f' % (testy, yhat))

  6. Kim Miller July 26, 2017 at 11:18 am #

    In testing I found that while a batch size of 1 looked like it was learning a pattern: http://tinyurl.com/ycutvy7h, a larger batch size often converges to a static prediction, much like linear regression (batch Size 5): http://tinyurl.com/yblzfus4

    That said, the latter example (batch size 5) actually has a lower RMSE. It leaves me wondering if I actually have something learnable here, or if the flat line indicates no pattern beyond a regression?

    • Jason Brownlee July 26, 2017 at 4:00 pm #

      Interesting.

      Some problems may not benefit from a complex model like an LSTM. That is why we baseline using simple methods – to see if we can add value.

      More complex/flexible is not always better.

      • Kim Miller July 29, 2017 at 6:39 am #

        Technically my problem might be a classification problem in that I really want to know, “Will tomorrow’s move be up or down?” Yet it’s not in the sense that magnitude matters. e.g. the following examples all have the same reward: a) correctly predicting an “up” tomorrow where truth was +6, b) predicting an “up” on 3 days where truth was +2, c) predicting “down” on two days that truth was -3.

        Thoughts on the best model for that kind of problem? Considering reinforcement learning next, e.g. https://github.com/matthiasplappert/keras-rl.

        • Jason Brownlee July 29, 2017 at 8:14 am #

          Brainstorm all possible framings, then evaluate each.

          • Kim Miller July 30, 2017 at 5:07 am #

            I’ve seen extremely few examples of practitioners taking the DQN RL concept beyond Gym gaming examples. Do you see any reason why time series forecasting could not be looked at like an Atari game, where the observed game state is replaced with our time series observations and we ask the agent to forecast (play one of two paddle positions) for tomorrow being “up” or “down” as described above? Does that sound like an incremental advance beyond what we’re doing in your more regression-oriented approach taken in this post?

          • Jason Brownlee July 30, 2017 at 7:51 am #

            You could try it, I have not thought deeply about the suitability of DRL on time series sorry.

  7. Saad August 3, 2017 at 6:16 pm #

    Very good post thank you Jason! I had this problem yesterday and your blog helped me solve it.

    I think you should set n_batch to a different value than 1 in the third solution, as brought up by @Zhiyu Wang, because you redefine it to 1 later in the code (line 31), so you didn't end up having different batch sizes between training and predicting.

    • Jason Brownlee August 4, 2017 at 6:55 am #

      Thanks Saad, I see. I’ve updated the example to *actually* use different batch sizes in the final example!

  8. Dimitar August 24, 2017 at 1:32 am #

    Hello Dr. Brownlee, as someone that has recently started with Machine Learning I would like to thank you for all the great content. Your blog is extremely helpful.

    As for the batches with different sizes – instead of providing “batch_input_shape” can’t we provide “input_shape” and then use “model.train_on_batch” and manually slice the inputs for each training step? We will also have to remove “stateful=True” but since the state is reset on each batch I believe it will still work the same. Something like this:

    model.add(LSTM(n_neurons, input_shape=(X.shape[1], X.shape[2])))

    for i in range(n_epoch):
        batch_start = 0
        for j in range(1, len(X) + 1):
            if j % n_batch == 0:
                model.train_on_batch(X[batch_start:j], y[batch_start:j])
                batch_start = j

    • Jason Brownlee August 24, 2017 at 6:45 am #

      I guess I was focused on showing how to do this when the LSTM is stateful.

      The risk is that you will cut down on sequence length, and impact BPTT.

      With everything, test and see how it fares on your problem.

  9. Jim Goodwin August 31, 2017 at 6:58 am #

    Hi Jason,

    I was having trouble with model.predict() in my stateful LSTM, and I finally got it to work thanks to what I learned from this page, thank you!

    However I’m still confused by one thing.

    In the intro at the top of the page, it says that when using Keras, “you must define the scope of your data upfront and for all time. Specifically, the batch size.”

    That seems to be true for stateful LSTM’s, not true for stateless LSTM’s, and I dunno about other RNN’s or the rest of Keras. Perhaps you could clarify.

    The reason I doubt it for stateless LSTM’s is that the example at

    https://github.com/fchollet/keras/blob/master/examples/lstm_text_generation.py

    works fine. The gist of it:

    model = Sequential()
    model.add(LSTM(128, input_shape=(maxlen, len(chars))))
    model.add(Dense(len(chars)))
    model.add(Activation('softmax'))

    model.fit(X, y, batch_size=128, epochs=1)

    x = np.zeros((1, maxlen, len(chars)))
    for t, char in enumerate(sentence):
        x[0, t, char_indices[char]] = 1.

    preds = model.predict(x, verbose=0)[0]

    So it specifies nothing about batch size when constructing the model; it trains it with an explicit batch size argument of 128; and it calls predict() without any batch size argument on a dataset whose batch size is 1. It seems like the model has not bound the batch size, and adapts dynamically to whatever data you give it.

    But I spent a lot of time trying to modify that example to work with a stateful LSTM, and failed. Evidently you are correct that for stateful LSTM’s, one cannot do that. One has to specify the batch size explicitly to add a stateful LSTM layer to the model, and after that the model is rigidly bound to that size and can neither train nor predict on data of any other batch size.

    Correct me if I’m wrong, I’m new at this.
    Thanks!

  10. satish September 11, 2017 at 11:14 pm #

    Hi Jason,

    Thanks for the great tutorial.

    The batch size limits the number of samples to be shown to the network before a weight update can be performed.

    "This same limitation is then imposed when making predictions with the fit model.

    Specifically, the batch size used when fitting your model controls how many predictions you must make at a time."

    Can you please elaborate on how the batch size affects prediction?

  11. ali September 30, 2017 at 8:13 am #

    Thanks a lot for your tutorial, it's very helpful.
    Can you please help me with my problem?

    I used your solution 3, i.e. copied the weights from the trained model to a new one,
    but while predicting with single-batch test data I still get the same error, i.e.:

    "AttributeError: 'list' object has no attribute 'shape'"

    my data:

    training input data has shape(5,2,3)
    test input data shape(1,2,3)

    training output data shape(5,2,5)
    expected output data shape(1,2,5)

    code:

    model = Sequential()
    model.add(LSTM(5, batch_input_shape=(5, 2, 3), unroll=True, return_sequences=True))
    model.add(Dense(5))
    model.compile(loss='mean_absolute_error', optimizer='adam')
    model.fit(x, o, nb_epoch=2000, batch_size=5, verbose=2)
    new_model = Sequential()
    new_model.add(LSTM(5, batch_input_shape=(1, 2, 3), unroll=True, return_sequences=True))
    new_model.add(Dense(5))
    old_weights = model.get_weights()
    new_model.set_weights(old_weights)
    new_model.compile(loss='mean_absolute_error', optimizer='adam')

    Thanks a lot in advance.

    • Jason Brownlee October 1, 2017 at 9:00 am #

      Glad to hear it.

      The error suggests perhaps the input data is a list rather than a numpy array. Try converting it to a numpy array.

  12. Troy October 4, 2017 at 12:59 pm #

    I tried using the third method, saving the weights of the model then defining a new model with a batch size equal to one. However, when I run the new model on the same data set, I find that I'm getting different results for a lot of the predictions, except the 1st prediction. If I train a model with batch size = 1, then creating a new model with the old model's weights gives identical predictions. Does a model with a different batch size treat the data in a fundamentally different way? Like if my batch size = 32, do predictions 1-32, 33-64, 65-96… predict using one state for each group, while a model with batch size 1 updates the state for each and every input?

    • Jason Brownlee October 4, 2017 at 3:39 pm #

      The batch size could impact the training; it controls the number of samples used to estimate the error gradient before an update.

      Perhaps that is what is going on in your case. Perhaps test with different batch sizes to see how sensitive your model is to the value?

      • Michael Dipperstein October 5, 2017 at 11:25 am #

        I’m seeing similar results. If I train on one batch size and copy the weights to a model with a different batch size, the new model’s prediction error is always worse.

        I decided to simplify my dataset. Now I'm working with 50 cosine periods with 1000 points per period, and I'm predicting the next point in the series from the current point. Simplifying the dataset didn't give me any better results.

        I’ve tried an assortment of batch sizes for the model that I train with and the model that I copy the weights to. The batch sizes don’t seem to make much of a difference.

        The good news is that copying the weights into a model with the same batch size doesn’t change anything, so I know I’m doing the copying correctly.

        • Jason Brownlee October 5, 2017 at 5:23 pm #

          It may be the stochastic nature of the neural net.

          Ensure you have a robust evaluation of your model first. I recommend this procedure:
          https://machinelearningmastery.com/evaluate-skill-deep-learning-models/

          • Michael Dipperstein October 6, 2017 at 11:03 am #

            Thank you for the lead. It was a big help.

            The model with the copied weights still performs worse, and I have better metrics to prove it.

            The good news is that I’ve also improved the model that comes out of the training, and that improvement shows up in the model with copied weights.

            Since I’m training on sequential data, I don’t allow it to be shuffled and I wasn’t shuffling it between epochs. Each epoch was just trained on the same sequence of data as the previous epoch.

            Your section on k-fold cross validation led me to try something similar that preserves the data sequence. Now I shift the start of the sequence between epochs and end up with a set of weights that seem to transfer better.

          • Jason Brownlee October 6, 2017 at 11:08 am #

            Nice one, thanks for letting me know Michael.
