How to Scale Data for Long Short-Term Memory Networks in Python

The data for your sequence prediction problem probably needs to be scaled when training a neural network, such as a Long Short-Term Memory recurrent neural network.

When a network is fit on unscaled data that has a range of values (e.g. quantities in the 10s to 100s), it is possible for large inputs to slow down the learning and convergence of your network, and in some cases to prevent the network from effectively learning your problem.

In this tutorial, you will discover how to normalize and standardize your sequence prediction data and how to decide which to use for your input and output variables.

After completing this tutorial, you will know:

  • How to normalize and standardize sequence data in Python.
  • How to select the appropriate scaling for input and output variables.
  • Practical considerations when scaling sequence data.

Let’s get started.

Photo by Mathias Appel, some rights reserved.

Tutorial Overview

This tutorial is divided into 4 parts; they are:

  1. Scaling Series Data
  2. Scaling Input Variables
  3. Scaling Output Variables
  4. Practical Considerations When Scaling

Scaling Series Data in Python

There are two types of scaling of your series that you may want to consider: normalization and standardization.

These can both be achieved using the scikit-learn library.

Normalize Series Data

Normalization is a rescaling of the data from the original range so that all values are within the range of 0 and 1.

Normalization requires that you know or are able to accurately estimate the minimum and maximum observable values. You may be able to estimate these values from your available data. If your time series is trending up or down, estimating these expected values may be difficult and normalization may not be the best method to use on your problem.

A value is normalized as follows:

y = (x - min) / (max - min)

Where the minimum and maximum values pertain to the value x being normalized.

For example, for a dataset, we could guesstimate the min and max observable values as -10 and 30. We can then normalize any value, like 18.8, as follows:

y = (x - min) / (max - min)
y = (18.8 - (-10)) / (30 - (-10))
y = 28.8 / 40
y = 0.72

You can see that if an x value is provided that is outside the bounds of the minimum and maximum values, the resulting value will not be in the range of 0 and 1. You could check for these observations prior to making predictions and either remove them from the dataset or limit them to the pre-defined maximum or minimum values.
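For example, a small sketch of limiting out-of-range values with NumPy's clip function might look as follows (the bounds and the new observations here are assumed for illustration):

from numpy import clip

# assumed bounds, taken from the same estimates used to fit the scaler
min_value, max_value = -10.0, 30.0

# limit new observations to the known bounds before scaling them
new_values = [35.0, 18.8, -12.0]
limited = clip(new_values, min_value, max_value)
print(limited)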

You can normalize your dataset using the scikit-learn object MinMaxScaler.

Good practice usage with the MinMaxScaler and other scaling techniques is as follows:

  • Fit the scaler using available training data. For normalization, this means the training data will be used to estimate the minimum and maximum observable values. This is done by calling the fit() function.
  • Apply the scale to training data. This means you can use the normalized data to train your model. This is done by calling the transform() function.
  • Apply the scale to data going forward. This means you can prepare new data in the future on which you want to make predictions.

If needed, the transform can be inverted. This is useful for converting predictions back into their original scale for reporting or plotting. This can be done by calling the inverse_transform() function.

Below is an example of normalizing a contrived sequence of 10 quantities.

The scaler object requires data to be provided as a matrix of rows and columns. The time series data is loaded as a Pandas Series, so it must be reshaped into a one-column matrix before it can be scaled.
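A minimal sketch of such an example is shown below; the contrived sequence of values 10.0 through 100.0 is an assumption, chosen to match the min and max reported in the output described next.

from pandas import Series
from sklearn.preprocessing import MinMaxScaler

# define a contrived series of 10 quantities
data = [10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0, 90.0, 100.0]
series = Series(data)
print(series)

# reshape the series values into a one-column matrix for the scaler
values = series.values
values = values.reshape((len(values), 1))

# fit the scaler on the available (training) data
scaler = MinMaxScaler(feature_range=(0, 1))
scaler = scaler.fit(values)
print('Min: %f, Max: %f' % (scaler.data_min_[0], scaler.data_max_[0]))

# normalize the series
normalized = scaler.transform(values)
print(normalized)

# invert the transform back to the original scale
inversed = scaler.inverse_transform(normalized)
print(inversed)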

Running the example prints the sequence, prints the min and max values estimated from the sequence, prints the normalized sequence, then prints the values back in their original scale using the inverse transform.

We can also see that the minimum and maximum values of the dataset are 10.0 and 100.0 respectively.


Standardize Series Data

Standardizing a dataset involves rescaling the distribution of values so that the mean of observed values is 0 and the standard deviation is 1.

This can be thought of as subtracting the mean value or centering the data.

Like normalization, standardization can be useful, and even required in some machine learning algorithms when your data has input values with differing scales.

Standardization assumes that your observations fit a Gaussian distribution (bell curve) with a well behaved mean and standard deviation. You can still standardize your time series data if this expectation is not met, but you may not get reliable results.

Standardization requires that you know or are able to accurately estimate the mean and standard deviation of observable values. You may be able to estimate these values from your training data.

A value is standardized as follows:

y = (x - mean) / standard_deviation

Where the mean is calculated as:

mean = sum(x) / count(x)

And the standard_deviation is calculated as:

standard_deviation = sqrt( sum( (x - mean)^2 ) / count(x) )

We can guesstimate a mean of 10 and a standard deviation of about 5. Using these values, we can standardize the first value of 20.7 as follows:

y = (20.7 - 10) / 5
y = 10.7 / 5
y = 2.14

The mean and standard deviation estimates of a dataset can be more robust to new data than the minimum and maximum.

You can standardize your dataset using the scikit-learn object StandardScaler.
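A minimal sketch is shown below; the contrived sequence is an assumption, chosen so that its mean and standard deviation land near the values reported in the output described next.

from math import sqrt
from pandas import Series
from sklearn.preprocessing import StandardScaler

# define a contrived series
data = [1.0, 5.5, 9.0, 2.6, 8.8, 3.0, 4.1, 7.9, 6.3]
series = Series(data)
print(series)

# reshape the series values into a one-column matrix for the scaler
values = series.values
values = values.reshape((len(values), 1))

# fit the scaler on the available (training) data
scaler = StandardScaler()
scaler = scaler.fit(values)
print('Mean: %f, StandardDeviation: %f' % (scaler.mean_[0], sqrt(scaler.var_[0])))

# standardize the series
standardized = scaler.transform(values)
print(standardized)

# invert the transform back to the original scale
inversed = scaler.inverse_transform(standardized)
print(inversed)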

Running the example prints the sequence, prints the mean and standard deviation estimated from the sequence, prints the standardized values, then prints the values back in their original scale.

We can see that the estimated mean and standard deviation were about 5.3 and 2.7 respectively.

Scaling Input Variables

The input variables are those that the network takes on the input or visible layer in order to make a prediction.

A good rule of thumb is that input variables should be small values, probably in the range of 0-1 or standardized with a zero mean and a standard deviation of one.

Whether input variables require scaling depends on the specifics of your problem and of each variable. Let’s look at some examples.

Categorical Inputs

You may have a sequence of categorical inputs, such as letters or statuses.

Generally, categorical inputs are first integer encoded then one hot encoded. That is, a unique integer value is assigned to each distinct possible input, then a binary vector of ones and zeros is used to represent each integer value.

By definition, a one hot encoding will ensure that each input is a small real value, in this case 0.0 or 1.0.
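As a sketch of the two steps, assuming a short contrived sequence of status labels and Keras's to_categorical utility for the one hot step:

from numpy import array
from sklearn.preprocessing import LabelEncoder
from keras.utils import to_categorical

# contrived sequence of categorical statuses (assumed values)
statuses = array(['cold', 'cold', 'warm', 'hot', 'warm', 'cold'])

# integer encode: assign a unique integer to each distinct value
label_encoder = LabelEncoder()
integer_encoded = label_encoder.fit_transform(statuses)
print(integer_encoded)

# one hot encode: represent each integer as a binary vector of ones and zeros
one_hot = to_categorical(integer_encoded)
print(one_hot)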

Real-Valued Inputs

You may have a sequence of quantities as inputs, such as prices or temperatures.

If the distribution of the quantity is normal, then it should be standardized; otherwise, the series should be normalized. This applies whether the range of quantity values is large (10s, 100s, etc.) or small (0.01, 0.0001, etc.).

If the quantity values are small (near 0-1) and the distribution is limited (e.g. standard deviation near 1) then perhaps you can get away with no scaling of the series.

Other Inputs

Problems can be complex and it may not be clear how to best scale input data.

If in doubt, normalize the input sequence. If you have the resources, explore modeling with the raw data, standardized data, and normalized data, and see if there is a beneficial difference.

If the input variables are combined linearly, as in an MLP [Multilayer Perceptron], then it is rarely strictly necessary to standardize the inputs, at least in theory. … However, there are a variety of practical reasons why standardizing the inputs can make training faster and reduce the chances of getting stuck in local optima.

Should I normalize/standardize/rescale the data? Neural Nets FAQ

Scaling Output Variables

The output variable is the variable predicted by the network.

You must ensure that the scale of your output variable matches the scale of the activation function (transfer function) on the output layer of your network.

If your output activation function has a range of [0,1], then obviously you must ensure that the target values lie within that range. But it is generally better to choose an output activation function suited to the distribution of the targets than to force your data to conform to the output activation function.

Should I normalize/standardize/rescale the data? Neural Nets FAQ

The following heuristics should cover most sequence prediction problems:

Binary Classification Problem

If your problem is a binary classification problem, then the output will be class values 0 and 1. This is best modeled with a sigmoid activation function on the output layer. Output values will be real values between 0 and 1 that can be snapped to crisp values.

Multi-class Classification Problem

If your problem is a multi-class classification problem, then the output will be a vector of binary class values between 0 and 1, one output per class value. This is best modeled with a softmax activation function on the output layer. Again, output values will be real values between 0 and 1 that can be snapped to crisp values.

Regression Problem

If your problem is a regression problem, then the output will be a real value. This is best modeled with a linear activation function. If the distribution of the value is normal, then you can standardize the output variable. Otherwise, the output variable can be normalized.
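As a hedged sketch of this idea, the example below normalizes a contrived set of regression targets and then inverts stand-in predictions back to the original units (the target values and the pretend predictions are assumptions for illustration):

from numpy import array
from sklearn.preprocessing import MinMaxScaler

# contrived regression targets (assumed values)
y = array([12.0, 45.0, 33.0, 78.0, 51.0]).reshape(-1, 1)

# fit the scaler on the training targets only
target_scaler = MinMaxScaler(feature_range=(0, 1))
y_scaled = target_scaler.fit_transform(y)
print(y_scaled)

# after the model predicts in the scaled space, invert the predictions;
# here the scaled targets stand in for hypothetical model output
predictions_scaled = y_scaled
predictions = target_scaler.inverse_transform(predictions_scaled)
print(predictions)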

Other Problem

There are many other activation functions that may be used on the output layer and the specifics of your problem may add confusion.

The rule of thumb is to ensure that the network outputs match the scale of your data.

Practical Considerations When Scaling

There are some practical considerations when scaling sequence data.

  • Estimate Coefficients. You can estimate coefficients (min and max values for normalization or mean and standard deviation for standardization) from the training data. Inspect these first-cut estimates and use domain knowledge or domain experts to help improve these estimates so that they will be usefully correct on all data in the future.
  • Save Coefficients. You will need to normalize new data in the future in exactly the same way as the data used to train your model. Save the coefficients used to file and load them later when you need to scale new data when making predictions (see the sketch after this list).
  • Data Analysis. Use data analysis to help you better understand your data. For example, a simple histogram can help you quickly get a feeling for the distribution of quantities to see if standardization would make sense.
  • Scale Each Series. If your problem has multiple series, treat each as a separate variable and in turn scale them separately.
  • Scale At The Right Time. It is important to apply any scaling transforms at the right time. For example, if you have a series of quantities that is non-stationary, it may be appropriate to scale after first making your data stationary. It would not be appropriate to scale the series after it has been transformed into a supervised learning problem as each column would be handled differently, which would be incorrect.
  • Scale if in Doubt. You probably do need to rescale your input and output variables. If in doubt, at least normalize your data.
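As a minimal sketch of the Save Coefficients point above, a fitted scaler can be written to file with Python's pickle module and loaded again when new data needs to be scaled (the file name scaler.pkl and the contrived values are assumptions):

from pickle import dump, load
from numpy import array
from sklearn.preprocessing import MinMaxScaler

# fit the scaler on the available training data
train = array([10.0, 20.0, 30.0, 40.0, 50.0]).reshape(-1, 1)
scaler = MinMaxScaler(feature_range=(0, 1))
scaler.fit(train)

# save the fitted scaler (and therefore its coefficients) to file
dump(scaler, open('scaler.pkl', 'wb'))

# ... later, load the same scaler and apply it to new data before predicting
scaler = load(open('scaler.pkl', 'rb'))
new_data = array([15.0, 35.0, 60.0]).reshape(-1, 1)
print(scaler.transform(new_data))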


Summary

In this tutorial, you discovered how to scale your sequence prediction data when working with Long Short-Term Memory recurrent neural networks.

Specifically, you learned:

  • How to normalize and standardize sequence data in Python.
  • How to select the appropriate scaling for input and output variables.
  • Practical considerations when scaling sequence data.

Do you have any questions about scaling sequence prediction data?
Ask your question in the comments and I will do my best to answer.



13 Responses to How to Scale Data for Long Short-Term Memory Networks in Python

  1. Jack Sheffield July 7, 2017 at 6:33 am #

    Thanks for the post Jason, nice and succinct walk-through on how to scale data. I wanted to share a great course on Experfy that covers Machine Learning, especially supervised learning that I’ve found super helpful in understanding all of this

  2. Anthony The Koala July 7, 2017 at 9:35 am #

    Dear Dr Jason,
    When making predictions using the scaled data, do you have to unscale the data, using the

    OR

    Thank you

    • Jason Brownlee July 9, 2017 at 10:34 am #

      After the prediction, yes, in order to make use of it or to have error scores in the correct scale for an apples-to-apples comparison of models.

  3. Natallia Lundqvist July 7, 2017 at 10:28 pm #

    Hi Jason, thank you once again for sharing your great ideas! I work with a seq2seq application on text input of variable length with a very large vocabulary (several thousand entries). Obviously, padding and one_hot_encode are necessary in this case. If one uses keras.tokenizer.texts_to_sequences(…) and then keras.tokenizer.sequences_to_matrix(sequence, mode='binary'), one gets a 2D-tensor which cannot be fed directly into LSTMs.

    For example:
    seq_test = tokenizer.texts_to_sequences(input_text_sequence)

    seq_test[:4]
    Out[16]: [[1, 2, 110], [23, 5, 150], [1, 3, 17], [8, 2, 218, 332]]

    X_test = tokenizer.sequences_to_matrix(seq_test, mode='binary')

    X_test[:4,:]
    Out[18]:
    array([[ 0., 1., 1., …, 0., 0., 0.],
    [ 0., 0., 0., …, 0., 0., 0.],
    [ 0., 1., 0., …, 0., 0., 0.],
    [ 0., 0., 1., …, 0., 0., 0.]])

    If one tries to pass a padded sequence into sequences_to_matrix, an error message is generated:

    File “C:….\keras\preprocessing\text.py”, line 262, in sequences_to_matrix
    if not seq:

    ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()

    Is it so that one has to do “one_hot_encoding” manually in order to make use of LSTMs in an encode-decode manner???

    On the other hand, I get very good convergence (>99%) if I don’t do one_hot_encode and use a network architecture similar to http://machinelearningmastery.com/sequence-classification-lstm-recurrent-neural-networks-python-keras/

    The problem arises with predictions since the last Dense layer has activation='sigmoid', which generates values between 0 and 1. How to make predictions in the form (Out[19]: [[1, 2, 110], [23, 5, 150], [1, 3, 17], [8, 2, 218]]) without one_hot_encode of the input_sequence???

    The last question. If one uses one_hot_encode of a sequence, embedding layer and convolution layer don’t make sense, right???

    • Jason Brownlee July 9, 2017 at 10:45 am #

      LSTM input is 3D [samples, timesteps, features]. Each sample will be one sequence of your input. Time steps are words or chars and features are the one hot encoded values.

  4. Liviu July 12, 2017 at 2:09 am #

    Hello and thank you for the tutorials ! Learned a lot from them.

    One question regarding scaling (or normalization): how can we make sure that the scaling results remains the same between different data sets? For example:
    – step 1: we use some data sets to train a model (with scaling data) and then we save the trained model for future use.
    – step 2: we import the model created at step 1 and used it to predict a prediction data set.

    But: the prediction data set must also be scaled. And more than that it must be scaled with the same scaling parameters (scaled “the same way”) used to scale the model trained at step 1. Am I wrong ?
    Or we somehow have to save the scaling object also and import it again to be used to scale the prediction data set at step 2 ?

    • Jason Brownlee July 12, 2017 at 9:49 am #

      Correct.

      It means you must estimate the scaling parameters carefully and save them for future use.

      • Lukas November 16, 2017 at 6:34 am #

        Great articles Jason! Thank you so much for your dedicated work.

        I am facing a similar problem to Liviu’s.

        How to scale the features and the target in the initial training data, supposing that in the future additional data will be available and is to be used to incrementally train the model?

        If the initially available data is scaled from 0 to 1 with using the maximum value available in the data, a new maximum would shift the whole scale the model is trained on and would therefore falsify the results.

        What I already know for sure is the maximum of the target to be higher in the future due to growth, but the final magnitude is absolutely not assessable.

        Do you have any suggestions how to solve this problem without rescaling the whole dataset with the new max and respectively retraining the model on the whole dataset?

        Thanks in advance and regards,
        Lukas

        • Jason Brownlee November 16, 2017 at 10:32 am #

          You can use domain knowledge to estimate the extreme min/max values that you are ever expected to see.

          Or use the same approach and estimate mean/stdev and standardize the data instead which might be more robust to large changes in scale over time.

  5. Emmanuel July 31, 2017 at 9:48 am #

    Thanks for the good work
