How to Normalize and Standardize Time Series Data in Python

Some machine learning algorithms will achieve better performance if your time series data has a consistent scale or distribution.

Two techniques that you can use to consistently rescale your time series data are normalization and standardization.

In this tutorial, you will discover how you can apply normalization and standardization rescaling to your time series data in Python.

After completing this tutorial, you will know:

  • The limitations of normalization and expectations of your data for using standardization.
  • What parameters are required and how to manually calculate normalized and standardized values.
  • How to normalize and standardize your time series data using scikit-learn in Python.

Let’s get started.

How to Normalize and Standardize Time Series Data in Python. Photo by Sage Ross, some rights reserved.

Minimum Daily Temperatures Dataset

This dataset describes the minimum daily temperatures over 10 years (1981-1990) in the city of Melbourne, Australia.

The units are in degrees Celsius and there are 3,650 observations. The source of the data is credited as the Australian Bureau of Meteorology.

Below is a sample of the first 5 rows of data, including the header row.
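(The exact header text in the downloaded file may differ; the values below are indicative.)

"Date","Temperature"
"1981-01-01",20.7
"1981-01-02",17.9
"1981-01-03",18.8
"1981-01-04",14.6
"1981-01-05",15.8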

Below is a plot of the entire dataset taken from Data Market.

Minimum Daily Temperatures

The dataset shows a strong seasonality component and has nice, fine-grained detail to work with.

Download and learn more about the dataset here.

This tutorial assumes that the dataset is in your current working directory with the filename “daily-minimum-temperatures-in-me.csv“.

Note: The downloaded file contains some question mark (“?”) characters that must be removed before you can use the dataset. Open the file in a text editor and remove the “?” characters. Also remove any footer information in the file.


Normalize Time Series Data

Normalization is a rescaling of the data from the original range so that all values are within the range of 0 and 1.

Normalization can be useful, and even required in some machine learning algorithms, when your time series data has input values with differing scales. It may be required for algorithms like k-Nearest Neighbors, which uses distance calculations, and for Linear Regression and Artificial Neural Networks, which weight input values.

Normalization requires that you know or are able to accurately estimate the minimum and maximum observable values. You may be able to estimate these values from your available data. If your time series is trending up or down, estimating these expected values may be difficult and normalization may not be the best method to use on your problem.

A value is normalized as follows:
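y = (x - min) / (max - min)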

Where the minimum and maximum values pertain to the value x being normalized.

For example, for the temperature data, we could guesstimate the min and max observable values as 30 and -10, which are greatly over and under-estimated. We can then normalize any value like 18.8 as follows:
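y = (x - min) / (max - min)
y = (18.8 - (-10)) / (30 - (-10))
y = 28.8 / 40
y = 0.72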

You can see that if an x value is provided that is outside the bounds of the minimum and maximum values, the resulting value will not be in the range of 0 to 1. You could check for these observations prior to making predictions and either remove them from the dataset or limit them to the pre-defined maximum or minimum values, as in the sketch below.
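As a small illustrative sketch (with hypothetical bounds of -10 and 30, not values from the tutorial), out-of-range observations could be clipped to the assumed limits before scaling, for example with NumPy:

import numpy as np

# hypothetical bounds assumed for the problem
data_min, data_max = -10.0, 30.0
x = np.array([35.2, 18.8, -12.5])
# limit observations to the pre-defined minimum and maximum
x_clipped = np.clip(x, data_min, data_max)
print(x_clipped)  # values outside the bounds become 30.0 or -10.0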

You can normalize your dataset using the scikit-learn object MinMaxScaler.

Good practice usage with the MinMaxScaler and other rescaling techniques is as follows:

  1. Fit the scaler using available training data. For normalization, this means the training data will be used to estimate the minimum and maximum observable values. This is done by calling the fit() function.
  2. Apply the scale to training data. This means you can use the normalized data to train your model. This is done by calling the transform() function.
  3. Apply the scale to data going forward. This means you can prepare new data in the future on which you want to make predictions.

If needed, the transform can be inverted. This is useful for converting predictions back into their original scale for reporting or plotting. This can be done by calling the inverse_transform() function.

Below is an example of normalizing the Minimum Daily Temperatures dataset.

The scaler requires data to be provided as a matrix of rows and columns. The time series data is loaded as a Pandas Series, so it must be reshaped into a matrix of one column with 3,650 rows.

The reshaped dataset is then used to fit the scaler, the dataset is normalized, then the normalization transform is inverted to show the original values again.
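A minimal sketch of these steps is shown below, assuming the cleaned CSV file described above is in the current working directory:

from pandas import read_csv
from sklearn.preprocessing import MinMaxScaler

# load the dataset as a one-column Series
series = read_csv('daily-minimum-temperatures-in-me.csv', header=0, index_col=0, parse_dates=True).squeeze('columns')
print(series.head())
# reshape the 1D Series into a matrix of one column for the scaler
values = series.values.reshape((len(series), 1))
# fit the scaler: this estimates the minimum and maximum observable values
scaler = MinMaxScaler(feature_range=(0, 1))
scaler = scaler.fit(values)
print('Min: %f, Max: %f' % (scaler.data_min_[0], scaler.data_max_[0]))
# normalize the dataset and print the first 5 rows
normalized = scaler.transform(values)
print(normalized[:5])
# invert the transform to show the original values again
inversed = scaler.inverse_transform(normalized)
print(inversed[:5])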

Running the example prints the first 5 rows from the loaded dataset, shows the same 5 values in their normalized form, then the values back in their original scale using the inverse transform.

We can also see that the minimum and maximum values of the dataset are 0 and 26.3 respectively.

There is another type of rescaling that is more robust to new values being outside the range of expected values; this is called Standardization. We will look at that next.

Standardize Time Series Data

Standardizing a dataset involves rescaling the distribution of values so that the mean of observed values is 0 and the standard deviation is 1.

This can be thought of as centering the data by subtracting the mean, then scaling it by the standard deviation.

Like normalization, standardization can be useful, and even required in some machine learning algorithms when your time series data has input values with differing scales.

Standardization assumes that your observations fit a Gaussian distribution (bell curve) with a well behaved mean and standard deviation. You can still standardize your time series data if this expectation is not met, but you may not get reliable results.

This includes algorithms like Support Vector Machines, Linear and Logistic Regression, and other algorithms that assume or have improved performance with Gaussian data.

Standardization requires that you know or are able to accurately estimate the mean and standard deviation of observable values. You may be able to estimate these values from your training data.

A value is standardized as follows:
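y = (x - mean) / standard_deviation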

Where the mean is calculated as:
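mean = sum(x) / count(x)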

And the standard_deviation is calculated as:
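standard_deviation = sqrt( sum( (x - mean)^2 ) / count(x) )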

For example, we can plot a histogram of the Minimum Daily Temperatures dataset as follows:
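A minimal sketch, again assuming the cleaned CSV file is in the current working directory:

from pandas import read_csv
from matplotlib import pyplot

# load the dataset and plot a histogram of the observations
series = read_csv('daily-minimum-temperatures-in-me.csv', header=0, index_col=0, parse_dates=True).squeeze('columns')
series.hist()
pyplot.show()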

Running the code gives the following plot, which shows a roughly Gaussian distribution of the dataset, as assumed by standardization.

Minimum Daily Temperatures Histogram

We can guesstimate a mean temperature of 10 and a standard deviation of about 5. Using these values, we can standardize the first value in the dataset of 20.7 as follows:
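y = (x - mean) / standard_deviation
y = (20.7 - 10) / 5
y = 10.7 / 5
y = 2.14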

The mean and standard deviation estimates of a dataset can be more robust to new data than the minimum and maximum.

You can standardize your dataset using the scikit-learn object StandardScaler.

Below is an example of standardizing the Minimum Daily Temperatures dataset.
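A minimal sketch, mirroring the normalization example above and making the same assumption about the CSV file location:

from math import sqrt
from pandas import read_csv
from sklearn.preprocessing import StandardScaler

# load the dataset as a one-column Series
series = read_csv('daily-minimum-temperatures-in-me.csv', header=0, index_col=0, parse_dates=True).squeeze('columns')
print(series.head())
# reshape the 1D Series into a matrix of one column for the scaler
values = series.values.reshape((len(series), 1))
# fit the scaler: this estimates the mean and standard deviation
scaler = StandardScaler()
scaler = scaler.fit(values)
print('Mean: %f, StandardDeviation: %f' % (scaler.mean_[0], sqrt(scaler.var_[0])))
# standardize the dataset and print the first 5 rows
standardized = scaler.transform(values)
print(standardized[:5])
# invert the transform to show the original values again
inversed = scaler.inverse_transform(standardized)
print(inversed[:5])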

Running the example prints the first 5 rows of the dataset, prints the same values standardized, then prints the values back in their original scale.

We can see that the estimated mean and standard deviation were 11.1 and 4.0 respectively.

Summary

In this tutorial, you discovered how to normalize and standardize time series data in Python.

Specifically, you learned:

  • That some machine learning algorithms perform better or even require rescaled data when modeling.
  • How to manually calculate the parameters required for normalization and standardization.
  • How to normalize and standardize time series data using scikit-learn in Python.

Do you have any questions about rescaling time series data or about this post?
Ask your questions in the comments and I will do my best to answer.


17 Responses to How to Normalize and Standardize Time Series Data in Python

  1. Marek December 13, 2016 at 8:48 am #

    I assume that this works like a treat for data sets that you can fit into memory … But what about very large data sets that simply would never fit into a single machine. Would you recommend other techniques?

    • Jason Brownlee December 14, 2016 at 8:22 am #

      Great question Marek.

      I would suggest estimating the parameters required (min/max for normalization and mean/stdev for standardization) and using those parameters to prepare data just in time prior to use in a model.

      Does that help?

    • Gonzalo December 14, 2016 at 10:06 am #

      IMO

      If you have kind of stream data, you need to define a range of data to evaluate.

      If you just have distributed data, you need to mapreduce.

  2. Fabio December 14, 2016 at 5:53 am #

    Hello Jason,

    thank you for your example. I am learning Python und Pandas. Why do you need to reshape the Series.values?

    # prepare data for standardization
    values = series.values
    values = values.reshape((len(values), 1))

    Bye

    • Jason Brownlee December 14, 2016 at 8:29 am #

      Great question, it’s because the sklearn tools prefer a 2D matrix and the series is 1D.

      We just need to be explicit in the numpy array about the number of rows and cols and sklearn will then not throw out a warning.

      Does that help?

      • Fabio December 15, 2016 at 12:26 am #

        Yep! Thank you Jason 🙂

  3. Barnett December 15, 2016 at 8:40 am #

    In relation to this topic, how do you usually handle variables of mixed types (e.g. a mixture of categorical, continuous, ordinal variables) in a classifier (e.g. logistic regression, SVM, etc.)? I first perform dummy coding on categorical variables, followed by mixing them with the other variables (after normalizing them to [0, 1]); not sure if this is the best practice. On the other hand, the same question for applying clustering algorithms (say, k-means, spectral clusterings). Thank you.

    • Jason Brownlee December 16, 2016 at 5:34 am #

      Hi Barnett, yes exactly as you describe.

      I try integer encodings if there is an ordinal relationship.

      For categorical variables, I use dummy (binary) variables.

      I try to make many different views/perspectives of a prediction problem, including transforms, projections and feature selection filters. I then test them all on a suite of methods and see which representations are generally better at exposing the structure of the problem. While that is running, I do the traditional careful analysis, but this automated method is often faster and results in non-intuitive results.

  4. Magnus January 5, 2017 at 2:02 am #

    I was not able to run this using the data set as is. In the csv file, there is a footer with 3 columns and some data contains question marks. However, after removing the footer and replacing the question marks, it works.

    • Jason Brownlee January 5, 2017 at 9:23 am #

      Thanks for the tip Magnus.

      Yes, the tutorial does assume a well-formed CSV file.

      A raw download from DataMarket does contain footer info that must be deleted.

  5. Kensu January 12, 2017 at 1:30 am #

    What is the mathematical function to denormalize if the function
    y = (x - min) / (max - min) is our normalize function.

  6. sevenless January 31, 2017 at 8:53 pm #

    Thank you for the nice tutorial.
    I wonder how you would normalize the standard deviation for replicate measurements?
    Let’s assume that we have three measurements for each day instead of only one and that you would want to plot the temperature normalized to its mean as a time series for a single month. Would the standard deviation for each day have to be normalized as well?

    • Jason Brownlee February 1, 2017 at 10:49 am #

      Great question,

      Generally, this is a problem specific question and you can choose the period over which to standardize or normalize.

      I would prefer to pick min/max or mean/stdev that are suitable for the entire modeling period per variable.

      Try other approaches, and see how they fare on your dataset.

  7. Magnus February 17, 2017 at 3:24 am #

    Let’s say I have a time series and normalize the data in the range 0,1. I train the model and run my predictions in real time. Later, an “extreme event” occurs with values higher than the max value in my training set. The prediction for that event might then saturate, giving me a lower forecast compared to the observation. How do I deal with this?

    I suppose one possibility is to use e.g. extreme event analysis to estimate a future max value and use this as my max value for normalization. However, then my training data will be in a narrower range, e.g. 0 to 0.9. Of course, I can do this anyway without an analysis. My question is related to e.g. forecasts of extreme weather phenomena or earthquakes etc.

    How is it possible to forecast, accurately, an extreme event, when we don’t have this in the training set? After all, extreme events are often very important to be able to forecast.

    • Jason Brownlee February 17, 2017 at 9:56 am #

      Great question Magnus.

      This is an important consideration when scaling.

      Standardization will be more robust. Normalization will require you to estimate the limits of expected values, to detect when new input data exceeds those limits and handle that accordingly (report an error, clip, issue a warning, re-train the model with new limits, etc.).

      As for the “best” thing to do, that really depends on the domain and your project requirements.

      • Magnus February 20, 2017 at 10:41 pm #

        What about if the data is highly asymmetric with a negative (or positive) skew, and therefore far from being Gaussian?

        If I choose a NN, I assume that my data should be normalised. If I standardise the data it will still be skewed, so when using a NN is it better to transform the data to remove the skew? Or are neural networks a bad choice with skewed data?

        • Jason Brownlee February 21, 2017 at 9:36 am #

          Consider a power transform like a box-cox to make the data more Gaussian, then standardize.
