
How to Normalize, Center, and Standardize Image Pixels in Keras

The pixel values in images must be scaled prior to providing the images as input to a deep learning neural network model during the training or evaluation of the model.

Traditionally, the images would have to be scaled prior to the development of the model and stored in memory or on disk in the scaled format.

An alternative approach is to scale the images using a preferred scaling technique just-in-time during the training or model evaluation process. Keras supports this type of data preparation for image data via the ImageDataGenerator class and API.

In this tutorial, you will discover how to use the ImageDataGenerator class to scale pixel data just-in-time when fitting and evaluating deep learning neural network models.

After completing this tutorial, you will know:

  • How to configure and use the ImageDataGenerator class for train, validation, and test datasets of images.
  • How to use the ImageDataGenerator to normalize pixel values when fitting and evaluating a convolutional neural network model.
  • How to use the ImageDataGenerator to center and standardize pixel values when fitting and evaluating a convolutional neural network model.

Kick-start your project with my new book Deep Learning for Computer Vision, including step-by-step tutorials and the Python source code files for all examples.

Let’s get started.

How to Normalize, Center, and Standardize Images With the ImageDataGenerator in Keras
Photo by Sagar, some rights reserved.

Tutorial Overview

This tutorial is divided into five parts; they are:

  1. MNIST Handwritten Image Classification Dataset
  2. ImageDataGenerator class for Pixel Scaling
  3. How to Normalize Images With ImageDataGenerator
  4. How to Center Images With ImageDataGenerator
  5. How to Standardize Images With ImageDataGenerator

MNIST Handwritten Image Classification Dataset

Before we dive into the usage of the ImageDataGenerator class for preparing image data, we must select an image dataset on which to test the generator.

The MNIST problem is an image classification problem comprising 70,000 images of handwritten digits.

The goal of the problem is to classify a given image of a handwritten digit as an integer from 0 to 9. As such, it is a multiclass image classification problem.

This dataset is provided as part of the Keras library and can be automatically downloaded (if needed) and loaded into memory by a call to the keras.datasets.mnist.load_data() function.

The function returns two tuples: one for the training inputs and outputs and one for the test inputs and outputs. For example:
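A minimal sketch of the call, assuming the standalone Keras API of the time (in tf.keras the same function lives under tensorflow.keras.datasets):

# load the mnist dataset, downloading it first if needed
from keras.datasets import mnist
(trainX, trainY), (testX, testY) = mnist.load_data()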

We can load the MNIST dataset and summarize the dataset. The complete example is listed below.
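A sketch of what such a script might look like (the variable names and print format here are illustrative, not the original listing):

# load and summarize the mnist dataset
from keras.datasets import mnist
# load dataset
(trainX, trainY), (testX, testY) = mnist.load_data()
# summarize dataset shape
print('Train', trainX.shape, trainY.shape)
print('Test', testX.shape, testY.shape)
# summarize pixel values
print('Train', trainX.min(), trainX.max(), trainX.mean(), trainX.std())
print('Test', testX.min(), testX.max(), testX.mean(), testX.std())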

Running the example first loads the dataset into memory. Then the shape of the train and test datasets is reported.

We can see that all images are 28 by 28 pixels with a single channel for black-and-white images. There are 60,000 images for the training dataset and 10,000 for the test dataset.

We can also see that pixel values are integer values between 0 and 255 and that the mean and standard deviation of the pixel values are similar between the two datasets.

We will use this dataset to explore different pixel scaling methods using the ImageDataGenerator class in Keras.

ImageDataGenerator Class for Pixel Scaling

The ImageDataGenerator class in Keras provides a suite of techniques for scaling pixel values in your image dataset prior to modeling.

The class will wrap your image dataset, then when requested, it will return images in batches to the algorithm during training, validation, or evaluation and apply the scaling operations just-in-time. This provides an efficient and convenient approach to scaling image data when modeling with neural networks.

The usage of the ImageDataGenerator class is as follows.

  • 1. Load your dataset.
  • 2. Configure the ImageDataGenerator (e.g. construct an instance).
  • 3. Calculate image statistics (e.g. call the fit() function).
  • 4. Use the generator to fit the model (e.g. pass the instance to the fit_generator() function).
  • 5. Use the generator to evaluate the model (e.g. pass the instance to the evaluate_generator() function).

The ImageDataGenerator class supports a number of pixel scaling methods, as well as a range of data augmentation techniques. We will focus on the pixel scaling techniques and leave the data augmentation methods to a later discussion.

The three main types of pixel scaling techniques supported by the ImageDataGenerator class are as follows:

  • Pixel Normalization: scale pixel values to the range 0-1.
  • Pixel Centering: scale pixel values to have a zero mean.
  • Pixel Standardization: scale pixel values to have a zero mean and unit variance.

The pixel standardization is supported at two levels: either per-image (called sample-wise) or per-dataset (called feature-wise). Specifically, the mean and/or standard deviation statistics required to standardize pixel values can be calculated from the pixel values in each image only (sample-wise) or across the entire training dataset (feature-wise).

Other pixel scaling methods are supported, such as ZCA whitening, brightening, and more, but we will focus on these three most common methods.

The choice of pixel scaling is selected by specifying arguments to the ImageDataGenerator when an instance is constructed; for example:
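For example, one of the following, where each line rebinds datagen and you would pick a single technique in practice:

from keras.preprocessing.image import ImageDataGenerator
# pixel normalization: scale pixel values to the range 0-1
datagen = ImageDataGenerator(rescale=1.0/255.0)
# pixel centering: zero-mean pixel values using training dataset statistics
datagen = ImageDataGenerator(featurewise_center=True)
# pixel standardization: zero mean, unit variance using training dataset statistics
datagen = ImageDataGenerator(featurewise_center=True, featurewise_std_normalization=True)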

Next, if the chosen scaling method requires that statistics be calculated across the training dataset, then these statistics can be calculated and stored by calling the fit() function.
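For example, assuming trainX holds the training images as a rank-4 NumPy array (samples, rows, columns, channels):

# calculate any dataset-wide scaling statistics on the training images only
datagen.fit(trainX)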

When evaluating and selecting a model, it is common to calculate these statistics on the training dataset and then apply them to the validation and test datasets.

Once prepared, the data generator can be used to fit a neural network model by calling the flow() function to retrieve an iterator that returns batches of samples and passing it to the fit_generator() function.
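A sketch of that step, assuming a compiled Keras model; fit_generator() reflects the Keras API of the era, and newer versions accept the iterator directly in fit():

# get a batch iterator over the scaled training images
train_iterator = datagen.flow(trainX, trainY, batch_size=64)
# fit the model; one epoch steps through every batch in the iterator once
model.fit_generator(train_iterator, steps_per_epoch=len(train_iterator), epochs=5)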

If a validation dataset is required, a separate batch iterator can be created from the same data generator that will perform the same pixel scaling operations and use any required statistics calculated on the training dataset.

Once fit, the model can be evaluated by creating a batch iterator for the test dataset and calling the evaluate_generator() function on the model.

Again, the same pixel scaling operations will be performed and any statistics calculated on the training dataset will be used, if needed.

Now that we are familiar with how to use the ImageDataGenerator class for scaling pixel values, let’s look at some specific examples.

How to Normalize Images With ImageDataGenerator

The ImageDataGenerator class can be used to rescale pixel values from the range of 0-255 to the range 0-1 preferred for neural network models.

Scaling data to the range of 0-1 is traditionally referred to as normalization.

This can be achieved by setting the rescale argument to a ratio by which each pixel can be multiplied to achieve the desired range.

In this case, the ratio is 1/255 or about 0.0039. For example:
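# create a generator that scales pixel values to the range [0, 1]
datagen = ImageDataGenerator(rescale=1.0/255.0)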

The ImageDataGenerator does not need to be fit in this case because there are no global statistics that need to be calculated.

Next, iterators can be created using the generator for both the train and test datasets. We will use a batch size of 64. This means that each of the train and test datasets of images are divided into groups of 64 images that will then be scaled when returned from the iterator.

We can see how many batches there will be in one epoch, e.g. one pass through the training dataset, by printing the length of each iterator.

We can then confirm that the pixel normalization has been performed as expected by retrieving the first batch of scaled images and inspecting the min and max pixel values.
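A sketch of those steps, reusing the datagen from above with trainX, trainY, testX, testY loaded and reshaped to rank 4 (variable names are illustrative):

# prepare batch iterators that scale images just-in-time
train_iterator = datagen.flow(trainX, trainY, batch_size=64)
test_iterator = datagen.flow(testX, testY, batch_size=64)
# number of batches in one epoch of each dataset
print('Batches train=%d, test=%d' % (len(train_iterator), len(test_iterator)))
# confirm the scaling works on the first batch of images
batchX, batchy = train_iterator.next()
print('Batch shape=%s, min=%.3f, max=%.3f' % (batchX.shape, batchX.min(), batchX.max()))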

Next, we can use the data generator to fit and evaluate a model. We will define a simple convolutional neural network model and fit it on the train_iterator for five epochs with 60,000 samples divided by 64 samples per batch, or about 938 batches per epoch.

Once fit, we will evaluate the model on the test dataset: 10,000 images divided by 64 samples per batch, or about 157 steps in a single epoch.

We can tie all of this together; the complete example is listed below.
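A hedged reconstruction of the complete example follows; the network layout (a single Conv2D block and a 64-node dense layer) and the fit_generator()/evaluate_generator() calls reflect the Keras API of the time and are assumptions, not the exact original listing:

# example of normalizing pixel values while fitting and evaluating a cnn on mnist
from keras.datasets import mnist
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dense, Flatten
from keras.utils import to_categorical
# load dataset
(trainX, trainY), (testX, testY) = mnist.load_data()
# reshape to rank 4 and one hot encode target values
trainX = trainX.reshape((trainX.shape[0], 28, 28, 1))
testX = testX.reshape((testX.shape[0], 28, 28, 1))
trainY, testY = to_categorical(trainY), to_categorical(testY)
# confirm the scale of the raw pixels
print('Train min=%.3f, max=%.3f' % (trainX.min(), trainX.max()))
print('Test min=%.3f, max=%.3f' % (testX.min(), testX.max()))
# create the generator that scales pixels to [0, 1] (1/255 is about 0.0039)
datagen = ImageDataGenerator(rescale=1.0/255.0)
# prepare iterators to scale images just-in-time
train_iterator = datagen.flow(trainX, trainY, batch_size=64)
test_iterator = datagen.flow(testX, testY, batch_size=64)
print('Batches train=%d, test=%d' % (len(train_iterator), len(test_iterator)))
# confirm the scaling works on the first batch
batchX, batchy = train_iterator.next()
print('Batch shape=%s, min=%.3f, max=%.3f' % (batchX.shape, batchX.min(), batchX.max()))
# define a simple convolutional neural network
model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
model.add(Dense(64, activation='relu'))
model.add(Dense(10, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# fit the model with the normalized images
model.fit_generator(train_iterator, steps_per_epoch=len(train_iterator), epochs=5)
# evaluate the model on the normalized test images
_, acc = model.evaluate_generator(test_iterator, steps=len(test_iterator))
print('Test Accuracy: %.3f' % (acc * 100.0))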

Running the example first reports the min and max pixel values on the train and test sets. This confirms that indeed the raw data has pixel values in the range 0-255.

Next, the data generator is created and the iterators are prepared. We can see that we have 938 batches per epoch with the training dataset and 157 batches per epoch with the test dataset.

We retrieve the first batch from the dataset and confirm that it contains 64 images with the height and width (rows and columns) of 28 pixels and 1 channel, and that the new minimum and maximum pixel values are 0 and 1 respectively. This confirms that the normalization has had the desired effect.

The model is then fit on the normalized image data. Training does not take long on the CPU. Finally, the model is evaluated on the test dataset, applying the same normalization.

Now that we are familiar with how to use the ImageDataGenerator in general and specifically for image normalization, let’s look at examples of pixel centering and standardization.

How to Center Images With ImageDataGenerator

Another popular pixel scaling method is to calculate the mean pixel value across the entire training dataset, then subtract it from each image.

This is called centering and has the effect of centering the distribution of pixel values on zero: that is, the mean pixel value for centered images will be zero.

The ImageDataGenerator class refers to centering that uses the mean calculated on the training dataset as feature-wise centering. It requires that the statistic be calculated on the training dataset prior to scaling.

This differs from calculating the mean pixel value for each image, which Keras refers to as sample-wise centering and which does not require any statistics to be calculated on the training dataset.

We will demonstrate feature-wise centering in this section. Once the statistic is calculated on the training dataset, we can confirm the value by accessing and printing it; for example:
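For example, with trainX as the rank-4 array of training images:

# fit the generator on the training images, then inspect the stored statistic
datagen = ImageDataGenerator(featurewise_center=True)
datagen.fit(trainX)
print('Data Generator Mean:', datagen.mean)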

We can also confirm that the scaling procedure has had the desired effect by calculating the mean of a batch of images returned from the batch iterator. We would expect the mean to be a small value close to zero, but not zero because of the small number of images in the batch.

A better check would be to set the batch size to the size of the training dataset (e.g. 60,000 samples), retrieve one batch, then calculate the mean. It should be a very small value close to zero.

The complete example is listed below.
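A hedged reconstruction of the example (variable names and print formats are illustrative):

# example of feature-wise centering on the mnist dataset
from keras.datasets import mnist
from keras.preprocessing.image import ImageDataGenerator
# load dataset and add a channels dimension
(trainX, trainY), (testX, testY) = mnist.load_data()
trainX = trainX.reshape((trainX.shape[0], 28, 28, 1))
testX = testX.reshape((testX.shape[0], 28, 28, 1))
# report the per-dataset mean pixel values
print('Means train=%.3f, test=%.3f' % (trainX.mean(), testX.mean()))
# create the generator and calculate the mean on the training dataset
datagen = ImageDataGenerator(featurewise_center=True)
datagen.fit(trainX)
print('Data Generator Mean:', datagen.mean)
# demonstrate the effect on a single batch of samples
iterator = datagen.flow(trainX, trainY, batch_size=64)
batchX, batchy = iterator.next()
print(batchX.shape, batchX.mean())
# demonstrate the effect on the entire training dataset
iterator = datagen.flow(trainX, trainY, batch_size=len(trainX))
batchX, batchy = iterator.next()
print(batchX.shape, batchX.mean())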

Running the example first reports the mean pixel value for the train and test datasets.

The MNIST dataset only has a single channel because the images are black and white (grayscale). If the images were color, the mean pixel values would be calculated across all images in the training dataset for each channel, i.e. the fit() function stores a separate mean per channel.

The ImageDataGenerator is fit on the training dataset and we can confirm that the mean pixel value matches our own manual calculation.

A single batch of centered images is retrieved and we can confirm that the mean pixel value is a small-ish value close to zero. The test is repeated using the entire training dataset as the batch size, and in this case, the mean pixel value for the scaled dataset is a number very close to zero, confirming that centering is having the desired effect.

We can demonstrate centering with our convolutional neural network developed in the previous section.

The complete example with feature-wise centering is listed below.
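A hedged reconstruction follows; the CNN is the same simple network sketched in the normalization example, and the details are assumptions rather than the exact original listing:

# example of feature-wise centering while fitting and evaluating a cnn on mnist
from keras.datasets import mnist
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dense, Flatten
from keras.utils import to_categorical
# load dataset, add a channels dimension, one hot encode targets
(trainX, trainY), (testX, testY) = mnist.load_data()
trainX = trainX.reshape((trainX.shape[0], 28, 28, 1))
testX = testX.reshape((testX.shape[0], 28, 28, 1))
trainY, testY = to_categorical(trainY), to_categorical(testY)
# create the generator and calculate the mean on the training dataset only
datagen = ImageDataGenerator(featurewise_center=True)
datagen.fit(trainX)
# prepare iterators; both apply the statistics from the training dataset
train_iterator = datagen.flow(trainX, trainY, batch_size=64)
test_iterator = datagen.flow(testX, testY, batch_size=64)
# define a simple convolutional neural network
model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
model.add(Dense(64, activation='relu'))
model.add(Dense(10, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# fit the model with the centered images
model.fit_generator(train_iterator, steps_per_epoch=len(train_iterator), epochs=5)
# evaluate with centered test images; training statistics are reused, avoiding leakage
_, acc = model.evaluate_generator(test_iterator, steps=len(test_iterator))
print('Test Accuracy: %.3f' % (acc * 100.0))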

Running the example prepares the ImageDataGenerator, centering images using statistics calculated on the training dataset.

We can see that performance starts off poor but does improve. The centered pixel values will have a range of roughly -33 to 222 (the raw 0-255 values minus the training mean of about 33.3), and neural networks often train more efficiently with small inputs. Normalizing followed by centering would be a better approach in practice.

Importantly, the model is evaluated on the test dataset, where the images in the test dataset were centered using the mean value calculated on the training dataset. This is to avoid any data leakage.

How to Standardize Images With ImageDataGenerator

Standardization is a data scaling technique that assumes that the distribution of the data is Gaussian and shifts the distribution of the data to have a mean of zero and a standard deviation of one.

Data with this distribution is referred to as a standard Gaussian. Standardization can be beneficial when training neural networks, as the inputs are centered on zero and take small values in the rough range of -3.0 to 3.0 (e.g. 99.7% of the values will fall within three standard deviations of the mean).

Standardization of images is achieved by subtracting the mean pixel value and dividing the result by the standard deviation of the pixel values.

The mean and standard deviation statistics can be calculated on the training dataset, and as discussed in the previous section, Keras refers to this as feature-wise.

Alternatively, the statistics can be calculated for each image individually and used to standardize that image alone; Keras refers to this as sample-wise standardization.

We will demonstrate the former or feature-wise approach to image standardization in this section. The effect will be batches of images with an approximate mean of zero and a standard deviation of one.

As with the previous section, we can confirm this with some simple experiments. The complete example is listed below.
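A hedged reconstruction of the confirmation experiment (variable names and print formats are illustrative):

# example of feature-wise standardization on the mnist dataset
from keras.datasets import mnist
from keras.preprocessing.image import ImageDataGenerator
# load dataset and add a channels dimension
(trainX, trainY), (testX, testY) = mnist.load_data()
trainX = trainX.reshape((trainX.shape[0], 28, 28, 1))
testX = testX.reshape((testX.shape[0], 28, 28, 1))
# report pixel statistics for each dataset
print('Train mean=%.3f, std=%.3f' % (trainX.mean(), trainX.std()))
print('Test mean=%.3f, std=%.3f' % (testX.mean(), testX.std()))
# create the generator and calculate statistics on the training dataset
datagen = ImageDataGenerator(featurewise_center=True, featurewise_std_normalization=True)
datagen.fit(trainX)
print('Data Generator mean:', datagen.mean, 'std:', datagen.std)
# demonstrate the effect on a single batch of samples
iterator = datagen.flow(trainX, trainY, batch_size=64)
batchX, batchy = iterator.next()
print(batchX.shape, batchX.mean(), batchX.std())
# demonstrate the effect on the entire training dataset
iterator = datagen.flow(trainX, trainY, batch_size=len(trainX))
batchX, batchy = iterator.next()
print(batchX.shape, batchX.mean(), batchX.std())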

Running the example first reports the mean and standard deviation of pixel values in the train and test datasets.

The data generator is then configured for feature-wise standardization and the statistics are calculated on the training dataset, matching what we would expect when the statistics are calculated manually.

A single batch of 64 standardized images is then retrieved and we can confirm that the mean and standard deviation of this small sample is close to the expected standard Gaussian.

The test is then repeated on the entire training dataset and we can confirm that the mean is indeed a very small value close to 0.0 and the standard deviation is a value very close to 1.0.

Now that we have confirmed that the standardization of pixel values is being performed as we expect, we can apply the pixel scaling while fitting and evaluating a convolutional neural network model.

The complete example is listed below.
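A hedged reconstruction, identical to the centering example except that the generator also enables featurewise_std_normalization:

# example of feature-wise standardization while fitting and evaluating a cnn on mnist
from keras.datasets import mnist
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dense, Flatten
from keras.utils import to_categorical
# load dataset, add a channels dimension, one hot encode targets
(trainX, trainY), (testX, testY) = mnist.load_data()
trainX = trainX.reshape((trainX.shape[0], 28, 28, 1))
testX = testX.reshape((testX.shape[0], 28, 28, 1))
trainY, testY = to_categorical(trainY), to_categorical(testY)
# create the generator and calculate statistics on the training dataset only
datagen = ImageDataGenerator(featurewise_center=True, featurewise_std_normalization=True)
datagen.fit(trainX)
# prepare iterators; both apply the statistics from the training dataset
train_iterator = datagen.flow(trainX, trainY, batch_size=64)
test_iterator = datagen.flow(testX, testY, batch_size=64)
# define a simple convolutional neural network
model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
model.add(Dense(64, activation='relu'))
model.add(Dense(10, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# fit the model with the standardized images
model.fit_generator(train_iterator, steps_per_epoch=len(train_iterator), epochs=5)
# evaluate with standardized test images using the training statistics
_, acc = model.evaluate_generator(test_iterator, steps=len(test_iterator))
print('Test Accuracy: %.3f' % (acc * 100.0))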

Running the example configures the ImageDataGenerator class to standardize images, calculates the required statistics on the training set only, then prepares the train and test iterators for fitting and evaluating the model respectively.

Extensions

This section lists some ideas for extending the tutorial that you may wish to explore.

  • Color. Update an example to use an image dataset with color images and confirm whether scaling is performed per-channel or across the entire image.
  • Sample-Wise. Demonstrate an example of sample-wise centering or standardization of image pixel values.
  • ZCA Whitening. Demonstrate an example of using the ZCA approach to image data preparation.

If you explore any of these extensions, I’d love to know.
Post your findings in the comments below.

Summary

In this tutorial, you discovered how to use the ImageDataGenerator class to scale pixel data just-in-time when fitting and evaluating deep learning neural network models.

Specifically, you learned:

  • How to configure and use the ImageDataGenerator class for train, validation, and test datasets of images.
  • How to use the ImageDataGenerator to normalize pixel values when fitting and evaluating a convolutional neural network model.
  • How to use the ImageDataGenerator to center and standardize pixel values when fitting and evaluating a convolutional neural network model.

Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.


42 Responses to How to Normalize, Center, and Standardize Image Pixels in Keras

  1. Julian Loaiza April 3, 2019 at 2:17 pm #

    Thanks for the article. First question: what is the impact on the model of using sample-wise versus feature-wise? Second question: if I’m going to use my model in production, do I need to save the mean and standard deviation from when I trained my model with the feature-wise method?

    • Jason Brownlee April 3, 2019 at 4:14 pm #

      From the post:

      “per-image (called sample-wise) or per-dataset (called feature-wise)”

  2. JG April 3, 2019 at 8:59 pm #

    Magisterial post, Jason! Thanks.

    Anyway, reading your post you say “…The pixel standardization is supported at two levels: either per-image (called sample-wise) or per-dataset (called feature-wise)…”, the opposite of your previous answer!

    My question is: I thought ImageDataGenerator was for data augmentation, that is, to simulate having more images to train on, and that normalization, centering, and standardization (rescaling) were part of these more general “augmentation” methods. Anyway, I am confused about whether validation or test data have to be preprocessed in the same way as the training data. It is clearly necessary to fit the training data (with data augmentation), but I thought it was not necessary for validation and test data… am I right? Or in what way can we keep validation and test data apart? Thanks

    • Jason Brownlee April 4, 2019 at 7:54 am #

      Fixed, thanks.

      It can be used for data prep and for data aug, and for both at the same time. I am trying to show the former case in this post.

      If a statistic is calculated across samples (images), then the statistic must be calculated on the training set and used on the val/test sets.

      This can be done a few ways, but perhaps the easiest, is to have a separate instance for each dataset but fit the instance (calculate stats) using the same training dataset before getting the iterator to pass to the relevant flow() function.

  3. Jose April 24, 2019 at 11:02 pm #

    Awesome post!
    It is clear that flow_from_directory handles the data from the ‘directory’, so in this case the RAM is not overwhelmed. This is good when utilizing huge datasets.
    Now, standardizing with the image generator requires fitting on the data first, which implies that the data must be loaded into memory, so:
    How about handling huge image datasets that cannot be contained in RAM?
    How can one calculate the mean and standard deviation on the huge training dataset in this case?

    • Jason Brownlee April 25, 2019 at 8:18 am #

      There are many solutions:

      – estimate stats from a smaller sample
      – estimate stats using progressive loading
      – use scaling that does not require global stats

      • Jose April 30, 2019 at 7:34 pm #

        Thanks for the answer! 🙂 Would you mind pointing out where to find some “how to … in keras” tutorials regarding the mentioned solutions?

        • Jason Brownlee May 1, 2019 at 7:01 am #

          You can use the tutorial as a starting point and add in the additional config and test.

  4. Lau June 12, 2019 at 1:18 pm #

    Hi, thanks for sharing the valuable insights. I have some questions regarding data augmentation with the ImageDataGenerator. May I know how to apply augmentations like rotation, skew, etc. together with the normalize, center, and standardize operations mentioned above?

    Once .fit() has been applied on the training set, may I know how to apply normalization, centering, and standardization on the validation set?

    Thank you

  5. najeh February 18, 2020 at 10:26 pm #

    To create a grid of 3×3 images, we use “subplot(rows, columns, index)”, but in your example you have used “subplot(330 + 1 + i)”. What do you mean by this code?

    • Jason Brownlee February 19, 2020 at 8:04 am #

      That is the older API. Both do the same thing.

  6. Anthony February 24, 2020 at 6:34 am #

    Nice article, thank you!

    One question – if I center the data whilst training, I assume that I have to do the same to an image at prediction time. If so, what is the best way to do that?

    I have seen in other posts that a mean value is subtracted from the predicted image, and that mean value is the one calculated over the training data set. If that is the case, how can I obtain the mean (and possibly standard deviation) values for the training dataset?

  7. Ravi Theja February 25, 2020 at 7:10 am #

    Hey Jason,
    Thanks for the great article. I’m trying to understand if there is any reason to use one technique over the other. Could you give me some pointers? I’m working on gray-scale data. Thank you!

  8. mohmaya March 4, 2020 at 11:27 am #

    Can we do both rescale and then (samplewise_center=True, samplewise_std_normalization=True) for an image? Which techniques are mutually exclusive? What does it mean if I normalize an image and then also apply these two sample-wise techniques?

    • Jason Brownlee March 4, 2020 at 1:34 pm #

      Yes, but it might be odd.

      It’s a good question. It might be easier to start with a thesis/idea and test whether it improves modeling, rather than enumerating all scaling methods.

  9. Saar April 27, 2020 at 11:05 pm #

    Thank you for writing, it has been very educational.

    One question though – after the normalization, your convolution layers still used the relu activation function. As you said in the article, most values now lie in [-3, 3], which means the relu function loses about half of its input. Isn’t tanh a better-suited function after the normalization?

    Kind Regards,
    Saar.

    • Jason Brownlee April 28, 2020 at 6:46 am #

      You’re welcome.

      Typically it isn’t better in practice. Try and see for yourself. Remember, relu operates on the weighted sum, not raw inputs.

  10. Lars May 6, 2020 at 5:58 pm #

    Thanks for that nice tutorial!

    I was wondering how I know whether I should use standardization or not. I am working on a dataset for emotion recognition with a small number of faces but many samples, using VGG-Face.

    • Jason Brownlee May 7, 2020 at 6:42 am #

      My best advice is to evaluate the model with and without the scaling operation and compare the results.

      Use it if it results in a model with better skill.

  11. Nina May 12, 2020 at 1:09 pm #

    How do I set up a flow for validation data, assuming that we can fit all images into CPU memory?

  12. Nina May 12, 2020 at 5:46 pm #

    How can I determine the number of nodes for an intermediate dense layer, and for the final one? I am trying to understand the relation between the number of input neurons and outputs.

  13. nkm May 25, 2020 at 4:01 am #

    Hi Mr Jason,

    I get this error while using flow_from_directory:

    “ImageDataGenerator specifies featurewise_std_normalization, but it hasn’t been fit on any training data.”

    There is a step, train_datagen.fit(x_train), which requires the data as an array, but my images are in a directory. How can I implement the featurewise_std_normalization feature of ImageDataGenerator?

  14. Patrick June 28, 2020 at 8:19 pm #

    What happens with the image data generator when we have multiple features such as age, sex, etc.? In other words, when we have each image associated with an age and a sex.

    • Jason Brownlee June 29, 2020 at 6:34 am #

      ImageDataGenerator only operates on images.

      If you have other data you may need to devise a custom data generator that uses augmented images and holds the other static data unchanged.

  15. Abhilash M June 29, 2020 at 11:57 am #

    Great insight into the data augmentation and data preparation, sir. It cleared up some doubts I had about ImageDataGenerator, thank you. I have a question regarding featurewise_std_normalization and samplewise_std_normalization. Please correct me if I am wrong: for featurewise_std_normalization, the entire dataset of images is assumed to have a normal distribution, hence the mean and std are computed over the entire dataset and applied to every pixel value, whereas in samplewise_std_normalization it is done for each image. My question is: what is the use of having different normalizations, one with respect to the entire dataset of images and the other with respect to a single image? Is there any specific impact that each of them has on training the model?

  16. Nitin July 5, 2020 at 9:58 pm #

    /opt/conda/lib/python3.7/site-packages/keras_preprocessing/image/image_data_generator.py:720: UserWarning: This ImageDataGenerator specifies featurewise_center, but it hasn’t been fit on any training data. Fit it first by calling .fit(numpy_data).
    warnings.warn(‘This ImageDataGenerator specifies ‘

    It throws me this warning after I specify featurewise_center and standardization. Any idea why?

    • Jason Brownlee July 6, 2020 at 6:33 am #

      You need to fit the data generator prior to using it when using that method.

  17. simin November 25, 2020 at 6:03 am #

    Thank you for your post. I was wondering if there are any preprocessing methods for X-ray images?

    • Jason Brownlee November 25, 2020 at 6:49 am #

      I’m sure there is, it’s not my area of expertise, sorry. I recommend checking the literature.

  18. Douglas Bahiense October 29, 2022 at 10:58 pm #

    Hi Jason,

    unfortunately tf.keras.preprocessing is now deprecated and not recommended for new code. Are you thinking of updating the post or creating a new one using the tf.keras.utils module and tf.keras.utils.text_dataset_from_directory?

    Cheers!
