How to Visualize Filters and Feature Maps in Convolutional Neural Networks

Deep learning neural networks are generally opaque, meaning that although they can make useful and skillful predictions, it is not clear how or why a given prediction was made.

Convolutional neural networks have internal structures that are designed to operate upon two-dimensional image data, and as such they preserve the spatial relationships in what was learned by the model. Specifically, the two-dimensional filters learned by the model can be inspected and visualized to discover the types of features that the model will detect, and the activation maps output by convolutional layers can be inspected to understand exactly what features were detected for a given input image.

In this tutorial, you will discover how to develop simple visualizations for filters and feature maps in a convolutional neural network.

After completing this tutorial, you will know:

  • How to develop a visualization for specific filters in a convolutional neural network.
  • How to develop a visualization for specific feature maps in a convolutional neural network.
  • How to systematically visualize feature maps for each block in a deep convolutional neural network.

Kick-start your project with my new book Deep Learning for Computer Vision, including step-by-step tutorials and the Python source code files for all examples.

Let’s get started.

How to Visualize Filters and Feature Maps in Convolutional Neural Networks
Photo by Mark Kent, some rights reserved.

Tutorial Overview

This tutorial is divided into four parts; they are:

  1. Visualizing Convolutional Layers
  2. Pre-fit VGG Model
  3. How to Visualize Filters
  4. How to Visualize Feature Maps

Visualizing Convolutional Layers

Neural network models are generally referred to as being opaque. This means that they are poor at explaining the reason why a specific decision or prediction was made.

Convolutional neural networks are designed to work with image data, and their structure and function suggest that they should be less inscrutable than other types of neural networks.

Specifically, the models are composed of small linear filters and the results of applying those filters, called activation maps, or more generally, feature maps.

Both filters and feature maps can be visualized.

For example, we can design and understand small filters, such as line detectors. Perhaps visualizing the filters within a learned convolutional neural network can provide insight into how the model works.

The feature maps that result from applying filters to input images and to feature maps output by prior layers could provide insight into the internal representation that the model has of a specific input at a given point in the model.

We will explore both of these approaches to visualizing a convolutional neural network in this tutorial.

Want Results with Deep Learning for Computer Vision?

Take my free 7-day email crash course now (with sample code).

Click to sign-up and also get a free PDF Ebook version of the course.

Pre-fit VGG Model

We need a model to visualize.

Instead of fitting a model from scratch, we can use a pre-fit, state-of-the-art image classification model.

Keras provides many examples of well-performing image classification models developed by different research groups for the ImageNet Large Scale Visual Recognition Challenge, or ILSVRC. One example is the VGG-16 model that achieved top results in the 2014 competition.

This is a good model to use for visualization because it has a simple uniform structure of serially ordered convolutional and pooling layers, it is deep with 16 learned layers, and it performed very well, meaning that the filters and resulting feature maps will capture useful features. For more information on this model, see the 2015 paper “Very Deep Convolutional Networks for Large-Scale Image Recognition.”

We can load and summarize the VGG16 model with just a few lines of code; for example:
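
A minimal sketch is given below. It assumes the standalone keras package is installed; with recent versions of TensorFlow, the same classes are available under tensorflow.keras instead.

# load the VGG16 model pre-fit on ImageNet and summarize its structure
from keras.applications.vgg16 import VGG16
# load the model (the weights are downloaded on first use)
model = VGG16()
# print a summary of the layers and their output shapes
model.summary()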

Running the example will load the model weights into memory and print a summary of the loaded model.

If this is the first time that you have loaded the model, the weights will be downloaded from the internet and stored in your home directory. These weights are approximately 500 megabytes and may take a moment to download depending on the speed of your internet connection.

We can see that the layers are well named, organized into blocks, and named with integer indexes within each block.

Now that we have a pre-fit model, we can use it as the basis for visualizations.

How to Visualize Filters

Perhaps the simplest visualization to perform is to plot the learned filters directly.

In neural network terminology, the learned filters are simply weights, yet because of the specialized two-dimensional structure of the filters, the weight values have a spatial relationship to each other and plotting each filter as a two-dimensional image is meaningful (or could be).

The first step is to review the filters in the model, to see what we have to work with.

The model summary printed in the previous section summarizes the output shape of each layer, e.g. the shape of the resulting feature maps. It does not give any idea of the shape of the filters (weights) in the network, only the total number of weights per layer.

We can access all of the layers of the model via the model.layers property.

Each layer has a layer.name property, and the convolutional layers follow a naming convention of the form block#_conv#, where '#' is an integer. Therefore, we can check the name of each layer and skip any that don't contain the string 'conv'.

Each convolutional layer has two sets of weights.

One is the block of filters and the other is the block of bias values. These are accessible via the layer.get_weights() function. We can retrieve these weights and then summarize their shape.

Tying this together, the complete example of summarizing the model filters is listed below.
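
One way to write this example, under the same import assumptions as above, is sketched here:

# summarize the shapes of the filters in each convolutional layer of VGG16
from keras.applications.vgg16 import VGG16
# load the model
model = VGG16()
for layer in model.layers:
    # skip any layer that is not convolutional
    if 'conv' not in layer.name:
        continue
    # each convolutional layer holds a block of filters and a block of biases
    filters, biases = layer.get_weights()
    print(layer.name, filters.shape)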

Running the example prints a list of layer details including the layer name and the shape of the filters in the layer.

We can see that all convolutional layers use 3×3 filters, which are small and perhaps easy to interpret.

An architectural concern with a convolutional neural network is that the depth of a filter must match the depth of the input for the filter (e.g. the number of channels).

We can see that for the input image, with three channels for red, green, and blue, each filter has a depth of three (here we are working with a channels-last format). We could visualize one filter as a plot with three images, one for each channel, or compress all three down to a single color image, or even just look at the first channel and assume the other channels will look the same. The problem is, we then have 63 other filters that we might like to visualize.

We can retrieve the filters from the first layer as follows:
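
For example, continuing with the loaded VGG16 model, a minimal sketch might be:

# retrieve the weights from the first hidden (convolutional) layer
filters, biases = model.layers[1].get_weights()
# report the shapes of the filters and biases
print(filters.shape, biases.shape)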

The weight values will likely be small positive and negative values centered around 0.0.

We can normalize their values to the range 0-1 to make them easy to visualize.
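
A simple min-max normalization, continuing from the filters retrieved above, could look like this:

# normalize the filter values to the range 0-1 so they can be plotted as images
f_min, f_max = filters.min(), filters.max()
filters = (filters - f_min) / (f_max - f_min)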

Now we can enumerate the first six filters out of the 64 in the block and plot each of the three channels of each filter.

We use the matplotlib library and plot each filter as a new row of subplots, and each filter channel or depth as a new column.

Tying this together, the complete example of plotting the first six filters from the first hidden convolutional layer in the VGG16 model is listed below.
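
One possible listing, again assuming a standalone Keras installation and matplotlib for plotting, is sketched below.

# plot the first six filters from the first convolutional layer of VGG16
from keras.applications.vgg16 import VGG16
from matplotlib import pyplot
# load the model
model = VGG16()
# retrieve the filters from the first convolutional layer
filters, biases = model.layers[1].get_weights()
# normalize the filter values to 0-1 so we can visualize them
f_min, f_max = filters.min(), filters.max()
filters = (filters - f_min) / (f_max - f_min)
# plot the first few filters: one row per filter, one column per channel
n_filters, ix = 6, 1
for i in range(n_filters):
    # get the i-th filter
    f = filters[:, :, :, i]
    # plot each of the three input channels separately
    for j in range(3):
        # specify the subplot and turn off the axis ticks
        ax = pyplot.subplot(n_filters, 3, ix)
        ax.set_xticks([])
        ax.set_yticks([])
        # plot the filter channel in grayscale
        pyplot.imshow(f[:, :, j], cmap='gray')
        ix += 1
# show the figure
pyplot.show()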

Running the example creates a figure with six rows of three images, or 18 images in total, one row for each filter and one column for each channel.

We can see that in some cases, the filter is the same across the channels (the first row), and in others, the filters differ (the last row).

The dark squares indicate small or inhibitory weights and the light squares represent large or excitatory weights. Using this intuition, we can see that the filters on the first row detect a gradient from light in the top left to dark in the bottom right.

Plot of the First 6 Filters From VGG16 With One Subplot per Channel

Although we have a visualization, we only see the first six of the 64 filters in the first convolutional layer. Visualizing all 64 filters in one image is feasible.

Sadly, this does not scale; if we wish to start looking at filters in the second convolutional layer, we can see that again we have 64 filters, but each has 64 channels to match the input feature maps. To see all 64 channels in a row for all 64 filters would require (64×64) 4,096 subplots in which it may be challenging to see any detail.

How to Visualize Feature Maps

The activation maps, called feature maps, capture the result of applying the filters to input, such as the input image or another feature map.

The idea of visualizing a feature map for a specific input image would be to understand what features of the input are detected or preserved in the feature maps. The expectation would be that the feature maps close to the input detect small or fine-grained detail, whereas feature maps close to the output of the model capture more general features.

In order to explore the visualization of feature maps, we need input for the VGG16 model that can be used to create activations. We will use a simple photograph of a bird: specifically, a robin, taken by Chris Heald and released under a permissive license.

Download the photograph and place it in your current working directory with the filename ‘bird.jpg‘.

Robin, by Chris Heald

Next, we need a clearer idea of the shape of the feature maps output by each of the convolutional layers and the layer index number so that we can retrieve the appropriate layer output.

The example below will enumerate all layers in the model and print the output size or feature map size for each convolutional layer as well as the layer index in the model.
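
A sketch of such an enumeration might look like this:

# summarize the feature map shape for each convolutional layer in VGG16
from keras.applications.vgg16 import VGG16
# load the model
model = VGG16()
# report the index, name, and output shape of every convolutional layer
for i in range(len(model.layers)):
    layer = model.layers[i]
    # skip layers that are not convolutional
    if 'conv' not in layer.name:
        continue
    print(i, layer.name, layer.output.shape)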

Running the example, we see the same output shapes as we saw in the model summary, but in this case only for the convolutional layers.

We can use this information and design a new model that is a subset of the layers in the full VGG16 model. The model would have the same input layer as the original model, but the output would be the output of a given convolutional layer, which we know would be the activation of the layer or the feature map.

For example, after loading the VGG model, we can define a new model that outputs a feature map from the first convolutional layer (index 1) as follows.
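
A minimal sketch, using the Model class from keras.models, might be:

# redefine the model to output right after the first convolutional layer
from keras.applications.vgg16 import VGG16
from keras.models import Model
# load the full model, then take a view of it up to layer index 1
model = VGG16()
model = Model(inputs=model.inputs, outputs=model.layers[1].output)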

Making a prediction with this model will give the feature map for the first convolutional layer for a given provided input image. Let’s implement this.

After defining the model, we need to load the bird image with the size expected by the model, in this case, 224×224.

Next, the PIL image object needs to be converted to a NumPy array of pixel data and expanded from a 3D array to a 4D array with the dimensions [samples, rows, cols, channels], where we only have one sample.

The pixel values then need to be scaled appropriately for the VGG model.
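
Taken together, the three preparation steps might look like the sketch below. The load_img and img_to_array helpers are the standard Keras image utilities; depending on your Keras version they may live under keras.preprocessing.image or keras.utils.

# load and prepare the photograph in the way the VGG16 model expects
from keras.preprocessing.image import load_img, img_to_array
from keras.applications.vgg16 import preprocess_input
from numpy import expand_dims
# load the image with the required size
img = load_img('bird.jpg', target_size=(224, 224))
# convert the PIL image to a NumPy array of pixel values
img = img_to_array(img)
# expand dimensions so that we have one sample: [samples, rows, cols, channels]
img = expand_dims(img, axis=0)
# scale the pixel values appropriately for the VGG model
img = preprocess_input(img)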

We are now ready to get the feature map. We can do this easily by calling the model.predict() function and passing in the prepared single image.
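
For example, continuing from the redefined model and the prepared image above:

# get the feature map of the first convolutional layer for the prepared image
feature_maps = model.predict(img)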

We know the result will be a feature map with the shape 224×224×64. We can plot all 64 two-dimensional images as an 8×8 square of images.

Tying all of this together, the complete code example of visualizing the feature map for the first convolutional layer in the VGG16 model for a bird input image is listed below.
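
A complete sketch, assuming the photograph has been saved as 'bird.jpg' in the current working directory and the standalone keras imports used above, might look like this:

# visualize the feature maps output from the first convolutional layer in VGG16
from keras.applications.vgg16 import VGG16
from keras.applications.vgg16 import preprocess_input
from keras.preprocessing.image import load_img, img_to_array
from keras.models import Model
from numpy import expand_dims
from matplotlib import pyplot
# load the model and redefine it to output from the first convolutional layer
model = VGG16()
model = Model(inputs=model.inputs, outputs=model.layers[1].output)
model.summary()
# load the image with the required size
img = load_img('bird.jpg', target_size=(224, 224))
# convert the image to an array and add a batch dimension
img = img_to_array(img)
img = expand_dims(img, axis=0)
# prepare the pixel values for the VGG model
img = preprocess_input(img)
# get the feature map for the first convolutional layer
feature_maps = model.predict(img)
# plot all 64 maps in an 8x8 grid
square = 8
ix = 1
for _ in range(square):
    for _ in range(square):
        # specify the subplot and turn off the axis ticks
        ax = pyplot.subplot(square, square, ix)
        ax.set_xticks([])
        ax.set_yticks([])
        # plot the feature map channel in grayscale
        pyplot.imshow(feature_maps[0, :, :, ix-1], cmap='gray')
        ix += 1
# show the figure
pyplot.show()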

Running the example first summarizes the new, smaller model that takes an image and outputs a feature map.

Remember: this model is much smaller than the VGG16 model, but still uses the same weights (filters) in the first convolutional layer as the VGG16 model.

Next, a figure is created that shows all 64 feature maps as subplots.

We can see that the result of applying the filters in the first convolutional layer is a lot of versions of the bird image with different features highlighted.

For example, some highlight lines, while others focus on the background or the foreground.

Visualization of the Feature Maps Extracted From the First Convolutional Layer in the VGG16 Model

This is an interesting result and generally matches our expectation. We could update the example to plot the feature maps from the output of other specific convolutional layers.

Another approach would be to collect feature maps output from each block of the model in a single pass, then create an image of each.

There are five main blocks in the model (e.g. block1, block2, etc.), each of which ends in a pooling layer. The layer indexes of the last convolutional layer in each block are [2, 5, 9, 13, 17].

We can define a new model that has multiple outputs, one feature map output for the last convolutional layer in each block; for example:
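
One possible sketch, using the layer indexes noted above:

# define a model that outputs the feature maps from the end of each block
from keras.applications.vgg16 import VGG16
from keras.models import Model
# load the model
model = VGG16()
# indexes of the last convolutional layer in each block
ixs = [2, 5, 9, 13, 17]
outputs = [model.layers[i].output for i in ixs]
# redefine the model with one output per block
model = Model(inputs=model.inputs, outputs=outputs)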

Making a prediction with this new model will result in a list of feature maps.

We know that the number of feature maps (e.g. depth or number of channels) in deeper layers is much more than 64, such as 256 or 512. Nevertheless, we can cap the number of feature maps visualized at 64 for consistency.

Tying these changes together, we can now create five separate plots for each of the five blocks in the VGG16 model for our bird photograph. The complete listing is provided below.
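
One way to write this listing, under the same assumptions as the previous example ('bird.jpg' in the current working directory, standalone keras imports), is sketched below.

# visualize feature maps output from each block of the VGG16 model
from keras.applications.vgg16 import VGG16
from keras.applications.vgg16 import preprocess_input
from keras.preprocessing.image import load_img, img_to_array
from keras.models import Model
from numpy import expand_dims
from matplotlib import pyplot
# load the model and redefine it to output from the end of each block
model = VGG16()
ixs = [2, 5, 9, 13, 17]
outputs = [model.layers[i].output for i in ixs]
model = Model(inputs=model.inputs, outputs=outputs)
# load and prepare the image
img = load_img('bird.jpg', target_size=(224, 224))
img = img_to_array(img)
img = expand_dims(img, axis=0)
img = preprocess_input(img)
# get the feature maps for all five blocks with a single forward pass
feature_maps = model.predict(img)
# plot the first 64 maps from each block as a separate 8x8 figure
square = 8
for fmap in feature_maps:
    ix = 1
    for _ in range(square):
        for _ in range(square):
            # specify the subplot and turn off the axis ticks
            ax = pyplot.subplot(square, square, ix)
            ax.set_xticks([])
            ax.set_yticks([])
            # plot the feature map channel in grayscale
            pyplot.imshow(fmap[0, :, :, ix-1], cmap='gray')
            ix += 1
    # show one figure per block
    pyplot.show()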

Running the example results in five plots showing the feature maps from the five main blocks of the VGG16 model.

We can see that the feature maps closer to the input of the model capture a lot of fine detail in the image and that as we progress deeper into the model, the feature maps show less and less detail.

This pattern was to be expected, as the model abstracts the features from the image into more general concepts that can be used to make a classification. By the final block it is no longer clear that the model is looking at a bird; in general, we lose the ability to interpret these deeper feature maps.

Visualization of the Feature Maps Extracted From Block 1 in the VGG16 Model

Visualization of the Feature Maps Extracted From Block 2 in the VGG16 Model

Visualization of the Feature Maps Extracted From Block 3 in the VGG16 Model

Visualization of the Feature Maps Extracted From Block 4 in the VGG16 Model

Visualization of the Feature Maps Extracted From Block 5 in the VGG16 Model

Further Reading

This section provides more resources on the topic if you are looking to go deeper.


Summary

In this tutorial, you discovered how to develop simple visualizations for filters and feature maps in a convolutional neural network.

Specifically, you learned:

  • How to develop a visualization for specific filters in a convolutional neural network.
  • How to develop a visualization for specific feature maps in a convolutional neural network.
  • How to systematically visualize feature maps for each block in a deep convolutional neural network.

Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.

Develop Deep Learning Models for Vision Today!

Deep Learning for Computer Vision

Develop Your Own Vision Models in Minutes

...with just a few lines of python code

Discover how in my new Ebook:
Deep Learning for Computer Vision

It provides self-study tutorials on topics like:
classification, object detection (yolo and rcnn), face recognition (vggface and facenet), data preparation and much more...

Finally Bring Deep Learning to your Vision Projects

Skip the Academics. Just Results.

See What's Inside

204 Responses to How to Visualize Filters and Feature Maps in Convolutional Neural Networks

  1. Avatar
    Pepe May 6, 2019 at 7:26 pm #

    Thanks a lot, sir!! It helps my thesis manuscript in finding feature maps in each layer in my model. By the way, I have a question sir about how to display in values (not an image) in each filters and biases in every convolutional layer in my CNN model?

    • Avatar
      Jason Brownlee May 7, 2019 at 6:15 am #

      I’m happy to hear that.

      You could print the arrays and inspect the values.

  2. Avatar
    Pepe May 7, 2019 at 12:07 pm #

    Okay sir thank you 🙂

  3. Avatar
    Pepe May 7, 2019 at 12:16 pm #

    Sir can I make a request, I would like to display every feature maps in every convolutional layer in my model and I have a problem of my code in accessing all feature maps in every layer. Any suggestion sir? I hope your response sir and thank you.

    • Avatar
      Sreeni Jilla August 1, 2019 at 12:17 pm #

      Excellent explanation sir

  4. Avatar
    Pepe May 7, 2019 at 12:43 pm #

    index = [0, 1, 6, 9, 14, 17]
    outputs = [self.model.layers[i].output for i in index]
    self.model = Model(inputs=self.model.inputs, outputs=outputs)
    featureMaps = self.model.predict(self.testImage)

    print((np.shape(featureMaps[0][0])))
    print("FeatureMapsLen: "+str(np.shape(featureMaps[0]))[2])
    numOfFeaturemaps = (np.shape(featureMaps[0][0]))[2]

    print("numOfFeatureMaps: "+str(numOfFeaturemaps)

    fig=plt.figure(figsize=(16,16))
    subplotNum=int(np.ceil(np.sqrt(numOfFeaturemaps)))
    for i in range(int(numOfFeaturemaps)):
    idx = fig.add_subplot(subplotNum, subplotNum, i+1)
    idx.imshow(featureMaps[0, :, :, i], cmap='viridis') #I'm have been stack here for long sir jason!!
    plt.xticks(np.array([]))
    plt.yticks(np.array([]))
    plt.tight_layout()

    plt.savefig("featureMaps/featuremaps@Layer{}".format(self.layerNum) + '.png')
    outputImg = QtGui.QPixmap("featureMaps/featuremaps@Layer{}".format(self.layerNum) + '.png')
    self.userInterface.labelImageContainer.setScaledContents(False)#Fixed display
    self.userInterface.labelImageContainer.setPixmap(outputImg)

  5. Avatar
    christian May 7, 2019 at 4:05 pm #

    LiME is known to “explain” the results of a classification problem. Can we use the filters that you have explained in order to explain the results of a segmentation problem?

    • Avatar
      Jason Brownlee May 8, 2019 at 6:41 am #

      Perhaps. I have not seen any work on the topic.

      Perhaps try searching on scholar.google.com.

  6. Avatar
    Shah May 7, 2019 at 4:29 pm #

    great explanation. The best part is step by step explanation with code. Really helpful 🙂

  7. Avatar
    Hamed May 10, 2019 at 8:58 am #

    For no apparent reason, I have my Keras via TensorFlow so I have to modify, for instance, this line of code:
    "from keras.applications.vgg16 import VGG16"
    to
    "from tensorflow.keras.applications.vgg16 import VGG16"
    and when I loaded it for the first time it showed it was downloading from Github but then it is now training! Is this normal?

    Thanks!

    • Avatar
      Jason Brownlee May 10, 2019 at 1:41 pm #

      It is downloaded the first time. That is normal.

      • Avatar
        Hamed May 11, 2019 at 6:42 am #

        Right! But I had a difficulty to download it and after 33Mb I got disconnection from remote server error so I opened vgg16.py located in my tensorflow/python/keras/applications, got the link, downloaded manually and change the default from ‘imagenet’ to the path that the manually downloaded file was located (I transferred it to ‘applications’ folder but still asked me the whole path and not only the filename: ‘vgg16_weights_tf_dim_ordering_tf_kernels.h5’) so it worked out for me. Would you mind if you kindly tell me why it asked the whole path? It should recognized its current path as it’s running in it.

        • Avatar
          Jason Brownlee May 12, 2019 at 6:36 am #

          I don’t know why it asked for the full path, sorry.

        • Avatar
          TFuser January 28, 2020 at 6:02 am #

          Hi Hamed, I’ll just note that TensorFlow gets updated so rapidly that,what was true yesterday about a specific niche detail about the TensorFlow implementation, might not be True the next day. Sometimes the only available documentation is the source code for your specific version. You can read the source code if it’s worth it, or ask for help on stackoverflow.com or github.com/tensorflow but it is often not worth it as updating the TF version will fix the problem at hand.

  8. Avatar
    Mike May 10, 2019 at 12:24 pm #

    Thanks for the lesson!!

    I would love to see a detailed description of how to create class activation maps.

    I’ve been using some of the code from your books to train a CNN to recognize tears of the anterior crucial efforts ligament. I’d love to see just what parts of the image my model is using to make its decisions.

    • Avatar
      Mike May 10, 2019 at 12:25 pm #

      Anterior cruciate ligament, that is.

    • Avatar
      Jason Brownlee May 10, 2019 at 1:42 pm #

      Great suggestion, thanks.

      Well done on your application Mike!

      • Avatar
        M.LABENI May 11, 2019 at 7:46 am #

        Thanks, that’s very kind of you .

  9. Avatar
    ganga May 17, 2019 at 3:29 am #

    Hello Jason,

    Thanks for your code..It really helpful.
    I have a query here.
    1. We are using single image in this model instead is it possible to use batch of images to visualise them in the model ?
    2. I am a newbie. Don’t mind me for a silly question here please. Is it possible to view outputs at fully connected layers like Conv layers we did here?
    3. Is there any limit on number of neurons used in dense layers?

    • Avatar
      Jason Brownlee May 17, 2019 at 5:59 am #

      You would process one image at a time, if you are only looking at activation maps.

      You can review activations of any layer, but the activations of Dense layers will not form images; they will look like noise.

      The only limit is the amount of RAM you have.

  10. Avatar
    SHAHEEN ALHIRMIZY June 1, 2019 at 8:38 pm #

    How to save feature maps as png or pdf

  11. Avatar
    Bram June 17, 2019 at 3:21 am #

    Hey love the detailed description of what you have done!

    I am trying to replicate the same but then for a pytorch model.

    So the models look different and I cannot use the same functions to create the feature map.

    Any change you did the same for a pytorch model or maybe give me some advise on how to?

    Currently I am complete stuck on how to do this.

    • Avatar
      Jason Brownlee June 17, 2019 at 8:26 am #

      Sorry, I don’t have any pytorch examples, I cannot give you good off the cuff advice.

  12. Avatar
    Yu June 22, 2019 at 10:59 am #

    Hi Jason,

    Hwo to fix the following issue ?
    NameError: name ‘Model’ is not defined

    after execute
    model = Model(inputs=model.inputs, outputs=model.layers[1].output)

    Thanks.

  13. Avatar
    ertiga June 26, 2019 at 6:12 pm #

    Hi Jason, another nice posting from you.

    I have a stupid question: I understand that layer closer to the input will learn local feature while layer close to the output will learn global feature. For example: I have an image of face and I feed this image to my VGG16 network. When I visualize the filter, I expect that the earlier will draw “eyebrow”, “nose” and last layer will describe “face”, but I am totally wrong.

    So, I thought local feature = “eyebrow”, “nose”, but activation maps of first filter describe “face” (global feature).

    Can you explain to me about this matter? Thanks.

  14. Avatar
    ertiga June 26, 2019 at 7:39 pm #

    Another question, how can a model infer the object to be, for example, cat or dog if the last convolution layer is unclear/less detail even for human? Thanks.

    • Avatar
      Jason Brownlee June 27, 2019 at 7:48 am #

      The model has a classifier layer at the output to interpret the high order features.

  15. Avatar
    Vinamra Rai July 9, 2019 at 3:36 pm #

    Hi Jason
    I am using ResNet50 instead of VGG16 but while executing the following code I get an error:
    ValueError: not enough values to unpack (expected 2, got 0)

    from keras.applications.resnet50 import ResNet50
    from matplotlib import pyplot
    model = ResNet50()

    for layer in model.layers:
    if 'conv' not in layer.name:
    continue
    # get filter weights
    filters, biases = layer.get_weights()
    print(layer.name, filters.shape)

    The error is related to the line: filters, biases = layer.get_weights()

    • Avatar
      Jason Brownlee July 10, 2019 at 7:59 am #

      The resnet architecture is more complex, you will have to debug this change.

      • Avatar
        Vinamra Rai July 12, 2019 at 7:52 pm #

        Thanks, Jason. Figured it out eventually.

        • Avatar
          Jason Brownlee July 13, 2019 at 6:54 am #

          I’m happy to hear that.

        • Avatar
          sahar December 6, 2019 at 8:05 pm #

          would you share your code please?

        • Avatar
          Shira January 13, 2023 at 6:28 pm #

          Please, can you share how you solve it. Thank you.

  16. Avatar
    SHAHNA August 7, 2019 at 4:19 pm #

    You are awesome.You made the concepts clear.THANK YOU SO MUCH

  17. Avatar
    Simone August 8, 2019 at 5:21 pm #

    Thank you for your great articles. I often read your blog from ‘random’ google searches.
    You are awesome!

  18. Avatar
    Ali R. Memon August 9, 2019 at 9:29 pm #

    One of the most difficult and hot topics explained in a very simple and informative way. I really appreciate the way you explained. One of my favorite blogs ever. Thanks, Jason!

  19. Avatar
    guido August 22, 2019 at 2:02 am #

    Is it possible to do that when working on time series? I was looking at these examples (https://machinelearningmastery.com/how-to-develop-rnn-models-for-human-activity-recognition-time-series-classification/) and I would like to try to explain something about what the model is understanding. I extrapolated the output of the Conv layers and I have visualized it for each different class but I would like to know what are your thoughts about that.

  20. Avatar
    Hemanth Kumar September 6, 2019 at 2:22 pm #

    It is one of the best tutorial for beginner ,,thank you sir
    sir I have one doubt in cov2d function “filter” one of the parameter ,what value it will take as the matrix value which is matrix multiplication with the convolution image (I seen this in vgg16() function) is this take labelled matrix value (if it is in supervised)

    • Avatar
      Jason Brownlee September 7, 2019 at 5:13 am #

      Thanks, I’m glad it helped.

      The weights or filters will take on weights learned during the training process.

      • Avatar
        Hemanth Kumar September 7, 2019 at 3:39 pm #

        Thank you sir , and i also wants to know how the weights are updated for each layer, is there any method to update it

        note: i am trying to understand the architecture model https://arxiv.org/pdf/1511.00561 segnet and also each and every functions , thank you sir

        • Avatar
          Jason Brownlee September 8, 2019 at 5:14 am #

          Weights are updated via backpropagation.

          Sorry I am not familiar with that paper, perhaps contact the author.

  21. Avatar
    Sonika September 9, 2019 at 6:05 pm #

    Why is there a need to make a new model so as to see the output from each layer? Instead can we not directly use

    model = VGG16()
    model.predict(img)
    for i in enumerate(model.layers):
    model.layers[i].output

    so as to visualize the output per layer of the VGG model directly, without having to create a new model?

    • Avatar
      Jason Brownlee September 10, 2019 at 5:38 am #

      It is not really a new model, just a view on the existing model.

  22. Avatar
    Rango September 27, 2019 at 7:50 pm #

    Thank you for your great effort sir!!
    CNN is also used for text classification. So can we use same technique to visualize feature map and filter for text classification(not for plot image).For example which words or features that the model used to discriminate class.Thank in advance

    • Avatar
      Jason Brownlee September 28, 2019 at 6:16 am #

      I don’t see why not, it’s a great idea!

      Let me know how you go.

  23. Avatar
    Bruno Barre October 2, 2019 at 6:09 pm #

    Hello Jason Brownlee!
    As usual a great thanks for your work, a golden mine of clear information!

    I am currently using GradCam, to see the most activating pixels convolutions after convolutions. But for example, if we take just a simple ConvNet with one conv layer, for a task such (Fashion) MNIST dataset, which would already give decent results.

    With GradCamm, Saliency Maps and other conv visualisation techniques, we could only “see” one image at a time. But how could we “do stastistics” on a whole dataset. For example, to see how a “general digit 1” is seen by the network.

    I have searched for articles/papers, but couldn’t find anaything. I know visualisation techniques are quite recents, but they still are only able to process one image at a time.

    Would you have some interesting papers about it?

    Thanks again for your kindness and precious work!

    • Avatar
      Jason Brownlee October 3, 2019 at 6:41 am #

      What types of statistics do you want to do on the whole dataset exactly?
      What question would you be answering?

      There are many good papers on the topic, perhaps search on scholar.google.com

      • Avatar
        Bruno Barre October 15, 2019 at 2:25 am #

        Yes I searched on Google/Google Scholar to see what had already been done, but perharps I did not use the good keywords…

        For example, I searched for (Grad)CAM/Saliency Maps statistics. I don’t have a clear goal for my searhc, but as I said I am interested in any method that could transform the “image-per-image” explanation to a technique that allows to perform statistics.

        For example I tried to just compute the mean of all GradCam for all images of a given predicted class with MNIST (for a fixed conv layer), and of course I could see a “global” form of the digit. But that is just a really simple thing to do..

        It would be really kind of you if you could give a paper or better keywords to searched for on Google Scholar.

        Thanks again for your time and patience!

        • Avatar
          Jason Brownlee October 15, 2019 at 6:18 am #

          Perhaps skim through the top CNN viz papers, if nothing pops up, perhaps you have to develop something from scratch.

          Perhaps sketch what you want with some numpy examples to confirm the question you’re asking about the data/model is tractable.

  24. Avatar
    Song Han October 9, 2019 at 11:43 pm #

    No breakpoint-continue? If the downloading of the model is interrupted, then it will do downloading from the very beginning. Is it possible to download only the remaining? will save a lot of time. Thanks.

  25. Avatar
    Arpit Dhuriya November 4, 2019 at 2:42 pm #

    Sir can you guide me how to lexicographically sort the feature maps. I need it copy move forgery detection.

    • Avatar
      Jason Brownlee November 5, 2019 at 6:47 am #

      There is no lexicon (words) to sort. What do you mean exactly?

  26. Avatar
    Satyaki Mukherjee November 7, 2019 at 7:58 pm #

    Sir,
    This is a wonderful explanation.
    I am doing this with a custom dataset with my own model, which is a slight variation of resnet.
    What do I do with the part where pixel values are scaled with img = preprocess_input(img)
    I cannot import keras.application.Resnet50 as dataset is custom made.

    • Avatar
      Jason Brownlee November 8, 2019 at 6:39 am #

      Thanks.

      You must prepare pixels in the way that the resnet expects. You can do this with a helper function in Keras or manually.

  27. Avatar
    reza Darooei November 12, 2019 at 8:42 pm #

    Hi Jason Thanks for your awesome Tutorial
    I have a question :
    I want to save my extracted features after fully concocted layer before softamx how can I do this?
    let’s explain more my problem in easy way I want to get an image then instead of using pixels as features I want to extract them from CNN ,after convolution layer before softmax,on the other hand I want to change classification method instead of using softamax for example using KNN or SVM,do you have any idea how can I do this?

  28. Avatar
    Ahseb November 28, 2019 at 9:57 am #

    Dear Brownlee,

    You clarify hard problems step by step and turn them into an easy to understand issue. Thank you very much for your effort. I also appreciate that you share your knowledge and save a lot of time of us. I am sure that, anyone who interested in ml, visit your website at least once in a life. So, please keep on helping us.

    My question is that I want to apply this visualization method to resnet model trained with timeseries instead of images.
    Do you think, is it reasonable to apply for time series?

    • Avatar
      Jason Brownlee November 28, 2019 at 1:33 pm #

      Thanks!

      No, I don’t think it would be appropriate for time series. The visualization is suited for images.

      • Avatar
        Ahseb November 28, 2019 at 10:10 pm #

        I am a bit confused about whether we can use this method for timeseries or not
        You replied Rango’s question as

        “I don’t see why not, it’s a great idea!

        Let me know how you go.”

        Rango’s question:

        Thank you for your great effort sir!!
        CNN is also used for text classification. So can we use same technique to visualize feature map and filter for text classification(not for plot image).For example which words or features that the model used to discriminate class.Thank in advance

        • Avatar
          Jason Brownlee November 29, 2019 at 6:49 am #

          I don’t think so, but I don’t want to rule anything out.

          • Avatar
            Ahseb November 29, 2019 at 8:19 am #

            Oh , I got it. If it won’t be so many questions, what are your concerns, why do you think it is not that proper?

          • Avatar
            Jason Brownlee November 29, 2019 at 1:41 pm #

            Images are a visual medium, it makes sense to visualize how the models “sees” the inputs.

  29. Avatar
    shiv December 5, 2019 at 9:49 pm #

    Thank you

  30. Avatar
    sucanthudu December 29, 2019 at 4:01 pm #

    Dear Sir,

    Thanks for the very clear and good explanation about visualising cnn filters and feature maps.

    I am very confused on one part. My question is that each block output of CNN layers are of different down sampled output sizes.

    For Example:
    block1_conv2 (?, 224, 224, 64) input image shape
    block2_conv1 (?, 112, 112, 128) down sampled output size.
    and soon…..

    But when we are visualising the intermediate layers output we are getting the output image with the shape of (224,224,3).

    1.Why we are not getting the downsampled output image with shape of (112, 112, 3)?
    2. Is it possible to visualize the actual downsampled output images from all intermediate layers?

    please kindly guide me and also help me with some code snippets.

    • Avatar
      Jason Brownlee December 30, 2019 at 5:59 am #

      You’re welcome.

      We do have a visualization at each block with different sizes.

      This can be seen in the output images and when we print the shape of the output of each block.

  31. Avatar
    James January 2, 2020 at 11:48 am #

    the example model is in format, is it possible for the code read .npy format?

    • Avatar
      James January 2, 2020 at 11:49 am #

      example is .h5

      • Avatar
        Jason Brownlee January 3, 2020 at 7:13 am #

        You can save the model anyway you wish.

        The built-in library uses h5 format.

        I do not have an example of using custom code to save the model weights.

    • Avatar
      Jason Brownlee January 3, 2020 at 7:12 am #

      Sorry, I don’t understand. Perhaps you can elaborate?

  32. Avatar
    sucanthudu January 2, 2020 at 10:20 pm #

    Dear Sir

    The shape of the output at each block was as expected. I have few doubts

    1. In a sequential cnn operation the input size for example (224,224) will be preserved in each blocks?
    2. In this article why we are not following the sequential cnn operation flow of visualization in each blocks?
    3. Every time for visualizing the intermediate block layers why we are making the input images to the size of (224,224) ? For example vggnet expects the input shape to be 224,224 at the first conv block layer and after that in the next successive blocks what will be input image and its size, Whether we have to give the downsampled image(for example: (112,112) or (56,56) or (28,28) and soon) as the input to the successive conv blocks or how?
    getting confused here.

    please kindly guide me.

  33. Avatar
    geetha January 3, 2020 at 7:10 pm #

    Sir,
    is the first convolutional layer output feature map is the input to next convolutional layer and what is the input size for second and other convolutional layers.

    • Avatar
      Jason Brownlee January 4, 2020 at 8:28 am #

      Yes, the model is a sequence of layers connected linearly.

  34. Avatar
    sucanthudu January 3, 2020 at 7:41 pm #

    Dear sir

    For visualizing purpose from any of the intermediate block layers why we are making the input images to the size of (224,224)? Why we are not giving the previous layer output as the input to the next layer?

    please kindly guide me.

    thanking you

    • Avatar
      Jason Brownlee January 4, 2020 at 8:29 am #

      Perhaps review the output of the model summary to see the order of layers and their output shapes.

  35. Avatar
    Noushin January 15, 2020 at 5:53 am #

    Within the text it is said: “For example, we can design and understand small filters, such as line detectors.” Do you have any tutorials on your website about designing special-purpose filters and their applications? I appreciate if you share the link.

  36. Avatar
    Arjun Haridas February 29, 2020 at 3:58 am #

    Hi Jason,

    Great article,

    I am using a 3d kernel of size 3x3x3 in my conv layer and would like to get similar weight visualization plots.

    Since plotting in 3d is not possible i tried to split the kernels into 3 3×3 for plotting.
    Is this approach correct?

    The conv layer consists of 5 layers #model.add(layers.Conv3D(5, (3, 3, 3), padding=’same’))

    Please find below the code I used to plot the weights adapted from your code and do let me know if this approach is correct or is there any better method..

    from keras.models import load_model
    mymodel = load_model(‘model.hdf5′)

    from matplotlib import pyplot as plt
    # load the model

    # retrieve weights from the 1st conv layer layer
    filters, biases = mymodel.layers[0].get_weights()
    # normalize filter values to 0-1 so we can visualize them
    f_min, f_max = filters.min(), filters.max()
    filters = (filters – f_min) / (f_max – f_min)
    #shape of filters (3, 3, 3, 1, 5)
    n_filters, ix = 5, 1
    for i in range(n_filters):
    # get the filter
    f = filters[:,:, :, :, i]
    f = f[:,:,:,0]
    # kernel shape 3x3x3 but to plot it converting into 3 3×3 filters
    for j in range(3):
    # specify subplot and turn of axis
    ax = plt.subplot(n_filters, 3, ix)
    ax.set_xticks([])
    ax.set_yticks([])
    # plot filter channel in grayscale
    plt.imshow(f[:, :, j], cmap=’gray’)
    ix += 1
    # show the figure
    plt.show()

    Looking forward to your reply

    • Avatar
      Jason Brownlee February 29, 2020 at 7:20 am #

      This is a common question that I answer here:
      https://machinelearningmastery.com/faq/single-faq/can-you-read-review-or-debug-my-code

      • Avatar
        Arjun February 29, 2020 at 9:59 am #

        I am only asking for your suggestion and not to review my code, just want to know whether the method that I used is correct or not or any further suggestions..

        • Avatar
          Jason Brownlee March 1, 2020 at 5:19 am #

          Sorry, I don’t have examples of working with 3d conv nets. In order to figure out if what you have done is reasonable I need to read/review your new code – which I don’t have the capacity to do.

          Perhaps try posting on stackoverflow?

  37. Avatar
    Diana Kim May 7, 2020 at 1:49 am #

    Thank you! This is really helpful.

  38. Avatar
    Raja May 12, 2020 at 1:51 am #

    Dear Sir,
    Thanks for the great article.
    I do not understand why the features of the initial CNN layer are more prominent than the higher layer? For object classification, features of the final layer should be more clear for identification.
    Please clarify.

    • Avatar
      Jason Brownlee May 12, 2020 at 6:48 am #

      The data flowing through the model is most like the original data towards the input end of the model before any pooling or processing has been performed.

  39. Avatar
    fatemeh May 23, 2020 at 2:13 am #

    Hello
    How can I fix this error in this line of code?

    inter_output_model = tf.keras.Model (model.input, model.get_layer (index = 1) .output)

    AttributeError: ‘tuple’ object has no attribute ‘layer’

    And in this error in this line

    from matplotlib import pyplot as plt
    import numpy as np
    # plot all 64 maps in an 8×8 squares
    square = 8
    ix = 1
    for _ in range (square):
    for _ in range (square):
    # specify subplot and turn of axis
    ax = pyplot.subplot (square, square, ix)
    ax.set_xticks ([])
    ax.set_yticks ([])
    # full filter channel in grayscale
    pyplot.imshow (feature_maps [0,:,,:, ix-1], cmap = 'gray')
    ix += 1
    # show the figure
    pyplot.show ()

    IndexError: too many indices for array

    Thanks for the tutorial

  40. Avatar
    Pravin May 24, 2020 at 1:36 am #

    Holy moly! I’ve got everything I was looking for in a single article! Great work! Thanks for drafting such a piece.

  41. Avatar
    Mousheng Xu May 30, 2020 at 1:32 pm #

    Great article, Jason!

    One quick question: if I want to further cluster the birds into subtypes (of birds) with unsupervised learning, what would you suggest? I am thinking of using the feature maps as input for unsupervised clustering, then which layers would make sense to you?

    Thanks so much!

  42. Avatar
    Ayeshmanthi June 4, 2020 at 12:52 pm #

    Thank you for the great article Jason!

    I was wondering whether you have any plans on writing an article on “Visualizing and Understanding Convolutional Networks” by Zeiler et al. or a summary of work following that?

    This would be really helpful.

  43. Avatar
    Raz June 17, 2020 at 9:22 am #

    How identify no. Of nodes for dense layer in cnn image classification.
    Realy wana formula to specify it .

  44. Avatar
    Ahmad June 22, 2020 at 5:47 am #

    Great and simple Jason!

    I would like to know if it is possible to visualize the feature from fc1 and f2 despite the dimensionality reduction? If yes, can you please guide in the right direction?

    • Avatar
      Jason Brownlee June 22, 2020 at 6:18 am #

      Not really as there are no feature maps in dense layers.

      • Avatar
        Ahmad June 22, 2020 at 6:58 am #

        That strange! According to limited understanding of fully connected layers is that they consist of patches of features of the object of a particular class which then pass to a prediction layer to make predictions. Is it the wrong interpretation?
        Furthermore, I just went through the link (https://de.mathworks.com/help/deeplearning/ug/visualize-features-of-a-convolutional-neural-network.html), which shows how to Visualize the features of Fully Connected Layer (FC-layers) using deepDreamImage in Matlab. I just wanted to do the same in Keras.

        • Avatar
          Jason Brownlee June 22, 2020 at 1:25 pm #

          Perhaps some of the references in the “further reading” section of the tutorial will help as a starting point.

  45. Avatar
    Aeri June 23, 2020 at 2:25 am #

    Thanks for really helpful article. I read so many your articles. And I have one question for here.

    ‘How can I transfer the functions of a deep neural network?’ thesis or transfer learning
    The first layer comes with general features like lines, edges, and stains, and the last layer has certain features. However, you need to pass the image to this article or VGG16. The first layer is the input image area.
    Am I misunderstanding. Can I ask for an explanation?

    According to your article, the first layer shows a detailed image similar to the input image, and the last layer shows a less detailed blob-like image. While I did the test with VGG16, it was similar to your result.

    But, the thesis “How transferable are features in deep neural networks?” and articles explaining the reason for Transfer Learning said that less detailed images such as lines, edges, and blobs appears in the first layer, and specific features are showed in the last layer.

    So, can you explain why the reason of differences between yours and others?

    • Avatar
      Jason Brownlee June 23, 2020 at 6:29 am #

      Perhaps ask the author of the document you are referencing?

      You can see my code directly and the results.

  46. Avatar
    mukula August 26, 2020 at 1:15 am #

    i am trying to visualize layers using mobilenet(keras). The shape of feature map after model.predict is (1,225,225,3). while plotiing i am getting the following error. can anyone help me with this.

    IndexError Traceback (most recent call last)

    in ()
    11 ax.set_yticks([])
    12 # plot filter channel in grayscale
    —> 13 pyplot.imshow(feature_maps[0, :, :, ix-1], cmap=’gray’)
    14 ix += 1
    15 # show the figure

    IndexError: index 3 is out of bounds for axis 3 with size 3

    • Avatar
      Jason Brownlee August 26, 2020 at 6:51 am #

      Sorry, I’m not sure of the cause of the fault.

      Perhaps try posting your code to stackoverflow?

  47. Avatar
    Alakananda Mitra September 2, 2020 at 8:34 am #

    Hi,

    I was trying to follow your code and apply it to Xception network. But when I tried to retrieve the filters and biases, I got –
    —————————————————————————
    ValueError Traceback (most recent call last)

    1 # retrieve weights from the second hidden layer
    —-> 2 filters, biases = model.layers[1].get_weights()

    ValueError: not enough values to unpack (expected 2, got 1)

    Please help me.
    Thanks
    AM

    • Avatar
      Jason Brownlee September 2, 2020 at 1:29 pm #

      Sorry to hear that, the cause of the fault is not obvious to me, you may need to debug your changes.

      • Avatar
        Alakananda September 3, 2020 at 12:55 am #

        Hi Jason,

        Thank you for your answer. I tried to debug it. I can access each layer filter but for the problem is happening when I try to print all filters shape. I believe, if condition statement will be different for Xcepion. But I am not sure what it will be.

        model = Xception()
        for layer in model.layers:
        # check for convolutional layer
        if 'conv' not in layer.name:
        continue
        # get filter weights
        filters, biases = layer.get_weights()
        print(layer.name, filters.shape)

        Error message : ValueError: not enough values to unpack (expected 2, got 1)

        Could you please have a look?
        Thank you.
        Kind regards,
        Alakananda

  48. Avatar
    hadeer helaly September 18, 2020 at 6:12 am #

    please ,how to save the ouput feature maps images in folder

  49. Avatar
    Mohamed Ezz October 10, 2020 at 11:34 pm #

    Thanks a lot for this great article, I am teaching image recognition and data science, and really I learned a lot from thanks again.

    I would like to ask you two questions regarding visualizing feature maps from CNN:
    1- how we can benefits from a visualization for improving model accuracy by either change architecture of CNN or even update filers ( more training)
    2- before ready this article, I expected the output from the first layer recognizes low-level edges, then the next layer to recognizes higher-level edges, till the end recognizes the whole object but surprised by the reversed order. Can you confirm my understanding

    • Avatar
      Jason Brownlee October 11, 2020 at 6:51 am #

      You’re welcome.

      Your understanding is correct; it is just operating at the scale of the whole image.

  50. Avatar
    Ashish October 28, 2020 at 5:54 pm #

    Hi Jason,
    Wonderful article.
    Can you please let me know how I can get the list of fired neurons in a CNN model when an image is fed to it for a prediction.

    For example a CNN model predicts fruit name if an image of a fruit is fed to it. If I feed an image of an apple, can I get the data of hidden layers like:
    { {Layer-1}, {N1, N20, N24, N55, N100..N150} },
    { {Layer-2}, {N21, N50, N75..N90} }

    here the N represents the neuron number of a given layer that fired to make the prediction. The gap between the neuron numbers indicates the neurons that did not fire.

    Your help is much appreciated.

    • Avatar
      Jason Brownlee October 29, 2020 at 7:57 am #

      The above tutorials is exactly what you describe.

      • Avatar
        Ashish January 4, 2021 at 7:14 pm #

        Hi Jason,
        I guess I phrased my question incorrectly. What I am trying is to figure out the activated neurons of Dense layers and not conv2d layers.
        Meanwhile, I tried to figure it out myself but I am not sure if my understanding is correct or not. Please help me with that.

        I have a CNN model, after all the conv2d and flattening layers, I have 2 hidden dense layers followed by one output dense layer with softmax function.
        I am trying to get info on the activated neurons of those 2 hidden dense layers.
        I have used ReLU as the activation function so I assume that the output of neurons with any positive value indicates an activated state of the neuron whereas zero indicates a deactivated state because of (0, max) formula of ReLU.

        1. Is my understanding correct? does a value of zero mean the deactivated state of that neuron?
        2. I assume that these dense layers represent the learning/intelligence of a model and only a fixed set of neurons will fire for a given image because that’s how the model has learned about that image. Even in my experiment that I did, only 20-30% neurons out of the total neurons in a given layer fired for all the input samples of a given class. These were the same neurons that fired from the given range every time for a given class. Is my understanding correct?

        Thanks in advance.

        • Avatar
          Jason Brownlee January 5, 2021 at 6:21 am #

          The activation of the Dense layers would be a vector output. It would not be visualized as an image directly, perhaps pair-wise scatter plots of a PCA transform, although it would need additional context to be interpreted.

          Zero does not mean deactivated, it means a zero output for a specific input.

          Yes, generally we can think of the Dense layers before the output layer as interpreting the features extracted from the image by the CNN model.

          • Avatar
            Ashish January 5, 2021 at 7:20 pm #

            Thanks for your reply Jason.
            What I am trying to do is to numerically understand the intelligence acquired by the CNN model. I have developed a cnn model that classifies images. The test metrics are all fine which shows that the model has good accuracy and less loss.

            However metrics to me are like a black box, I want to take a deeper look into the model to understand its capabilities.

            I assume that the conv2d and flatten layers do not acquire any sort of intelligence during the model training and they are meant only to dismantle an image into small pieces that can be numerically interpreted by the Dense layers that follow the conv2d layers.

            It is the dense layers that acquire the required intelligence through which the model classifies the images.

            I am fine with not being able to visually test the model, anyway visual testing does not make any sense as the images are sliced to such micro-level by the conv2d layers that there isn’t any point to trying to visualize those images for testing purpose.

            It is here that I thought that knowing the list of fired neurons can be one way of understanding what the dense layers have learned. For example, if I have trained my model to classify between two fruits (sweet lime and orange (orange when it is in green color raw fruit) given this scenario, most of the features will be the same including the color, texture, size, etc. The only difference left out is the minute difference in the shape of both fruits. Looking at it from the dense layer’s point of view, the hidden dense layer will fire almost 70-80% same neurons for both images as most of the features are the same for both and only a small percentage of neurons will be different (firing for one class and not firing for another class based on which the last output layer will calculate the probability).

            But as you said this cannot be done as zero does not mean deactivated neurons. Can you please let me know how else can we test this part or to put it in a better way, how can we better utilize the information available in the dense layers that can give us insight into the capabilities of the model?

            Thanks in advance.

          • Avatar
            Jason Brownlee January 6, 2021 at 6:25 am #

            Not sure I follow. Typically neural networks are not interpretable, e.g. are opaque. This is a general limitation of the method.

  51. Avatar
    Saeed November 18, 2020 at 3:44 am #

    Hi Jason, please I need your help. If I have the following CNN :

    model.add(Conv2D(8, (5, 5), input_shape=(256, 256, 1), padding='same', use_bias=False))
    model.add(BatchNormalization())
    model.add(Activation(activation='tanh'))
    model.add (AveragePooling2D (pool_size= (5,5), strides=2))
    model.summary()

    How can add the absolute vale layer after applying the convolution layer(after the first step) and continue with the rest of code. Please this necessary for me.

    • Avatar
      Jason Brownlee November 18, 2020 at 6:46 am #

      Sorry, I don’t understand your question, perhaps you can rephrase or elaborate?

  52. Avatar
    Rico Aditya November 29, 2020 at 1:02 pm #

    Dear sir,

    Very clear explaination and help me for my tesis as reference. Thanks. but i still have any question.

    1. is from Feature Maps Extracted at the Block 5 will be input for VGG-16 classification layer?
    2. can we show output classification layer? if yes, what needs to be added to code?

    Thanks

    • Avatar
      Jason Brownlee November 30, 2020 at 6:34 am #

      Thanks!

      Feature maps output from one block are fed as input to the next block. This is how CNNs work.

      The output layer does not have feature maps, you cannot visualize it in the same manner.

      • Avatar
        rico aditya December 1, 2020 at 2:21 pm #

        So, what a method in VGG-16 arch used to classification?

        • Avatar
          Jason Brownlee December 1, 2020 at 2:44 pm #

          Sorry, I don’t understand, can you please elaborate or rephrase your question?

          • Avatar
            rico aditya December 1, 2020 at 7:48 pm #

            The method used for classification (after feature extraction in CNN layer) in VGG-16 is Neural Network (Fully Connected Layer). Can you confirm my understanding.

          • Avatar
            Jason Brownlee December 2, 2020 at 7:41 am #

            Correct.

          • Avatar
            rico aditya December 10, 2020 at 3:47 pm #

            any reference to explain how to classification with FC from output CNN? thanks.

          • Avatar
            Jason Brownlee December 11, 2020 at 6:30 am #

            Not really, the dense model interprets the features and maps them to a target class.

            What kind of explanation do you require?

          • Avatar
            rico aditya December 11, 2020 at 1:48 pm #

            I need to know,

            1. What features are generated by the deep learning object detection model? In the case of VGG-16 (whether it's edge detection, contour detection, etc.)

            2. Confirm my understanding: the model will classify based on the feature maps (the last output of the last CNN block). Right?

          • Avatar
            Jason Brownlee December 12, 2020 at 6:20 am #

            VGG-16 is not used for object detection; instead it is an image classification model.

            The model generally works by extracting features from the image – that we cannot interpret – that are then interpreted by the dense layers before a classification is made.
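            For a concrete picture of that second stage, the classifier part of VGG-16 can be listed straight from the pre-trained model; a short sketch that just prints the final layers:

            from tensorflow.keras.applications.vgg16 import VGG16

            # load the full model, including the classifier head
            model = VGG16()
            # the last layers are: block5_pool, flatten, fc1, fc2 and predictions;
            # the two Dense 'fc' layers interpret the flattened feature maps and the
            # final softmax layer maps them to the 1000 ImageNet classes
            for layer in model.layers[-5:]:
                print(layer.name, layer.output.shape)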

          • Avatar
            rico aditya December 13, 2020 at 6:54 pm #

            Ok Jason, its clear.

            and then.
            How do I enlarge the plot size to show the feature maps? I am using a Jupyter notebook.

          • Avatar
            Jason Brownlee December 14, 2020 at 6:16 am #

            I recommend not using a notebook:
            https://machinelearningmastery.com/faq/single-faq/why-dont-use-or-recommend-notebooks
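            That said, if a larger plot is still wanted, a minimal sketch (assuming Matplotlib) is to set the figure size before drawing the grid of maps:

            from matplotlib import pyplot

            # make the whole figure larger (width, height in inches) before plotting
            pyplot.figure(figsize=(16, 16))
            # ... then build the grid with pyplot.subplot()/pyplot.imshow() as in the tutorial ...
            pyplot.show()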

  53. Avatar
    Kerwin December 24, 2020 at 7:21 pm #

    Great article.

  54. Avatar
    KFa January 6, 2021 at 1:01 am #

    Hi,
    Thank you for a great tutorial.
    Do you know what it means if some of the filters in the deeper layers are empty?

    • Avatar
      Jason Brownlee January 6, 2021 at 6:29 am #

      Not really, it is hard to interpret the meaning of the filters based on their activations. We can only guess, or perhaps explore by disabling some during prediction and observing the effect.

  55. Avatar
    Atefeh January 31, 2021 at 4:46 pm #

    Hello Mr. Brownlee
    Thank you for your great posts.

    I want to extract features from some images of size 80*70 pixels with a CNN architecture.
    I also want a small feature vector.
    Previously I used code that uses VGG-16 for feature extraction, but I have problems with it because
    first, it uses 224*224 images as input, and
    second, the feature vector has 4096 elements.
    Would you please help me and point me to code where a simple CNN architecture is used for feature extraction?

    thank you

    • Avatar
      Jason Brownlee February 1, 2021 at 6:24 am #

      Perhaps you can add a global pooling layer or two to the end of the model to reduce the dimensionality of the output vector.

      Or, perhaps you can use PCA or SVD to reduce the encoded vectors?

      Or, perhaps you can add a new, smaller layer to the end of the model and re-fit the model?

      Or, perhaps you can use an alternate model with a smaller encoded vector?

      I hope that gives you some ideas.
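      A minimal sketch of the first idea (global pooling on top of VGG-16 without the classifier head; the 80x70 input size is taken from the question above, and is an assumption):

      from tensorflow.keras.applications.vgg16 import VGG16

      # VGG-16 without the fully connected head, with global average pooling so the
      # encoded feature vector has length 512 instead of 4096
      model = VGG16(include_top=False, pooling='avg', input_shape=(80, 70, 3))
      print(model.output_shape)  # (None, 512)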

  56. Avatar
    Oner February 20, 2021 at 9:20 am #

    Great article.
    Sir, I would like to ask how I can try a custom filter in Conv2D? I created a 3×3 filter, but when I tried to use it I got this error message: ValueError: The initial value's shape ((3, 3)) is not compatible with the explicitly supplied shape argument ((3, 3, 3, 16)).

    Thanks in advance.
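    If the error comes from supplying a plain 3×3 array where Keras expects a full kernel tensor, one possible fix (a sketch, under that assumption) is to expand the filter to the Conv2D weight shape of (rows, cols, input_channels, n_filters) and set it after the layer is built:

    import numpy as np
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Conv2D

    # a hand-crafted 3x3 filter, e.g. a vertical line detector
    custom = np.array([[-1, 0, 1],
                       [-1, 0, 1],
                       [-1, 0, 1]], dtype='float32')

    model = Sequential()
    model.add(Conv2D(16, (3, 3), input_shape=(64, 64, 3), use_bias=False))

    # tile the single 3x3 filter across 3 input channels and 16 output filters
    weights = np.tile(custom[:, :, np.newaxis, np.newaxis], (1, 1, 3, 16))
    model.layers[0].set_weights([weights])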

  57. Avatar
    Shobi March 23, 2021 at 10:21 am #

    Hi Jason,

    Thank you so much for your very important article. Would it be ok to use preprocess_input as preprocessing before training the model?

    Thank you!
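    Generally, when building on a pre-trained model, the same preprocessing used at prediction time is also applied to the training images so they match what the pre-trained weights expect. A minimal sketch (the image array here is hypothetical):

    from numpy import random
    from tensorflow.keras.applications.vgg16 import preprocess_input

    # a hypothetical batch of training images in the usual 0-255 RGB range
    X_train = random.randint(0, 256, size=(8, 224, 224, 3)).astype('float32')
    # apply the same preprocessing the pre-trained VGG-16 weights expect
    X_train = preprocess_input(X_train)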

  58. Avatar
    Sanwal Hayat April 5, 2021 at 6:22 pm #

    When I run the following code:

    # redefine model to output right after the first hidden layer
    model = Model(inputs=model.inputs, outputs=model.layers[1].output)
    ---------------------------------------------------------------------------
    I get the error: TypeError: call() got an unexpected keyword argument 'outputs'.
    Kindly guide me on how to solve it.

    TypeError Traceback (most recent call last)

    in ()
    1 # redefine model to output right after the first hidden layer
    ----> 2 model = Model(inputs=model.inputs, outputs=model.layers[1].output)

    3 frames

    /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/base_layer.py in _infer_output_signature(self, inputs, args, kwargs, input_masks)
    861 # TODO(kaftan): do we maybe_build here, or have we already done it?
    862 self._maybe_build(inputs)
    --> 863 outputs = call_fn(inputs, *args, **kwargs)
    864
    865 self._handle_activity_regularization(inputs, outputs)

    TypeError: call() got an unexpected keyword argument 'outputs'

  59. Avatar
    Aria April 13, 2021 at 6:04 pm #

    Hi Jason,

    I really like your posts and have been following them closely. However, I was curious if you had a post about how to visualize filters and feature maps in 1D CNNs, especially for EEG. Is this possible and can we get something out of it by looking at the 1D filters? I want to see what kind of preprocessing my filters are doing etc.

    Thanks in advance.

    • Avatar
      Jason Brownlee April 14, 2021 at 6:23 am #

      Thanks.

      Sorry, I don’t have an example of visualizing maps for 1d CNNs. Perhaps you can adapt the above examples.
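      One way such an adaptation might look (a minimal sketch; the model here is an untrained, hypothetical stand-in for an EEG model, so its kernels are just random): Conv1D kernels have shape (kernel_size, input_channels, n_filters) and can be plotted as curves rather than images.

      from matplotlib import pyplot
      from tensorflow.keras.models import Sequential
      from tensorflow.keras.layers import Conv1D

      # hypothetical 1D model: one input channel, 256 time steps
      model = Sequential()
      model.add(Conv1D(8, 11, input_shape=(256, 1)))

      filters, biases = model.layers[0].get_weights()
      for i in range(filters.shape[-1]):
          pyplot.subplot(2, 4, i + 1)
          pyplot.plot(filters[:, 0, i])  # plot each kernel as a curve
      pyplot.show()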

  60. Avatar
    Loosgagnet April 25, 2021 at 5:26 am #

    Hi,
    To my knowledge, in deep models, higher-level features are derived from lower-level features to form a hierarchical representation. Why is it the inverse in your example (the bird)? The feature maps extracted from Block 1 (the shape of the bird) should belong to a deeper convolutional layer. Am I right?
    In deep learning, convolutional layers are exceptionally good at finding good features in images and passing them to the next layer to form a hierarchy of nonlinear features that grow in complexity (e.g. blobs, edges -> noses, eyes, cheeks -> faces).
    Could you please explain it more?
    Thanks.

    • Avatar
      Jason Brownlee April 26, 2021 at 5:32 am #

      Yes, that is what we are seeing here. Although we see the effect across the whole image.

      We lose detail as we go deeper given pooling layers.

  61. Avatar
    Abhishek Maheshwari September 21, 2021 at 2:30 am #

    Thank you so much for the code. It is really helpful!
    In your last code example, I am getting this error:

    ValueError: Input 0 of layer conv2d_1 is incompatible with the layer: expected axis -1 of input shape to have value 1 but received input with shape (None, 200, 200, 3)

    Could you please suggest a fix?
    Thanks again.
    Regards,

    • Adrian Tam
      Adrian Tam September 21, 2021 at 9:38 am #

      I tried to run the code but didn't see the error. It is strange to me too, because what the error message essentially means is that the input to the VGG16 model is expected to be a grayscale image (which it should not be!) while a color image is being provided.

  62. Avatar
    T. Edwald October 8, 2021 at 9:12 pm #

    Absolutely, wonderfully clear and incredibly helpful. Thank you. Very much.

    I’d order your book right now, except I hesitate as I note this posting was written in July 2019,
    and would ask whether your “new book” (as of July 2019) is still up-to-date, and/or whether there are updates or revisions?
    This business moves very fast, and books quickly become obsolete, as you know.
    (This may have been one reason why you publish it as an eBook.)
    That said, as I am working with ResNet50 implemented in Keras, your text from 2019 is just as relevant now as it was then.
    I can almost self-recommend to simply buy the book and see; at this price, what can I lose?
    But I’d still appreciate your opinion on this, if you are still following the thread.
    Many thanks for your great explanations above.

    • Adrian Tam
      Adrian Tam October 13, 2021 at 5:30 am #

      Thanks. The code on this blog (as well as in the books) is updated whenever it is found to be obsolete. But please give us some time, as there is a lot out there.

  63. Avatar
    T. Edwald October 8, 2021 at 10:58 pm #

    (I just bought the book anyway. The above example code was easily worth it. Thanks.)

  64. Avatar
    Kalpesh Patil February 15, 2022 at 1:59 am #

    Very wonderful explanation of the feature maps.
    Perhaps I have a different question in the same context. How should one understand the activation maps from a tanh activation function? Which values will be ignored and which will be important?

    Also, how can one decipher meaning from GradCAM-based heatmaps with tanh?

    For example, in the case of ReLU, blue is ignored and orange is important. But how is meaning conveyed in the case of tanh GradCAM heatmaps?

    Thanks for reading!

    Can you do a tutorial and explanation for GradCAM-based or other important heatmap methods?

  65. Avatar
    Deniss May 26, 2022 at 9:11 pm #

    Doesn't this solution break the original model? Doesn't the new model skip all the pooling layers (MaxPooling2D)?

    • Avatar
      James Carmichael May 27, 2022 at 9:27 am #

      Hi Deniss…Please clarify or rephrase your question so that we may better assist you.

      • Avatar
        Deniss May 27, 2022 at 4:16 pm #

        I mean, doesn't the solution in this article skip all the MaxPooling2D layers?
        It looks like the author is only executing the convolutional layers.
        Maybe I have misunderstood the logic behind that solution.

  66. Avatar
    vikas June 12, 2022 at 1:37 am #

    Sorry, I just got this error.
    Can anybody help?

  67. Avatar
    vikas June 12, 2022 at 1:39 am #

    pyplot.imshow(fmap[0,:,:,ix-1], cmap='gray')
    too many indices for array: array is 1-dimensional, but 4 were indexed

    • Avatar
      James Carmichael June 12, 2022 at 9:29 am #

      Hi Vikas…What code listing in particular are you attempting to execute?

  68. Avatar
    vikas June 12, 2022 at 1:49 am #

    resolved

    • Avatar
      James Carmichael June 12, 2022 at 9:25 am #

      Thank you for the feedback!

    • Avatar
      fff September 19, 2022 at 12:26 am #

      How did you resolve it? I have the same error. Thank you.

  69. Avatar
    Pumbles August 21, 2022 at 9:16 am #

    Thank you for the helpful tutorial!

    I understand why all the filters cannot be viewed simultaneously, but I was wondering how I would be able to visualise the last few filters in the model, as the script you made iterates through the first few.

    Thanks 🙂
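    A minimal sketch of one way to do this (indexing the filters from the end of the weight tensor; the layer index and filter count here are arbitrary choices, not part of the tutorial):

    from matplotlib import pyplot
    from tensorflow.keras.applications.vgg16 import VGG16

    model = VGG16()
    # weights of the first convolutional layer, shape (3, 3, 3, 64)
    filters, biases = model.layers[1].get_weights()
    # normalize to 0-1 for plotting
    filters = (filters - filters.min()) / (filters.max() - filters.min())

    n_filters = 6
    ix = 1
    for i in range(n_filters):
        f = filters[:, :, :, -(i + 1)]  # count back from the last filter
        for j in range(3):
            ax = pyplot.subplot(n_filters, 3, ix)
            ax.set_xticks([])
            ax.set_yticks([])
            pyplot.imshow(f[:, :, j], cmap='gray')
            ix += 1
    pyplot.show()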

  70. Avatar
    Pumbles August 21, 2022 at 9:28 am #

    Nevermind, I was able to figure it out 🙂

    • Avatar
      James Carmichael August 22, 2022 at 9:01 am #

      Keep up the great work Pumbles!

  71. Avatar
    Silvia September 19, 2022 at 12:31 am #

    Hello! I have this error
    pyplot.imshow(fmap[0,:,:,ix-1], cmap='gray')
    too many indices for array: array is 1-dimensional, but 4 were indexed
    How to resolve it? Thank you
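    One way to debug this kind of error is to inspect what predict() actually returns before indexing it; a minimal sketch, assuming model and img are defined as in the tutorial's example:

    # check what predict() actually returns before indexing it
    feature_maps = model.predict(img)
    if isinstance(feature_maps, list):
        # a model with several outputs returns one array per output
        for fmap in feature_maps:
            print(fmap.shape)
    else:
        # a convolutional layer output should be 4-dimensional: (1, rows, cols, channels)
        print(feature_maps.shape)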

  72. Avatar
    Maaz Jamshaid March 27, 2023 at 6:59 am #

    Hi, great tutorial. I just want to ask: if I want to display one feature map of the image, like just one figure, is it possible? Or do I have to first display all of them, choose the one that highlights the feature best, and then display it separately?
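    A minimal sketch of drawing a single map (model and img are assumed to be defined as in the tutorial; the channel index 16 is an arbitrary example):

    from matplotlib import pyplot

    feature_maps = model.predict(img)
    pyplot.imshow(feature_maps[0, :, :, 16], cmap='gray')
    pyplot.axis('off')
    pyplot.show()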

  73. Avatar
    curious May 31, 2023 at 10:48 pm #

    Hi, great tutorial. I am a novice, so please don't mind if the question sounds stupid… I just want to ask: we are visualizing the filters in grayscale, but can we also visualize them in specific R, G, B channels?

  74. Avatar
    curious June 2, 2023 at 3:52 pm #

    Thank you for the link… I had already gone through this article… but I just wanted to clarify: in your code, where you have used cmap='gray', can we use the RGB channels here instead, like cmap='red'?
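    For what it's worth, Matplotlib has no cmap='red', but it does provide 'Reds', 'Greens' and 'Blues' colormaps, so each channel of a filter can be drawn in its own colour. A minimal sketch, assuming f is a single normalized 3-channel filter as in the tutorial:

    from matplotlib import pyplot

    for j, cmap in enumerate(['Reds', 'Greens', 'Blues']):
        pyplot.subplot(1, 3, j + 1)
        pyplot.imshow(f[:, :, j], cmap=cmap)
        pyplot.axis('off')
    pyplot.show()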

  75. Avatar
    Faezeh July 31, 2023 at 9:39 pm #

    Hi,
    You used channels_last to create your own CNN:

    Layer (type) Output Shape Param #
    =================================================================
    input_1 (InputLayer) (None, 224, 224, 3) 0
    _________________________________________________________________
    block1_conv1 (Conv2D) (None, 224, 224, 64) 1792

    But you feed this network with an image whose data_format is channels_first:

    # expand dimensions so that it represents a single ‘sample’
    img = expand_dims(img, axis=0)

    Why did you do this?
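    For reference, a short check (not from the article) showing that expand_dims only adds a batch dimension and leaves the image in channels_last format:

    from numpy import expand_dims, zeros

    img = zeros((224, 224, 3))        # (height, width, channels) -- channels_last
    img = expand_dims(img, axis=0)    # only adds a 'samples' axis at the front
    print(img.shape)                  # (1, 224, 224, 3) -- still channels_last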

  76. Avatar
    Jessica November 3, 2023 at 9:48 pm #

    Hi,
    I tried the visualization with my neural network and it works, but some of the feature maps are completely black…

    Do you know what that means, if a feature map is black?

    Thanks 🙂

  77. Avatar
    Jessica November 3, 2023 at 10:31 pm #

    Hi,

    I have tried the visualization with my neural network and it works, but some of the feature maps are completely black…

    Do you know what it means that some maps are black?

    Thanks 🙂

    • Avatar
      James Carmichael November 4, 2023 at 8:06 am #

      Hi Jessica…What IDE are you using (Anaconda, Spyder, Google Colab…?)

  78. Avatar
    Jessica November 6, 2023 at 7:31 pm #

    I am using PyCharm
