Using Depthwise Separable Convolutions in TensorFlow

Looking at all of the very large convolutional neural networks such as ResNets, VGGs, and the like, it begs the question: how can we make these networks smaller, with fewer parameters, while maintaining the same level of accuracy or even improving the model's generalization? One approach is depthwise separable convolutions, also known as separable convolutions in TensorFlow and PyTorch (not to be confused with spatially separable convolutions, which are sometimes also called separable convolutions). Depthwise separable convolutions were introduced by Sifre in “Rigid-motion scattering for image classification” and have been adopted by popular model architectures such as MobileNet, with a similar version used in Xception. They split the channel-wise and spatial convolutions that are usually combined together in normal convolutional layers.

In this tutorial, we’ll be looking at what depthwise separable convolutions are and how we can use them to speed up our convolutional neural network image models.

After completing this tutorial, you will learn:

  • What is a depthwise, pointwise, and depthwise separable convolution
  • How to implement depthwise separable convolutions in TensorFlow
  • Using them as part of our computer vision models

Let’s get started!

Photo by Arisa Chattasa. Some rights reserved.

Overview

This tutorial is split into 3 parts:

  • What is a depthwise separable convolution
  • Why are they useful
  • Using depthwise separable convolutions in computer vision models

What is a Depthwise Separable Convolution

Before diving into depthwise and depthwise separable convolutions, it might be helpful to have a quick recap on convolutions. A convolution in image processing is the process of applying a kernel over a volume, where we take a weighted sum of the input pixels, with the weights given by the values of the kernel. Visually, it looks as follows:

Applying a 3x3 kernel on a 10x10x3 volume outputs an 8x8x1 volume

Now, let's introduce a depthwise convolution. A depthwise convolution is essentially a convolution applied to only one channel of the input at a time, rather than across all channels at once. Visually, this is what a single depthwise convolutional filter would look like and do:

Applying a depthwise 3x3 kernel on the green channel in this example

The key difference between a normal convolutional layer and a depthwise convolution is that each depthwise filter operates on a single input channel, while each filter in a normal convolution spans all input channels at every step.

If we look at what an entire depthwise layer does on all RGB channels,

Applying a depthwise convolutional layer on a 10x10x3 input volume outputs an 8x8x3 volume

Notice that since we apply one convolutional filter per input channel, the number of output channels equals the number of input channels. After applying this depthwise convolutional layer, we then apply a pointwise convolutional layer.
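As a quick sanity check, we can reproduce this behavior with Keras' DepthwiseConv2D layer; a minimal sketch, assuming a random 10x10x3 tensor standing in for the image above:

```python
import tensorflow as tf

# A 10x10 RGB input, as in the figure above (batch dimension of 1)
x = tf.random.normal((1, 10, 10, 3))

# Depthwise convolution: one 3x3 filter per input channel,
# so the number of output channels equals the number of input channels
depthwise = tf.keras.layers.DepthwiseConv2D(kernel_size=3)
print(depthwise(x).shape)  # (1, 8, 8, 3)
```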

Simply put, a pointwise convolutional layer is a regular convolutional layer with a 1x1 kernel (hence looking at a single point across all the channels). Visually, it looks like this:

Applying a pointwise convolution on a 10x10x3 input volume outputs a 10x10x1 output volume
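There is no dedicated pointwise layer in Keras; a regular Conv2D with a 1x1 kernel does the job. A minimal sketch, continuing with the x tensor from the snippet above:

```python
# Pointwise convolution: a regular convolution with a 1x1 kernel,
# mixing information across channels at each spatial position
pointwise = tf.keras.layers.Conv2D(filters=1, kernel_size=1)
print(pointwise(x).shape)  # (1, 10, 10, 1)
```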

Why are Depthwise Separable Convolutions Useful?

Now, you might be wondering, what's the use of doing two operations in depthwise separable convolutions? Given that this article is about speeding up computer vision models, how does doing two operations instead of one help?

To answer that question, let's look at the number of parameters in the model (there would be some additional overhead from doing two convolutions instead of one, though). Say we want to apply 64 convolutional filters with 3x3 kernels to our RGB image to get 64 channels in the output. The number of parameters in the normal convolutional layer (including bias terms) is $3 \times 3 \times 3 \times 64 + 64 = 1792$. On the other hand, a depthwise separable convolutional layer would only have $(3 \times 3 \times 1 \times 3 + 3) + (1 \times 1 \times 3 \times 64 + 64) = 30 + 256 = 286$ parameters, a significant reduction: the depthwise separable convolution has fewer than a sixth of the parameters of the normal convolution.

This reduces both the number of computations and the number of parameters, which shortens training/inference time and can help regularize our model, respectively.

Let's see this in action. For our inputs, we'll use the CIFAR-10 image dataset of 32x32x3 images:
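A minimal sketch of loading and scaling the data with keras.datasets (the normalization choice is ours):

```python
import tensorflow as tf
from tensorflow.keras.datasets import cifar10

# Load CIFAR-10: 50,000 training and 10,000 test images of shape 32x32x3
(trainX, trainY), (testX, testY) = cifar10.load_data()

# Scale pixel values to [0, 1]
trainX = trainX.astype("float32") / 255.0
testX = testX.astype("float32") / 255.0
```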

Then, we implement a depthwise separable convolutional layer ourselves. TensorFlow ships its own implementation, which we'll use in the final example.
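A minimal sketch of such a layer, composing Keras' DepthwiseConv2D with a 1x1 Conv2D (the class name DepthwiseSeparableConv2D here is our own, not a TensorFlow API):

```python
class DepthwiseSeparableConv2D(tf.keras.layers.Layer):
    def __init__(self, filters, kernel_size, **kwargs):
        super().__init__(**kwargs)
        # Depthwise step: one spatial filter per input channel
        self.depthwise = tf.keras.layers.DepthwiseConv2D(kernel_size=kernel_size)
        # Pointwise step: 1x1 convolution to mix channels and set the output depth
        self.pointwise = tf.keras.layers.Conv2D(filters=filters, kernel_size=1)

    def call(self, inputs):
        return self.pointwise(self.depthwise(inputs))
```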

Constructing a model with our depthwise separable convolutional layer and looking at the number of parameters:
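```python
visible = tf.keras.layers.Input(shape=(32, 32, 3))
x = DepthwiseSeparableConv2D(filters=64, kernel_size=3)(visible)
model = tf.keras.Model(inputs=visible, outputs=x)
model.summary()
```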

The summary should report 286 parameters for the separable layer (30 depthwise + 256 pointwise), matching our calculation above.

We can compare this with a similar model using a regular 2D convolutional layer:
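```python
visible = tf.keras.layers.Input(shape=(32, 32, 3))
x = tf.keras.layers.Conv2D(filters=64, kernel_size=3)(visible)
model = tf.keras.Model(inputs=visible, outputs=x)
model.summary()
```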

Here the summary should report 1,792 parameters for the convolutional layer.

This matches our initial calculations and shows the reduction in the number of parameters that can be achieved by using depthwise separable convolutions.

More specifically, let's look at the number and size of kernels in a normal convolutional layer and a depthwise separable one. A regular 2D convolutional layer with $c$ input channels, a $w \times h$ kernel, and $n$ output channels has a weight tensor of shape $(n, w, h, c)$, that is, $n$ filters, each with a kernel of size $(w, h, c)$. A comparable depthwise separable convolution with the same number of input channels, kernel size, and output channels looks different. First, the depthwise convolution involves $c$ filters, each with a kernel of size $(w, h, 1)$, which outputs $c$ channels since each filter acts on a single input channel. This depthwise convolutional layer has a weight tensor of shape $(c, w, h, 1)$ (plus some bias units). Then comes the pointwise convolution, which takes in the $c$ channels from the depthwise layer and outputs $n$ channels, so we have $n$ filters, each with a kernel of size $(1, 1, c)$. This pointwise convolutional layer has a weight tensor of shape $(n, 1, 1, c)$ (plus some bias units).
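To make this concrete, ignoring bias terms, the ratio of parameters in a depthwise separable convolution to those in a regular convolution works out to

$$\frac{c \cdot w \cdot h + c \cdot n}{n \cdot w \cdot h \cdot c} = \frac{1}{n} + \frac{1}{w h}$$

For a 3x3 kernel producing 64 output channels, that is $\frac{1}{64} + \frac{1}{9} \approx 0.13$, in line with the 286 vs. 1792 comparison above (the small discrepancy comes from the bias terms).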

You might be thinking right now: why do they work?

One way of thinking about it, from the Xception paper by Chollet, is that depthwise separable convolutions rest on the assumption that cross-channel and spatial correlations can be mapped separately. Under this assumption, a regular convolutional layer carries a bunch of redundant weights, which we can remove by separating the convolution into its depthwise and pointwise components. For those familiar with linear algebra, this is analogous to how a matrix whose columns are multiples of each other (a rank-1 matrix) can be decomposed into the outer product of two vectors.

Using Depthwise Separable Convolutions in Computer Vision Models

Now that we've seen the reduction in parameters that a depthwise separable convolution achieves over a normal convolutional layer, let's see how we can use it in practice with TensorFlow's SeparableConv2D layer.

For this example, we will use the CIFAR-10 image dataset from the example above, while for the model we will use one built from VGG blocks. The potential of depthwise separable convolutions lies in deeper models, where the regularization effect benefits the model more and the reduction in parameters is more pronounced, as opposed to a lighter-weight model such as LeNet-5.

Creating our model from VGG blocks with normal convolutional layers:
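A sketch of one way to build it; the block widths (64, 128, 256) and depth are our own choices, picked so the parameter count lands near the ~1.2m figure discussed below:

```python
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense
from tensorflow.keras.models import Model

def vgg_block(x, filters, layers):
    # A VGG block: `layers` 3x3 convolutions followed by 2x2 max pooling
    for _ in range(layers):
        x = Conv2D(filters, kernel_size=3, padding="same", activation="relu")(x)
    return MaxPooling2D(pool_size=2)(x)

visible = Input(shape=(32, 32, 3))
x = vgg_block(visible, 64, 2)
x = vgg_block(x, 128, 2)
x = vgg_block(x, 256, 2)
x = Flatten()(x)
x = Dense(10, activation="softmax")(x)
model = Model(inputs=visible, outputs=x)
```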

Then we compile and train this 6-layer convolutional neural network with normal convolutional layers and look at its results:
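A sketch of the training setup; the optimizer, loss, and epoch count here are illustrative assumptions:

```python
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
history = model.fit(trainX, trainY, epochs=10,
                    validation_data=(testX, testY))
```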

Let’s try out the same architecture but replace the normal convolutional layers with Keras’ SeparableConv2D layers instead:
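The same sketch with Conv2D swapped for SeparableConv2D (again, the hyperparameters are illustrative):

```python
from tensorflow.keras.layers import SeparableConv2D

def vgg_block_separable(x, filters, layers):
    # The same VGG block, but built from depthwise separable convolutions
    for _ in range(layers):
        x = SeparableConv2D(filters, kernel_size=3, padding="same",
                            activation="relu")(x)
    return MaxPooling2D(pool_size=2)(x)

visible = Input(shape=(32, 32, 3))
x = vgg_block_separable(visible, 64, 2)
x = vgg_block_separable(x, 128, 2)
x = vgg_block_separable(x, 256, 2)
x = Flatten()(x)
x = Dense(10, activation="softmax")(x)
separable_model = Model(inputs=visible, outputs=x)

separable_model.compile(optimizer="adam",
                        loss="sparse_categorical_crossentropy",
                        metrics=["accuracy"])
separable_history = separable_model.fit(trainX, trainY, epochs=10,
                                        validation_data=(testX, testY))
```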

Running the above code, we can compare the parameter counts and per-epoch training times of the two models.

Notice that there are significantly fewer parameters in the depthwise separable convolution version (~200k vs. ~1.2m), along with a slightly lower training time per epoch. Depthwise separable convolutions are more likely to work well in deeper models that might face an overfitting problem, and on layers with larger kernels, since there the decrease in parameters and computations is greater and offsets the additional cost of doing two convolutions instead of one. Next, we plot the training and validation accuracy of the two models to see differences in their training performance:
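A minimal plotting sketch, assuming the history and separable_history objects from the training runs above:

```python
import matplotlib.pyplot as plt

# Training curves for the model with normal convolutional layers
plt.plot(history.history["accuracy"], label="train accuracy")
plt.plot(history.history["val_accuracy"], label="validation accuracy")
plt.title("Network with normal convolutional layers")
plt.xlabel("epoch")
plt.ylabel("accuracy")
plt.legend()
plt.show()

# Training curves for the depthwise separable model
plt.plot(separable_history.history["accuracy"], label="train accuracy")
plt.plot(separable_history.history["val_accuracy"], label="validation accuracy")
plt.title("Network with depthwise separable convolutional layers")
plt.xlabel("epoch")
plt.ylabel("accuracy")
plt.legend()
plt.show()
```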

 

Training and validation accuracy of network with normal convolutional layers

Training and validation accuracy of network with depthwise separable convolutional layers

The highest validation accuracy is similar for both models, but the depthwise separable convolution appears to overfit the training set less, which might help it generalize better to new data.

Combining all of the code for the depthwise separable convolutions version of the model:
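A self-contained sketch combining the pieces above; the architecture widths and training hyperparameters remain illustrative assumptions:

```python
import tensorflow as tf
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.layers import (Input, SeparableConv2D, MaxPooling2D,
                                     Flatten, Dense)
from tensorflow.keras.models import Model

# Load and scale CIFAR-10
(trainX, trainY), (testX, testY) = cifar10.load_data()
trainX = trainX.astype("float32") / 255.0
testX = testX.astype("float32") / 255.0

def vgg_block_separable(x, filters, layers):
    # A VGG block built from depthwise separable convolutions
    for _ in range(layers):
        x = SeparableConv2D(filters, kernel_size=3, padding="same",
                            activation="relu")(x)
    return MaxPooling2D(pool_size=2)(x)

visible = Input(shape=(32, 32, 3))
x = vgg_block_separable(visible, 64, 2)
x = vgg_block_separable(x, 128, 2)
x = vgg_block_separable(x, 256, 2)
x = Flatten()(x)
x = Dense(10, activation="softmax")(x)
model = Model(inputs=visible, outputs=x)

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
model.fit(trainX, trainY, epochs=10, validation_data=(testX, testY))
```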

Further Reading

This section provides more resources on the topic if you are looking to go deeper.

Papers:

  • Laurent Sifre. “Rigid-Motion Scattering for Image Classification,” 2014.
  • François Chollet. “Xception: Deep Learning with Depthwise Separable Convolutions,” 2017.
  • Andrew G. Howard et al. “MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications,” 2017.

APIs:

  • tf.keras.layers.SeparableConv2D
  • tf.keras.layers.DepthwiseConv2D

Summary

In this post, you've seen what depthwise, pointwise, and depthwise separable convolutions are. You've also seen how using depthwise separable convolutions lets us achieve competitive results with a significantly smaller number of parameters.

Specifically, you’ve learnt:

  • What is a depthwise, pointwise, and depthwise separable convolution
  • How to implement depthwise separable convolutions in TensorFlow
  • Using them as part of our computer vision models

 
