Using Dropout Regularization in PyTorch Models

Dropout is a simple and powerful regularization technique for neural networks and deep learning models.

In this post, you will discover the Dropout regularization technique and how to apply it to your models in PyTorch.

After reading this post, you will know:

  • How the Dropout regularization technique works
  • How to use Dropout on your input layers
  • How to use Dropout on your hidden layers
  • How to tune the dropout level on your problem

Kick-start your project with my book Deep Learning with PyTorch. It provides self-study tutorials with working code.


Let’s get started.

Using Dropout Regularization in PyTorch Models
Photo by Priscilla Fraire. Some rights reserved.

Overview

This post is divided into six parts; they are:

  • Dropout Regularization for Neural Networks
  • Dropout Regularization in PyTorch
  • Using Dropout on the Input Layer
  • Using Dropout on the Hidden Layers
  • Dropout in Evaluation Mode
  • Tips for Using Dropout

Dropout Regularization for Neural Networks

Dropout is a regularization technique for neural network models proposed in papers around 2012 to 2014. It is implemented as a layer in the neural network. During training, the layer takes the output from the previous layer, randomly selects some of the neurons, and zeroes them out before passing the result to the next layer, effectively ignoring those neurons. This means their contribution to the activation of downstream neurons is temporarily removed on the forward pass, and no weight updates are applied to them on the backward pass.

When the model is used for inference, dropout is no longer applied. In the original formulation, the activations are instead scaled by a constant factor at inference to compensate for the neurons dropped during training; PyTorch, as described below, performs an equivalent scaling during training, so its dropout layer does nothing at inference.

Dropout is destructive but surprisingly can improve the model’s accuracy. As a neural network learns, neuron weights settle into their context within the network. Weights of neurons are tuned for specific features, providing some specialization. Neighboring neurons come to rely on this specialization, which, if taken too far, can result in a fragile model too specialized for the training data. This reliance on context for a neuron during training is referred to as complex co-adaptations.

You can imagine that if neurons are randomly dropped out of the network during training, other neurons will have to step in and handle the representation required to make predictions for the missing neurons. This is believed to result in multiple independent internal representations being learned by the network.

The effect is that the network becomes less sensitive to the specific weights of neurons. This, in turn, results in a network capable of better generalization and less likely to overfit the training data.

Want to Get Started With Deep Learning with PyTorch?

Take my free email crash course now (with sample code).

Click to sign-up and also get a free PDF Ebook version of the course.

Dropout Regularization in PyTorch

You do not need to randomly select elements from a PyTorch tensor to implement dropout manually. The nn.Dropout() layer from PyTorch can be introduced into your model. It randomly zeroes elements of its input with a given probability $p$ (e.g., 20%) during the training loop. The PyTorch dropout layer further scales the resulting tensor by a factor of $\dfrac{1}{1-p}$ so that the average tensor value is maintained. Thanks to this scaling, the dropout layer at inference is an identity function (i.e., it has no effect and simply copies the input tensor to the output). You should make sure to switch the model into inference mode when evaluating it.
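You can see both behaviors with a small standalone example (the probability of 0.2 and the tensor size here are arbitrary choices for illustration):

import torch
import torch.nn as nn

dropout = nn.Dropout(p=0.2)
x = torch.ones(8)

dropout.train()    # training mode: each element is zeroed with probability 0.2
print(dropout(x))  # survivors are scaled by 1/(1-0.2) = 1.25, e.g. tensor([1.25, 0.00, 1.25, ...])

dropout.eval()     # evaluation mode: the layer is an identity function
print(dropout(x))  # tensor([1., 1., 1., 1., 1., 1., 1., 1.])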

Let’s see how to use nn.Dropout() in a PyTorch model.

The examples will use the Sonar dataset. This is a binary classification problem that aims to distinguish rocks from mines (metal cylinders) based on sonar chirp returns. It is a good test dataset for neural networks because all the input values are numerical and have the same scale.

The dataset can be downloaded from the UCI Machine Learning repository. You can place the sonar dataset in your current working directory with the file name sonar.csv.

You will evaluate the developed models using scikit-learn with 10-fold cross validation in order to tease out differences in the results better.

There are 60 input values and a single output value. The input values are standardized before being used in the network. The baseline neural network model has two hidden layers, the first with 60 units and the second with 30. Stochastic gradient descent is used to train the model with a relatively low learning rate and momentum.

A baseline model along these lines is shown below.
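This is a minimal sketch of such a baseline, assuming sonar.csv holds 60 numeric feature columns followed by a label column of "R"/"M" values. The epoch count, learning rate, and momentum are illustrative choices and will not necessarily reproduce the exact figure quoted below.

import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import torch.optim as optim
from sklearn.model_selection import StratifiedKFold
from sklearn.preprocessing import LabelEncoder, StandardScaler

# Load the Sonar dataset: 60 numeric features plus a label column ("R" or "M")
data = pd.read_csv("sonar.csv", header=None)
X = data.iloc[:, 0:60].values.astype(np.float32)
y = LabelEncoder().fit_transform(data.iloc[:, 60])  # "M"/"R" -> 0/1

def make_model():
    # Baseline network: two hidden layers with 60 and 30 units
    return nn.Sequential(
        nn.Linear(60, 60), nn.ReLU(),
        nn.Linear(60, 30), nn.ReLU(),
        nn.Linear(30, 1), nn.Sigmoid(),
    )

def train_and_score(model, X_train, y_train, X_test, y_test, n_epochs=300):
    loss_fn = nn.BCELoss()
    # Stochastic gradient descent with a relatively low learning rate and momentum
    optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    X_train = torch.tensor(X_train)
    y_train = torch.tensor(y_train, dtype=torch.float32).reshape(-1, 1)
    X_test = torch.tensor(X_test)
    y_test = torch.tensor(y_test, dtype=torch.float32).reshape(-1, 1)
    model.train()                       # enable dropout (if any) during training
    for _ in range(n_epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(X_train), y_train)
        loss.backward()
        optimizer.step()
    model.eval()                        # disable dropout for evaluation
    with torch.no_grad():
        acc = ((model(X_test) > 0.5).float() == y_test).float().mean().item()
    return acc

# 10-fold cross validation, standardizing the inputs within each fold
kfold = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
scores = []
for train_idx, test_idx in kfold.split(X, y):
    scaler = StandardScaler().fit(X[train_idx])
    X_train = scaler.transform(X[train_idx]).astype(np.float32)
    X_test = scaler.transform(X[test_idx]).astype(np.float32)
    scores.append(train_and_score(make_model(), X_train, y[train_idx], X_test, y[test_idx]))
print("Baseline accuracy: %.2f%% (+/- %.2f%%)" % (np.mean(scores) * 100, np.std(scores) * 100))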

Running the example generates an estimated classification accuracy of 82%.

Using Dropout on the Input Layer

Dropout can be applied to the input neurons, also called the visible layer.

In the example below, a new Dropout layer is added between the input and the first hidden layer. The dropout rate is set to 20%, meaning one in five inputs will be randomly excluded from each update cycle.

Continuing from the baseline example above, the code below exercises the same network with input dropout:
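As a sketch, this reuses the data loading, training, and cross-validation code from the baseline sketch above and only changes the model definition:

import torch.nn as nn

def make_model():
    # Dropout on the visible (input) layer: each of the 60 inputs is zeroed
    # with probability 0.2 on every training update
    return nn.Sequential(
        nn.Dropout(0.2),
        nn.Linear(60, 60), nn.ReLU(),
        nn.Linear(60, 30), nn.ReLU(),
        nn.Linear(30, 1), nn.Sigmoid(),
    )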

Running the example provides a slight drop in classification accuracy, at least on a single test run.

Using Dropout on Hidden Layers

Dropout can be applied to hidden neurons in the body of your network model. This is more common.

In the example below, Dropout is applied between the two hidden layers and between the last hidden layer and the output layer. Again a dropout rate of 20% is used:
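A corresponding sketch of the model definition, again reusing the rest of the baseline code:

import torch.nn as nn

def make_model():
    # Dropout between the two hidden layers and between the last hidden layer
    # and the output layer, at a rate of 20%
    return nn.Sequential(
        nn.Linear(60, 60), nn.ReLU(),
        nn.Dropout(0.2),
        nn.Linear(60, 30), nn.ReLU(),
        nn.Dropout(0.2),
        nn.Linear(30, 1), nn.Sigmoid(),
    )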

You can see that in this case, adding the dropout layers improved the accuracy a bit.

Dropout in Evaluation Mode

Dropout randomly resets some of the inputs to zero. If you wonder what happens after you have finished training, the answer is nothing: the PyTorch dropout layer runs as an identity function when the model is in evaluation mode. That’s why you call model.eval() before you evaluate the model. This matters because the goal of the dropout layer during training is to force the network to learn enough independent clues about the input for the prediction, rather than depending on a rare phenomenon in the data. At inference, however, you want to provide the model with as much information as possible.
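A small standalone check of this behavior (the layer sizes and dropout rate here are arbitrary):

import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Dropout(0.5), nn.Linear(8, 1))
x = torch.randn(1, 4)

model.train()                  # dropout active: repeated calls give different outputs
print(model(x), model(x))

model.eval()                   # dropout is an identity: repeated calls give identical outputs
print(model(x), model(x))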

Tips for Using Dropout

The original paper on Dropout provides experimental results on a suite of standard machine learning problems. Based on those experiments, the authors provide a number of useful heuristics to consider when using Dropout in practice.

  • Generally, use a small dropout value of 20%-50% of neurons, with 20% providing a good starting point. A probability too low has minimal effect, and a value too high results in under-learning by the network.
  • Use a larger network. You are likely to get better performance when Dropout is used on a larger network, giving the model more of an opportunity to learn independent representations.
  • Use Dropout on incoming (visible) as well as hidden units. Application of Dropout at each layer of the network has shown good results.
  • Use a large learning rate with decay and a large momentum. Increase your learning rate by a factor of 10 to 100 and use a high momentum value of 0.9 or 0.99.
  • Constrain the size of network weights. A large learning rate can result in very large network weights. Imposing a constraint on the size of network weights, such as max-norm regularization with a size of 4 or 5, has been shown to improve results; a sketch of such a constraint follows this list.
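As a sketch of the last tip, assuming a model built from nn.Linear layers, a hypothetical helper like the following rescales any row of incoming weights whose L2 norm exceeds the chosen limit. It can be called right after optimizer.step() in the training loop, e.g. apply_max_norm(model).

import torch
import torch.nn as nn

def apply_max_norm(model, max_norm=4.0):
    # Hypothetical helper: clip the L2 norm of each unit's incoming weight vector
    with torch.no_grad():
        for module in model.modules():
            if isinstance(module, nn.Linear):
                norms = module.weight.norm(p=2, dim=1, keepdim=True)
                module.weight.mul_(norms.clamp(max=max_norm) / (norms + 1e-8))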

Further Readings

Below are resources you can use to learn more about Dropout in neural networks and deep learning models.

Papers

  • Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. “Dropout: A Simple Way to Prevent Neural Networks from Overfitting.” Journal of Machine Learning Research, 15:1929–1958, 2014.
  • Geoffrey Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. “Improving Neural Networks by Preventing Co-adaptation of Feature Detectors.” arXiv:1207.0580, 2012.

Online materials

  • nn.Dropout in the PyTorch documentation: https://pytorch.org/docs/stable/generated/torch.nn.Dropout.html

Summary

In this post, you discovered the Dropout regularization technique for deep learning models. You learned:

  • What Dropout is and how it works
  • How you can use Dropout on your own deep learning models.
  • Tips for getting the best results from Dropout on your own models.

Get Started on Deep Learning with PyTorch!

Deep Learning with PyTorch

Learn how to build deep learning models

...using the newly released PyTorch 2.0 library

Discover how in my new Ebook:
Deep Learning with PyTorch

It provides self-study tutorials with hundreds of working code examples to turn you from a novice into an expert. It equips you with
tensor operations, training, evaluation, hyperparameter optimization, and much more...

Kick-start your deep learning journey with hands-on exercises


See What's Inside

4 Responses to Using Dropout Regularization in PyTorch Models

  1. Ante, February 21, 2023 at 4:44 pm

    Thanks, great tutorial.

    I am curious how one can use dropout in the INFERENCE stage. Any idea?
    The reason for this dropout would be to effectively train only a SINGLE model, but at the same time you wish to have an ENSEMBLE of models (by having different neurons dropped) and use their predictions to estimate the uncertainty of the model predictions.

    • Adrian Tam, March 15, 2023 at 5:44 am

      It is not usually done, but if you insist, PyTorch has model.eval() and model.train() to switch between training and inference modes. For many layers they behave the same, but a dropout layer will toggle between random dropping and no dropping.

  2. Ganesh, March 8, 2023 at 6:38 pm

    Thanks, a great article. Can you please share how StratifiedKFold would be run if we use the DataLoader and Dataset classes?
