
Using Activation Functions in Neural Networks

Activation functions play an integral role in neural networks by introducing nonlinearity. This nonlinearity allows neural networks to develop complex representations and functions based on the inputs that would not be possible with a simple linear regression model.

Many different nonlinear activation functions have been proposed throughout the history of neural networks. In this post, you will explore three popular ones: sigmoid, tanh, and ReLU.

After reading this article, you will learn:

  • Why nonlinearity is important in a neural network
  • How different activation functions can contribute to the vanishing gradient problem
  • Sigmoid, tanh, and ReLU activation functions
  • How to use different activation functions in your TensorFlow model

Let’s get started.

Using activation functions in TensorFlow
Photo by Victor Freitas. Some rights reserved.

Overview

This article is split into five sections; they are:

  • Why do we need nonlinear activation functions
  • Sigmoid function and vanishing gradient
  • Hyperbolic tangent function
  • Rectified Linear Unit (ReLU)
  • Using the activation functions in practice

Why Do We Need Nonlinear Activation Functions

You might be wondering, why all this hype about nonlinear activation functions? Or why can’t we just use an identity function after the weighted linear combination of activations from the previous layer? Using multiple linear layers is basically the same as using a single linear layer. This can be seen through a simple example.

Let’s say you have a neural network with one hidden layer containing two neurons.

Single hidden layer neural network with linear layers

You can then rewrite the output layer as a linear combination of the original input variable if you used a linear hidden layer. If you had more neurons and weights, the equation would be a lot longer with more nesting and more multiplications between successive layer weights. However, the idea remains the same: You can represent the entire network as a single linear layer.
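
To see this concretely, here is a small derivation with generic weight matrices $W_1, W_2$ and bias vectors $b_1, b_2$ (symbols introduced here just for illustration). With an identity activation in the hidden layer, the output is

$$y = W_2(W_1 x + b_1) + b_2 = (W_2 W_1)\,x + (W_2 b_1 + b_2),$$

which is exactly one linear layer with weight matrix $W_2 W_1$ and bias $W_2 b_1 + b_2$.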

To make the network represent more complex functions, you would need nonlinear activation functions. Let’s start with a popular example, the sigmoid function.

Sigmoid Function and Vanishing Gradient

The sigmoid activation function is a popular choice of nonlinear activation function for neural networks. One reason it’s popular is that its output values lie between 0 and 1, which mimic probability values. Hence it is often used to convert the real-valued output of a linear layer into a probability. This also makes it an important part of logistic regression, which can be used directly for binary classification.

The sigmoid function is commonly represented by $\sigma$ and has the form $\sigma(x) = \frac{1}{1 + e^{-x}}$. In TensorFlow, you can call the sigmoid function from the Keras library as follows:
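
A minimal sketch follows; the input tensor is a hypothetical example chosen to span negative, zero, and positive values.

```python
import tensorflow as tf

# A hypothetical input tensor spanning negative, zero, and positive values
x = tf.constant([-2.0, -1.0, 0.0, 1.0, 2.0])

# Apply the sigmoid activation from Keras's activations module
y = tf.keras.activations.sigmoid(x)
print(y)
# Expected values (approximately): [0.119, 0.269, 0.5, 0.731, 0.881]
```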

This prints a tensor of values squashed into the open interval $(0, 1)$: large negative inputs map close to 0, an input of 0 maps to exactly 0.5, and large positive inputs approach 1.

You can also plot the sigmoid function as a function of $x$:
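
A plotting sketch, assuming matplotlib and NumPy are available, might look like this:

```python
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

# Evaluate the sigmoid over a range of inputs
x = np.linspace(-10, 10, 200)
y = tf.keras.activations.sigmoid(tf.constant(x)).numpy()

plt.plot(x, y)
plt.title("Sigmoid activation function")
plt.xlabel("x")
plt.ylabel("sigmoid(x)")
plt.show()
```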

Sigmoid activation function

When looking at the activation function of the neurons in a neural network, you should also consider its derivative, because backpropagation and the chain rule determine how the network learns from data.
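
One way to visualize the sigmoid together with its gradient is to let TensorFlow compute the derivative with tf.GradientTape; the following is just one possible sketch:

```python
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

x = tf.constant(np.linspace(-10, 10, 200))

# Record operations on x so the gradient can be computed
with tf.GradientTape() as tape:
    tape.watch(x)
    y = tf.keras.activations.sigmoid(x)
dy_dx = tape.gradient(y, x)

plt.plot(x.numpy(), y.numpy(), label="sigmoid")
plt.plot(x.numpy(), dy_dx.numpy(), label="gradient")
plt.legend()
plt.show()
```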

Sigmoid activation function (blue) and gradient (orange)

Here, you can observe that the gradient of the sigmoid function is always between 0 and 0.25, and as $x$ tends to positive or negative infinity, the gradient tends to zero. This contributes to the vanishing gradient problem: when the input $x$ has a large magnitude (e.g., because of large outputs from earlier layers), the gradient is too small to drive any meaningful correction.

Vanishing gradients are a problem because backpropagation in deep neural networks relies on the chain rule. Recall that the gradient (of the loss function) at each layer is the gradient at the subsequent layer multiplied by the gradient of that layer’s activation function. With many layers, if the gradients of the activation functions are all less than 1, the gradient at layers far from the output shrinks toward zero; for example, ten layers each contributing a factor of at most 0.25 leave an overall factor of at most $0.25^{10} \approx 10^{-6}$. Any layer whose gradient is close to zero effectively stops the gradient from propagating further back to the earlier layers.

Since the gradient of the sigmoid function is always less than 1, a network with more layers exacerbates the vanishing gradient problem. Furthermore, the sigmoid has a saturation region, where the magnitude of $x$ is large and its gradient tends to 0. So, if the weighted sum of activations from the previous layer is large, very little gradient propagates through this neuron, because the derivative of the activation $a$ with respect to the input of the activation function is tiny in the saturation region.

Granted, there is also the derivative of the linear term with respect to the previous layer’s activations, which can be greater than 1 when the weights are large, and the gradient at a layer sums contributions from several neurons. Nonetheless, vanishing gradients remain a concern at the start of training, when the weights are usually initialized to small values.

Hyperbolic Tangent Function

Another activation function to consider is the tanh activation function, also known as the hyperbolic tangent function. It has a larger range of output values than the sigmoid function ($-1$ to $1$ instead of $0$ to $1$) and a larger maximum gradient. The tanh function is the hyperbolic analog of the ordinary tangent function from circular trigonometry that most people are familiar with.

Plotting out the tanh function:
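
A plotting sketch, analogous to the sigmoid one above (matplotlib and NumPy assumed):

```python
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

x = np.linspace(-10, 10, 200)
y = tf.keras.activations.tanh(tf.constant(x)).numpy()

plt.plot(x, y)
plt.title("Tanh activation function")
plt.show()
```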

Tanh activation function

Let’s look at the gradient as well:
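
Since the derivative of tanh is $1 - \tanh^2(x)$, you can compute the gradient directly from the function values; a small sketch:

```python
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

x = np.linspace(-10, 10, 200)
y = tf.keras.activations.tanh(tf.constant(x)).numpy()
dy_dx = 1.0 - y ** 2  # analytical derivative of tanh

plt.plot(x, y, label="tanh")
plt.plot(x, dy_dx, label="gradient")
plt.legend()
plt.show()
```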

Tanh activation function (blue) and gradient (orange)

Notice that the gradient now has a maximum value of 1, compared to the sigmoid function, whose largest gradient value is 0.25. This makes a network with tanh activation less susceptible to the vanishing gradient problem. However, the tanh function also has a saturation region, where the gradient tends toward zero as the magnitude of the input $x$ gets larger.

In TensorFlow, you can implement the tanh activation on a tensor using the tanh function in Keras’s activations module:
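
Again, a minimal sketch with a hypothetical input tensor:

```python
import tensorflow as tf

x = tf.constant([-2.0, -1.0, 0.0, 1.0, 2.0])
y = tf.keras.activations.tanh(x)
print(y)
# Expected values (approximately): [-0.964, -0.762, 0.0, 0.762, 0.964]
```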

This prints values in the open interval $(-1, 1)$: negative inputs map to negative outputs, 0 maps to 0, and large positive inputs approach 1.

Rectified Linear Unit (ReLU)

The last activation function to cover in detail is the Rectified Linear Unit, popularly known as ReLU. It has become popular in part because it is very cheap to compute, which helps speed up training, and it tends to perform well empirically, which makes it a good first choice of activation function.

The ReLU function is simply $\max(0, x)$: it can be thought of as a piecewise function that maps all inputs less than 0 to 0 and all inputs greater than or equal to 0 back to themselves (i.e., the identity function). Graphically, it looks like this:
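
A plotting sketch (matplotlib and NumPy assumed):

```python
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

x = np.linspace(-10, 10, 200)
y = tf.keras.activations.relu(tf.constant(x)).numpy()

plt.plot(x, y)
plt.title("ReLU activation function")
plt.show()
```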

ReLU activation function

Next up, you can also look at the gradient of the ReLU function:
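
The derivative of $\max(0, x)$ is 1 for positive inputs and 0 for negative inputs (it is undefined at exactly 0, where frameworks conventionally use 0). A small sketch that plots this step function directly:

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-10, 10, 200)
grad = np.where(x > 0, 1.0, 0.0)  # derivative of max(0, x)

plt.plot(x, grad)
plt.title("Gradient of the ReLU activation")
plt.show()
```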

ReLU activation function (blue line) and gradient (orange)

Notice that the gradient of ReLU is 1 whenever the input is positive, which helps address the vanishing gradient problem. However, whenever the input is negative, the gradient is 0. This can cause another problem, the dead neuron/dying ReLU problem, which is an issue if a neuron is persistently inactivated.

In this case, the neuron can never learn, and its weights are never updated due to the chain rule as it has a 0 gradient as one of its terms. If this happens for all data in your dataset, then it can be very difficult for this neuron to learn from your dataset unless the activations in the previous layer change such that the neuron is no longer “dead.”

To use the ReLU activation in TensorFlow:
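
A minimal sketch with the same hypothetical input tensor as before:

```python
import tensorflow as tf

x = tf.constant([-2.0, -1.0, 0.0, 1.0, 2.0])
y = tf.keras.activations.relu(x)
print(y)
# Expected values: [0.0, 0.0, 0.0, 1.0, 2.0]
```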

This prints a tensor in which all negative inputs have been clipped to 0, while the nonnegative inputs pass through unchanged.

The three activation functions reviewed above are all monotonically increasing functions. Monotonicity keeps the relationship between a neuron’s input and output consistent in direction, which makes training with gradient descent better behaved.

Now that you’ve explored some common activation functions and how to use them in TensorFlow, let’s take a look at how you can use them in practice in an actual model.

Using Activation Functions in Practice

Before building a full model, let’s look at another common way to use activation functions: combining them with another Keras layer. Say you want to add a ReLU activation on top of a Dense layer. Following the approach shown above, one way to do this is:
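
Here is a sketch of this pattern; the layer sizes are hypothetical, chosen only for illustration:

```python
import tensorflow as tf

# A Dense layer with no built-in activation, followed by a separate Activation layer
model = tf.keras.Sequential([
    tf.keras.layers.Dense(units=10, input_shape=(20,)),  # hypothetical sizes
    tf.keras.layers.Activation("relu"),
])
```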

However, for many Keras layers, you can also use a more compact representation to add the activation on top of the layer:
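
The equivalent compact form passes the activation name as an argument to the layer itself (same hypothetical sizes as above):

```python
import tensorflow as tf

# Same as before, but with the activation specified on the Dense layer directly
model = tf.keras.Sequential([
    tf.keras.layers.Dense(units=10, activation="relu", input_shape=(20,)),
])
```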

Using this more compact representation, let’s build our LeNet5 model using Keras:
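
Below is a sketch of a LeNet-5-style model trained on MNIST. The exact choices here (tanh activations in the convolutional blocks, average pooling, the Adam optimizer, five epochs) are assumptions made for illustration rather than a reproduction of the original code:

```python
import tensorflow as tf

# Load MNIST and add a channel dimension
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32") / 255.0
x_test = x_test[..., None].astype("float32") / 255.0

# A LeNet-5-style architecture using the compact activation syntax
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(6, (5, 5), activation="tanh", padding="same",
                           input_shape=(28, 28, 1)),
    tf.keras.layers.AveragePooling2D((2, 2)),
    tf.keras.layers.Conv2D(16, (5, 5), activation="tanh"),
    tf.keras.layers.AveragePooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(120, activation="tanh"),
    tf.keras.layers.Dense(84, activation="tanh"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=32,
          validation_data=(x_test, y_test))
```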

Running this code trains the model and prints the loss and accuracy for each epoch.

And that’s how you can use different activation functions in your TensorFlow models!

Further Reading

Other examples of activation functions:

Summary

In this post, you have seen why activation functions are important to allow for the complex neural networks that are common in deep learning today. You have also seen some popular activation functions, their derivatives, and how to integrate them into your TensorFlow models.

Specifically, you learned:

  • Why nonlinearity is important in a neural network
  • How different activation functions can contribute to the vanishing gradient problem
  • Sigmoid, tanh, and ReLU activation functions
  • How to use different activation functions in your TensorFlow model
