Adding A Custom Attention Layer To Recurrent Neural Network In Keras

Last Updated on October 12, 2021

Deep learning networks have gained immense popularity in the past few years. The attention mechanism is integrated with deep learning networks to improve their performance. Adding an attention component to a network has shown significant improvement in tasks such as machine translation, image recognition, text summarization, and similar applications.

This tutorial shows how to add a custom attention layer to a network built using a recurrent neural network. We’ll illustrate an end-to-end application of time series forecasting using a very simple dataset. The tutorial is designed for anyone looking for a basic understanding of how to add user-defined layers to a deep learning network, and for anyone who wants to use this simple example as a starting point for more complex applications.

After completing this tutorial, you will know:

  • Which methods are required to create a custom attention layer in Keras
  • How to incorporate the new layer in a network built with SimpleRNN

Let’s get started.

Adding A Custom Attention Layer To Recurrent Neural Network In Keras
Photo by Yahya Ehsan, some rights reserved.

Tutorial Overview

This tutorial is divided into three parts; they are:

  • Preparing a simple dataset for time series forecasting
  • How to use a network built via SimpleRNN for time series forecasting
  • Adding a custom attention layer to the SimpleRNN network

Prerequisites

It is assumed that you are familiar with the basics of recurrent neural networks in Keras and with the attention mechanism; the latter is covered in Stefania’s article on The Attention Mechanism from Scratch, referenced later in this tutorial.

The Dataset

The focus of this article is to gain a basic understanding of how to add a custom attention layer to a deep learning network. For this purpose, we’ll use a very simple example: the Fibonacci sequence, where each number is constructed from the previous two numbers. The first 10 numbers of the sequence are shown below:

0, 1, 1, 2, 3, 5, 8, 13, 21, 34, …

When given the previous ‘t’ numbers, can we get a machine to accurately reconstruct the next number? Doing so would mean discarding all but the last two inputs and performing the correct operation on them.

For this tutorial, we’ll construct the training examples from t time steps and use the value at t+1 as the target. For example, if t=3, then the training examples and the corresponding target values would look as follows:
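
Training example      Target
0, 1, 1               2
1, 1, 2               3
1, 2, 3               5
2, 3, 5               8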

The SimpleRNN Network

In this section, we’ll write the basic code to generate the dataset and use a SimpleRNN network for predicting the next number of the Fibonacci sequence.

The Import Section

Let’s first write the import section:
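
A minimal import section along these lines covers everything used below (exact module paths can vary slightly across Keras/TensorFlow versions):

import numpy as np
from sklearn.preprocessing import MinMaxScaler
from keras import backend as K
from keras.models import Sequential, Model
from keras.layers import Layer, Input, Dense, SimpleRNN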

Preparing The Dataset

The following function generates a sequence of n Fibonacci numbers (not counting the starting two values). If scale_data is set to True, it also uses the MinMaxScaler from scikit-learn to scale the values between 0 and 1. Let’s see its output for n=10.
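
Here is a sketch of such a generator; the name get_fib_seq and its exact signature are conventions of this tutorial rather than anything prescribed by Keras:

def get_fib_seq(n, scale_data=True):
    # Generate n Fibonacci numbers, starting after the initial 0, 1
    seq = np.zeros(n)
    fib_n1, fib_n = 0.0, 1.0
    for i in range(n):
        seq[i] = fib_n1 + fib_n
        fib_n1, fib_n = fib_n, seq[i]
    scaler = None
    if scale_data:
        # Scale the values to the range [0, 1]
        scaler = MinMaxScaler(feature_range=(0, 1))
        seq = scaler.fit_transform(seq.reshape(n, 1)).flatten()
    return seq, scaler

seq, _ = get_fib_seq(10, scale_data=False)
print(seq)    # [ 1.  2.  3.  5.  8. 13. 21. 34. 55. 89.]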

Next, we need a function get_fib_XY() that reformats the sequence into training examples and target values to be used by the Keras input layer. When given time_steps as a parameter, get_fib_XY() constructs each row of the dataset with time_steps number of columns. This function not only constructs the training set and test set from the Fibonacci sequence, but also shuffles the training examples and reshapes them to the required TensorFlow format, i.e., total_samples x time_steps x features. Also, the function returns the scaler object that scales the values if scale_data is set to True.
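
One possible implementation is sketched below; the fixed shuffle seed is an illustrative choice, and the function relies on the get_fib_seq() sketch above:

def get_fib_XY(total_fib_numbers, time_steps, train_percent, scale_data=True):
    dat, scaler = get_fib_seq(total_fib_numbers, scale_data)
    # Each target is the value that follows a window of time_steps numbers
    Y_ind = np.arange(time_steps, len(dat), 1)
    Y = dat[Y_ind]
    rows_x = len(Y)
    # Build the input matrix from shifted copies of the sequence
    X = dat[0:rows_x]
    for i in range(time_steps - 1):
        X = np.column_stack((X, dat[i + 1:rows_x + i + 1]))
    # Shuffle, then split into training and test sets
    rand = np.random.RandomState(seed=13)
    idx = rand.permutation(rows_x)
    split = int(train_percent * rows_x)
    train_ind, test_ind = idx[:split], idx[split:]
    # Reshape to the TensorFlow format: total_samples x time_steps x features
    trainX = X[train_ind].reshape(-1, time_steps, 1)
    testX = X[test_ind].reshape(-1, time_steps, 1)
    return trainX, Y[train_ind], testX, Y[test_ind], scaler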

Let’s generate a small training set to see what it looks like. We have set time_steps=3 and total_fib_numbers=12, with approximately 70% of the examples going toward the training set. Note that the training and test examples have been shuffled by the permutation() function.
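
For example (leaving the data unscaled so the Fibonacci numbers are easy to recognize):

trainX, trainY, testX, testY, scaler = get_fib_XY(12, 3, 0.7, scale_data=False)
print('trainX = ', trainX)
print('trainY = ', trainY)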

Setting Up The Network

Now let’s set up a small network with two layers: a SimpleRNN layer followed by a Dense layer. Below is a summary of the model.
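
A minimal sketch of such a model follows; the two hidden units, the tanh activations, and time_steps=20 are illustrative choices:

def create_RNN(hidden_units, dense_units, input_shape, activation):
    model = Sequential()
    model.add(SimpleRNN(hidden_units, input_shape=input_shape, activation=activation[0]))
    model.add(Dense(units=dense_units, activation=activation[1]))
    model.compile(loss='mse', optimizer='adam')
    return model

time_steps = 20
model_RNN = create_RNN(hidden_units=2, dense_units=1,
                       input_shape=(time_steps, 1), activation=['tanh', 'tanh'])
model_RNN.summary()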

Train The Network And Evaluate

The next step is to add code that generates a dataset, trains the network, and evaluates it. This time around, we’ll scale the data between 0 and 1. We don’t need to pass the scale_data parameter, as its default value is True.
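
Something along these lines trains and evaluates the network; the sequence length of 1200 numbers, 20 epochs, and batch size of 1 are illustrative hyperparameters:

trainX, trainY, testX, testY, scaler = get_fib_XY(1200, time_steps, 0.7)
model_RNN.fit(trainX, trainY, epochs=20, batch_size=1, verbose=2)
# Evaluate the mean squared error on both sets
train_mse = model_RNN.evaluate(trainX, trainY)
test_mse = model_RNN.evaluate(testX, testY)
print('Train set MSE = ', train_mse)
print('Test set MSE = ', test_mse)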

As output, you’ll see the progress of training followed by the mean squared error on the training and test sets.

Adding A Custom Attention Layer To The Network

In Keras, it is easy to create a custom layer that implements attention by subclassing the Layer class. The Keras guide lists clear steps for creating a new layer via subclassing, and we’ll follow those guidelines here. All the weights and biases corresponding to a single layer are encapsulated by this class. We need to write the __init__() method as well as override the following methods:

  • build(): The Keras guide recommends creating weights in this method once the size of the inputs is known; this way, the weights are created ‘lazily’. The built-in method add_weight() can be used to add the weights and biases of the attention layer.
  • call(): The call() method implements the layer’s forward pass, mapping inputs to outputs.

The Call Method For Attention Layer

The call() method of the attention layer has to compute the alignment scores, the attention weights, and the context vector. You can go through the details of these quantities in Stefania’s excellent article, The Attention Mechanism from Scratch. We’ll implement Bahdanau attention in our call() method.
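
Concretely, if h_t denotes the hidden state of the RNN at time step t, the layer sketched in the next section computes alignment scores e_t = tanh(h_t · w + b), attention weights α_t = softmax(e_1, …, e_n), and the context vector context = Σ_t α_t · h_t, where the weight vector w and the bias b are the layer’s trainable parameters.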

The good thing about inheriting a layer from the Keras Layer class and adding the weights via the add_weight() method is that the weights are tuned automatically: Keras traces the operations and computations of the call() method and calculates their gradients during training through automatic differentiation. It is important to specify trainable=True when adding the weights. If needed, you can also override the train_step() method of your model and specify your own procedure for weight training.

The code below implements our custom attention layer.
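
One possible implementation follows; the weight shapes assume an input of shape batch x time_steps x hidden_units, which is what a SimpleRNN with return_sequences=True produces:

class attention(Layer):
    def __init__(self, **kwargs):
        super(attention, self).__init__(**kwargs)

    def build(self, input_shape):
        # One weight per hidden unit and one bias per time step
        self.W = self.add_weight(name='attention_weight',
                                 shape=(input_shape[-1], 1),
                                 initializer='random_normal', trainable=True)
        self.b = self.add_weight(name='attention_bias',
                                 shape=(input_shape[1], 1),
                                 initializer='zeros', trainable=True)
        super(attention, self).build(input_shape)

    def call(self, x):
        # Alignment scores, passed through a tanh
        e = K.tanh(K.dot(x, self.W) + self.b)
        e = K.squeeze(e, axis=-1)
        # Attention weights: a softmax over the time steps
        alpha = K.softmax(e)
        alpha = K.expand_dims(alpha, axis=-1)
        # Context vector: the weighted sum of the hidden states
        context = x * alpha
        context = K.sum(context, axis=1)
        return context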

RNN Network With Attention Layer

Let’s now add an attention layer to the RNN network we created earlier. The function create_RNN_with_attention() specifies an RNN layer, an attention layer, and a Dense layer in the network. Make sure to set return_sequences=True when specifying the SimpleRNN; this returns the output of the hidden units for all time steps.
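
One way to write this function with the Keras functional API, assuming the attention class from the previous section, is:

def create_RNN_with_attention(hidden_units, dense_units, input_shape, activation):
    x = Input(shape=input_shape)
    # return_sequences=True exposes the hidden state at every time step
    rnn_out = SimpleRNN(hidden_units, return_sequences=True, activation=activation)(x)
    context = attention()(rnn_out)
    outputs = Dense(dense_units, activation=activation)(context)
    model = Model(inputs=x, outputs=outputs)
    model.compile(loss='mse', optimizer='adam')
    return model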

Let’s look at a summary of our model with attention.
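
For example, reusing time_steps and the illustrative two hidden units from earlier:

model_attention = create_RNN_with_attention(hidden_units=2, dense_units=1,
                                            input_shape=(time_steps, 1), activation='tanh')
model_attention.summary()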

Train And Evaluate The Deep Learning Network With Attention

It’s time to train and test our model and see how it performs at predicting the next Fibonacci number in a sequence.
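
Training mirrors the earlier network; a minimal snippet, assuming the same dataset and hyperparameters as before:

model_attention.fit(trainX, trainY, epochs=20, batch_size=1, verbose=2)
train_mse_attn = model_attention.evaluate(trainX, trainY)
test_mse_attn = model_attention.evaluate(testX, testY)
print('Train set MSE with attention = ', train_mse_attn)
print('Test set MSE with attention = ', test_mse_attn)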

You’ll see the training progress as output, followed by the mean squared error of the attention model on the training and test sets.

We can see that, even for this simple example, the mean squared error on the test set is lower with the attention layer. You can achieve better results with hyperparameter tuning and model selection. Do try this out on more complex problems, and try adding more layers to the network. You can also use the scaler object to scale the numbers back to their original values.

You can take this example one step further by using an LSTM instead of SimpleRNN, or you can build a network with convolutional and pooling layers. You can also change this into an encoder-decoder network if you like.

Consolidated Code

If you would like to try the complete example, combine the snippets above into a single script; a short driver that runs both experiments end to end is sketched below (it assumes the function and class sketches given earlier). Note that your outputs will differ from the ones in this tutorial because of the stochastic nature of the training algorithm.
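
# End-to-end driver: assumes the imports, get_fib_seq(), get_fib_XY(),
# create_RNN(), attention, and create_RNN_with_attention() defined above
time_steps = 20
trainX, trainY, testX, testY, scaler = get_fib_XY(1200, time_steps, 0.7)

# Plain SimpleRNN network
model_RNN = create_RNN(2, 1, (time_steps, 1), ['tanh', 'tanh'])
model_RNN.fit(trainX, trainY, epochs=20, batch_size=1, verbose=2)
print('SimpleRNN test MSE = ', model_RNN.evaluate(testX, testY))

# SimpleRNN network with the custom attention layer
model_attention = create_RNN_with_attention(2, 1, (time_steps, 1), 'tanh')
model_attention.fit(trainX, trainY, epochs=20, batch_size=1, verbose=2)
print('Attention test MSE = ', model_attention.evaluate(testX, testY))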

Further Reading

This section provides more resources on the topic if you are looking to go deeper.

Papers

  • Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. “Neural Machine Translation by Jointly Learning to Align and Translate,” 2014. (The paper that introduced the attention mechanism implemented in this tutorial.)

Articles

  • The Attention Mechanism from Scratch. (Stefania’s article referenced above.)

Summary

In this tutorial, you discovered how to add a custom attention layer to a deep learning network using Keras.

Specifically, you learned:

  • How to override the Keras Layer class.
  • The method build() is required to add weights to the attention layer.
  • The call() method is required for specifying the mapping of inputs to outputs of the attention layer.
  • How to add a custom attention layer to the deep learning network built using SimpleRNN.

Do you have any questions about RNNs discussed in this post? Ask your questions in the comments below and I will do my best to answer.


7 Responses to Adding A Custom Attention Layer To Recurrent Neural Network In Keras

  1. 5cc October 13, 2021 at 1:41 pm #

    Hi, I have a question. I tried to use LSTM instead of SimpleRNN with your help, but then I found it only has a training loss; I cannot find the val_loss. So how can I monitor the overfitting problem? I would like to ask for your help. Thank you very much!

    • Adrian Tam October 14, 2021 at 3:12 am #

      Likely you didn’t provide validation data when you called fit(), hence no validation has been performed. See this code snippet:

      history = model.fit(X_train, y_train, epochs=200, batch_size=16, validation_data=(X_test,y_test))

  2. Bhaskar October 18, 2021 at 10:31 pm #

    I am getting the following error : “NameError: name ‘Layer’ is not defined”

    • Adrian Tam October 20, 2021 at 8:50 am #

      Do you have “from keras.layers import Layer”?

  3. Dr. Fouz Sattar October 19, 2021 at 6:55 pm #

    Well structured and well described with clarity.

  4. Ray Huang October 26, 2021 at 12:54 pm #

    Thanks for the clear explanation and example. I got a different result when executing the code above: the attention layer’s output should be (None, 2), but I get (None, 20, 2), which causes a dimension mismatch error.

    The attention layer does output (None, 2) on its own, but when it is connected to the model it becomes (None, 20, 2). Could you please tell me what the problem is? Thank you.

    • Adrian Tam October 27, 2021 at 2:58 am #

      It is hard to tell what’s wrong. Can you try to copy over the example code at the end of this post and compare with your version?
