
Implementing the Transformer Decoder from Scratch in TensorFlow and Keras

Last Updated on November 16, 2022

There are many similarities between the Transformer encoder and decoder, such as their implementation of multi-head attention, layer normalization, and a fully connected feed-forward network as their final sub-layer. Having implemented the Transformer encoder, we will now go ahead and apply our knowledge in implementing the Transformer decoder as a further step toward implementing the complete Transformer model. Your end goal remains to apply the complete model to Natural Language Processing (NLP).

In this tutorial, you will discover how to implement the Transformer decoder from scratch in TensorFlow and Keras. 

After completing this tutorial, you will know:

  • The layers that form part of the Transformer decoder
  • How to implement the Transformer decoder from scratch

Kick-start your project with my book Building Transformer Models with Attention. It provides self-study tutorials with working code to guide you through building a fully working transformer model that can
translate sentences from one language to another...

Let’s get started. 

Photo by François Kaiser, some rights reserved.

Tutorial Overview

This tutorial is divided into three parts; they are:

  • Recap of the Transformer Architecture
    • The Transformer Decoder
  • Implementing the Transformer Decoder From Scratch
    • The Decoder Layer
    • The Transformer Decoder
  • Testing Out the Code

Prerequisites

For this tutorial, we assume that you are already familiar with:

  • The Transformer model
  • The Transformer encoder

Recap of the Transformer Architecture

Recall having seen that the Transformer architecture follows an encoder-decoder structure. The encoder, on the left-hand side, is tasked with mapping an input sequence to a sequence of continuous representations; the decoder, on the right-hand side, receives the output of the encoder together with the decoder output at the previous time step to generate an output sequence.

The encoder-decoder structure of the Transformer architecture
Taken from “Attention Is All You Need”

In generating an output sequence, the Transformer does not rely on recurrence and convolutions.

You have seen that the decoder part of the Transformer shares many similarities in its architecture with the encoder. This tutorial will explore these similarities. 

The Transformer Decoder

Similar to the Transformer encoder, the Transformer decoder also consists of a stack of $N$ identical layers. The Transformer decoder, however, implements an additional multi-head attention block for a total of three main sub-layers:

  • The first sub-layer comprises a multi-head attention mechanism that receives its queries, keys, and values from the same (target) sequence.
  • The second sub-layer comprises a second multi-head attention mechanism that attends over the encoder output. 
  • The third sub-layer comprises a fully connected feed-forward network. 

The decoder block of the Transformer architecture
Taken from “Attention Is All You Need”

Each one of these three sub-layers is also followed by layer normalization, where the input to the layer normalization step is its corresponding sub-layer input (through a residual connection) and output. 

On the decoder side, the queries, keys, and values that are fed into the first multi-head attention block also represent the same input sequence. However, this time around, it is the target sequence that is embedded and augmented with positional information before being supplied to the decoder. On the other hand, the second multi-head attention block receives the encoder output in the form of keys and values and the normalized output of the first decoder attention block as the queries. In both cases, the dimensionality of the queries and keys remains equal to $d_k$, whereas the dimensionality of the values remains equal to $d_v$.

Vaswani et al. introduce regularization into the model on the decoder side, too, by applying dropout to the output of each sub-layer (before the layer normalization step), as well as to the positional encodings before these are fed into the decoder. 

Let’s now see how to implement the Transformer decoder from scratch in TensorFlow and Keras.

Implementing the Transformer Decoder from Scratch

The Decoder Layer

Since you have already implemented the required sub-layers when you covered the implementation of the Transformer encoder, you will create a class for the decoder layer that makes use of these sub-layers straight away:

Notice that since the code for the different sub-layers was saved into several Python scripts (namely, multihead_attention.py and encoder.py), they need to be imported in order to use the required classes. 

As you did for the Transformer encoder, you will now create the class method, call(), that implements all the decoder sub-layers:
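The original listing is not reproduced here, but a minimal, self-contained sketch of the decoder layer is shown below. Since the custom classes from multihead_attention.py and encoder.py are not available in this sketch, Keras's built-in MultiHeadAttention and LayerNormalization layers and a small Dense stack stand in for them; the structure of the three sub-layers follows the description above:

```python
import tensorflow as tf
from tensorflow.keras.layers import (Layer, Dense, Dropout,
                                     LayerNormalization, MultiHeadAttention)

class DecoderLayer(Layer):
    def __init__(self, h, d_k, d_v, d_model, d_ff, rate, **kwargs):
        super().__init__(**kwargs)
        # Sub-layer 1: multi-head self-attention over the target sequence
        self.self_attention = MultiHeadAttention(num_heads=h, key_dim=d_k,
                                                 value_dim=d_v)
        self.dropout1 = Dropout(rate)
        self.norm1 = LayerNormalization()
        # Sub-layer 2: attention over the encoder output
        self.cross_attention = MultiHeadAttention(num_heads=h, key_dim=d_k,
                                                  value_dim=d_v)
        self.dropout2 = Dropout(rate)
        self.norm2 = LayerNormalization()
        # Sub-layer 3: position-wise fully connected feed-forward network
        self.feed_forward = tf.keras.Sequential([
            Dense(d_ff, activation="relu"), Dense(d_model)])
        self.dropout3 = Dropout(rate)
        self.norm3 = LayerNormalization()

    def call(self, x, encoder_output, lookahead_mask, padding_mask,
             training=False):
        # Queries, keys, and values all come from the target sequence
        attn1 = self.self_attention(x, x, x, attention_mask=lookahead_mask)
        # Residual connection followed by layer normalization
        x = self.norm1(x + self.dropout1(attn1, training=training))
        # Keys and values come from the encoder output; queries from the decoder
        attn2 = self.cross_attention(x, encoder_output, encoder_output,
                                     attention_mask=padding_mask)
        x = self.norm2(x + self.dropout2(attn2, training=training))
        # Feed-forward network, dropout, then the final add & norm
        return self.norm3(x + self.dropout3(self.feed_forward(x),
                                            training=training))
```

Note that dropout is applied to each sub-layer output before the residual addition and layer normalization, matching the regularization scheme described earlier.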

The multi-head attention sub-layers can also receive a padding mask or a look-ahead mask. As a brief reminder of what was said in a previous tutorial, the padding mask is necessary to suppress the zero padding in the input sequence from being processed along with the actual input values. The look-ahead mask prevents the decoder from attending to succeeding words, such that the prediction for a particular word can only depend on known outputs for the words that come before it.
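As a sketch, the two masks can be constructed as follows. This uses the convention of Keras's built-in MultiHeadAttention, where a value of 1 marks a position that may be attended to (a custom attention implementation may use the opposite convention, with 1 marking positions to suppress):

```python
import tensorflow as tf

def lookahead_mask(seq_length):
    # Lower-triangular matrix: position i may attend only to positions 0..i,
    # preventing the decoder from attending to succeeding words
    return tf.linalg.band_part(tf.ones((seq_length, seq_length)), -1, 0)

def padding_mask(input_seq):
    # 1.0 for real tokens, 0.0 for zero padding (assumes token id 0 is the pad)
    mask = tf.cast(tf.math.not_equal(input_seq, 0), tf.float32)
    # Insert a query axis so the mask broadcasts over all query positions
    return mask[:, tf.newaxis, :]
```

For example, lookahead_mask(3) produces a 3x3 lower-triangular matrix, so the prediction at each position can only depend on the positions before it.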

The same call() class method can also receive a training flag to only apply the Dropout layers during training when the flag’s value is set to True.

The Transformer Decoder

The Transformer decoder takes the decoder layer you have just implemented and replicates it identically $N$ times. 

You will create the following Decoder() class to implement the Transformer decoder:

As in the Transformer encoder, the first multi-head attention block on the decoder side receives the input sequence after it has undergone word embedding and positional encoding. For this purpose, an instance of the PositionEmbeddingFixedWeights class (covered in a previous tutorial) is initialized, and its output is assigned to the pos_encoding variable.

The final step is to create a class method, call(), that applies word embedding and positional encoding to the input sequence and feeds the result, together with the encoder output, to $N$ decoder layers:

The code listing for the full Transformer decoder is the following:
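A self-contained sketch of the full decoder is given below. It restates a compact decoder layer, and substitutes a learned Embedding layer plus fixed sinusoidal positional encodings (computed with NumPy, following Vaswani et al., 2017) for the PositionEmbeddingFixedWeights class, which is not reproduced here:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import (Layer, Dense, Dropout, Embedding,
                                     LayerNormalization, MultiHeadAttention)

class DecoderLayer(Layer):
    # Compact decoder layer: self-attention, encoder-decoder attention,
    # and a feed-forward network, each followed by dropout and add & norm
    def __init__(self, h, d_k, d_v, d_model, d_ff, rate, **kwargs):
        super().__init__(**kwargs)
        self.self_attention = MultiHeadAttention(num_heads=h, key_dim=d_k,
                                                 value_dim=d_v)
        self.cross_attention = MultiHeadAttention(num_heads=h, key_dim=d_k,
                                                  value_dim=d_v)
        self.ffn = tf.keras.Sequential([Dense(d_ff, activation="relu"),
                                        Dense(d_model)])
        self.dropouts = [Dropout(rate) for _ in range(3)]
        self.norms = [LayerNormalization() for _ in range(3)]

    def call(self, x, encoder_output, lookahead_mask, padding_mask,
             training=False):
        attn1 = self.self_attention(x, x, x, attention_mask=lookahead_mask)
        x = self.norms[0](x + self.dropouts[0](attn1, training=training))
        attn2 = self.cross_attention(x, encoder_output, encoder_output,
                                     attention_mask=padding_mask)
        x = self.norms[1](x + self.dropouts[1](attn2, training=training))
        return self.norms[2](x + self.dropouts[2](self.ffn(x),
                                                  training=training))

class Decoder(Layer):
    def __init__(self, vocab_size, seq_length, h, d_k, d_v, d_model, d_ff, n,
                 rate, **kwargs):
        super().__init__(**kwargs)
        self.word_embedding = Embedding(vocab_size, d_model)
        # Fixed sinusoidal positional encodings
        pos = np.arange(seq_length)[:, np.newaxis]
        i = np.arange(d_model)[np.newaxis, :]
        angles = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
        angles[:, 0::2] = np.sin(angles[:, 0::2])
        angles[:, 1::2] = np.cos(angles[:, 1::2])
        self.pos_encoding = tf.constant(angles, dtype=tf.float32)
        self.dropout = Dropout(rate)
        self.decoder_layers = [DecoderLayer(h, d_k, d_v, d_model, d_ff, rate)
                               for _ in range(n)]

    def call(self, target_seq, encoder_output, lookahead_mask, padding_mask,
             training=False):
        # Embed the target sequence, add positional information, apply dropout
        x = self.word_embedding(target_seq) + self.pos_encoding
        x = self.dropout(x, training=training)
        # Pass the result through the stack of N decoder layers
        for layer in self.decoder_layers:
            x = layer(x, encoder_output, lookahead_mask, padding_mask,
                      training=training)
        return x
```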

Testing Out the Code

You will work with the parameter values specified in the paper, Attention Is All You Need, by Vaswani et al. (2017):
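These values can be collected as follows; the variable names are assumptions, but the numbers themselves are taken from the paper:

```python
# Parameter values specified in "Attention Is All You Need" (Vaswani et al., 2017)
h = 8               # number of self-attention heads
d_k = 64            # dimensionality of the linearly projected queries and keys
d_v = 64            # dimensionality of the linearly projected values
d_ff = 2048         # dimensionality of the inner fully connected layer
d_model = 512       # dimensionality of the model sub-layer outputs
n = 6               # number of layers in the decoder stack
dropout_rate = 0.1  # frequency of dropping input units in the dropout layers
```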

As for the input sequence, you will work with dummy data for the time being until you arrive at the stage of training the complete Transformer model in a separate tutorial, at which point you will use actual sentences:
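A dummy batch can be generated along the following lines; the batch size, vocabulary size, and sequence length here are placeholder assumptions for the test run, not values from the paper:

```python
import numpy as np

batch_size = 64        # assumed batch size for the dummy run
dec_vocab_size = 20    # assumed vocabulary size for the dummy target data
input_seq_length = 5   # assumed maximum length of the target sequence

# Dummy target sequence of token indices; real tokenized sentences will
# replace this at the training stage
input_seq = np.random.randint(0, dec_vocab_size,
                              (batch_size, input_seq_length))
```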

Next, you will create a new instance of the Decoder class, assigning its output to the decoder variable, subsequently passing in the input arguments, and printing the result. You will set the padding and look-ahead masks to None for the time being, but you will return to these when you implement the complete Transformer model:

Tying everything together produces the following code listing:
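A self-contained version of the complete listing is sketched below. As before, Keras's built-in MultiHeadAttention and LayerNormalization layers and a NumPy sinusoidal positional encoding stand in for the custom sub-layer classes, and the dummy batch size, vocabulary size, and sequence length are assumed values:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import (Layer, Dense, Dropout, Embedding,
                                     LayerNormalization, MultiHeadAttention)

class DecoderLayer(Layer):
    # One decoder layer: self-attention, encoder-decoder attention, feed-forward
    def __init__(self, h, d_k, d_v, d_model, d_ff, rate, **kwargs):
        super().__init__(**kwargs)
        self.self_attention = MultiHeadAttention(num_heads=h, key_dim=d_k,
                                                 value_dim=d_v)
        self.cross_attention = MultiHeadAttention(num_heads=h, key_dim=d_k,
                                                  value_dim=d_v)
        self.ffn = tf.keras.Sequential([Dense(d_ff, activation="relu"),
                                        Dense(d_model)])
        self.dropouts = [Dropout(rate) for _ in range(3)]
        self.norms = [LayerNormalization() for _ in range(3)]

    def call(self, x, encoder_output, lookahead_mask, padding_mask,
             training=False):
        attn1 = self.self_attention(x, x, x, attention_mask=lookahead_mask)
        x = self.norms[0](x + self.dropouts[0](attn1, training=training))
        attn2 = self.cross_attention(x, encoder_output, encoder_output,
                                     attention_mask=padding_mask)
        x = self.norms[1](x + self.dropouts[1](attn2, training=training))
        return self.norms[2](x + self.dropouts[2](self.ffn(x),
                                                  training=training))

class Decoder(Layer):
    # Embeds the target sequence, adds fixed sinusoidal positional encodings,
    # and applies a stack of n identical decoder layers
    def __init__(self, vocab_size, seq_length, h, d_k, d_v, d_model, d_ff, n,
                 rate, **kwargs):
        super().__init__(**kwargs)
        self.word_embedding = Embedding(vocab_size, d_model)
        pos = np.arange(seq_length)[:, np.newaxis]
        i = np.arange(d_model)[np.newaxis, :]
        angles = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
        angles[:, 0::2] = np.sin(angles[:, 0::2])
        angles[:, 1::2] = np.cos(angles[:, 1::2])
        self.pos_encoding = tf.constant(angles, dtype=tf.float32)
        self.dropout = Dropout(rate)
        self.decoder_layers = [DecoderLayer(h, d_k, d_v, d_model, d_ff, rate)
                               for _ in range(n)]

    def call(self, target_seq, encoder_output, lookahead_mask, padding_mask,
             training=False):
        x = self.word_embedding(target_seq) + self.pos_encoding
        x = self.dropout(x, training=training)
        for layer in self.decoder_layers:
            x = layer(x, encoder_output, lookahead_mask, padding_mask,
                      training=training)
        return x

# Parameter values from "Attention Is All You Need" (Vaswani et al., 2017)
h, d_k, d_v, d_model, d_ff, n = 8, 64, 64, 512, 2048, 6
dropout_rate = 0.1

# Assumed values for the dummy test run
dec_vocab_size, input_seq_length, batch_size = 20, 5, 64

# Dummy target sequence and dummy encoder output
input_seq = np.random.randint(0, dec_vocab_size,
                              (batch_size, input_seq_length))
enc_output = tf.random.normal((batch_size, input_seq_length, d_model))

decoder = Decoder(dec_vocab_size, input_seq_length, h, d_k, d_v, d_model,
                  d_ff, n, dropout_rate)
output = decoder(input_seq, enc_output, None, None, training=False)
print(output.shape)  # (64, 5, 512)
```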

Running this code produces an output of shape (batch size, sequence length, model dimensionality). Note that you will likely see a different output due to the random initialization of the input sequence and the parameter values of the Dense layers. 

Further Reading

This section provides more resources on the topic if you are looking to go deeper.

Papers

  • Attention Is All You Need, 2017

Summary

In this tutorial, you discovered how to implement the Transformer decoder from scratch in TensorFlow and Keras. 

Specifically, you learned:

  • The layers that form part of the Transformer decoder
  • How to implement the Transformer decoder from scratch

Do you have any questions?
Ask your questions in the comments below, and I will do my best to answer.

