Implementing the Transformer Decoder from Scratch in TensorFlow and Keras

There are many similarities between the Transformer encoder and decoder, such as their implementation of multi-head attention, layer normalization, and a fully connected feed-forward network as their final sub-layer. Having implemented the Transformer encoder, we will now go ahead and apply our knowledge in implementing the Transformer decoder as a further step toward implementing the complete Transformer model. Your end goal remains to apply the complete model to Natural Language Processing (NLP).

In this tutorial, you will discover how to implement the Transformer decoder from scratch in TensorFlow and Keras. 

After completing this tutorial, you will know:

  • The layers that form part of the Transformer decoder
  • How to implement the Transformer decoder from scratch

Kick-start your project with my book Building Transformer Models with Attention. It provides self-study tutorials with working code to guide you into building a fully-working transformer model that can
translate sentences from one language to another...

Let’s get started. 

Implementing the Transformer decoder from scratch in TensorFlow and Keras
Photo by François Kaiser, some rights reserved.

Tutorial Overview

This tutorial is divided into three parts; they are:

  • Recap of the Transformer Architecture
    • The Transformer Decoder
  • Implementing the Transformer Decoder From Scratch
    • The Decoder Layer
    • The Transformer Decoder
  • Testing Out the Code

Prerequisites

For this tutorial, we assume that you are already familiar with:

  • The Transformer model
  • The scaled dot-product attention and multi-head attention
  • The Transformer positional encoding
  • The implementation of the Transformer encoder

Recap of the Transformer Architecture

Recall having seen that the Transformer architecture follows an encoder-decoder structure. The encoder, on the left-hand side, is tasked with mapping an input sequence to a sequence of continuous representations; the decoder, on the right-hand side, receives the output of the encoder together with the decoder output at the previous time step to generate an output sequence.

The encoder-decoder structure of the Transformer architecture
Taken from “Attention Is All You Need”

In generating an output sequence, the Transformer does not rely on recurrence and convolutions.

You have seen that the decoder part of the Transformer shares many similarities in its architecture with the encoder. This tutorial will explore these similarities. 

The Transformer Decoder

Similar to the Transformer encoder, the Transformer decoder also consists of a stack of $N$ identical layers. The Transformer decoder, however, implements an additional multi-head attention block for a total of three main sub-layers:

  • The first sub-layer comprises a multi-head attention mechanism that receives the queries, keys, and values as inputs from the target sequence.
  • The second sub-layer comprises a second multi-head attention mechanism that attends over the output of the encoder.
  • The third sub-layer comprises a fully connected feed-forward network.

The decoder block of the Transformer architecture
Taken from “Attention Is All You Need”

Each one of these three sub-layers is also followed by layer normalization, where the input to the layer normalization step is the sum of its corresponding sub-layer's input (supplied through a residual connection) and output.

On the decoder side, the queries, keys, and values fed into the first multi-head attention block all come from the same input sequence. However, this time around, it is the target sequence that is embedded and augmented with positional information before being supplied to the decoder. The second multi-head attention block, on the other hand, receives the encoder output as the keys and values, and the normalized output of the first decoder attention block as the queries. In both cases, the dimensionality of the queries and keys remains equal to $d_k$, whereas the dimensionality of the values remains equal to $d_v$.

Vaswani et al. introduce regularization into the model on the decoder side, too, by applying dropout to the output of each sub-layer (before the layer normalization step), as well as to the positional encodings before these are fed into the decoder. 

Let’s now see how to implement the Transformer decoder from scratch in TensorFlow and Keras.

Want to Get Started With Building Transformer Models with Attention?

Take my free 12-day email crash course now (with sample code).

Click to sign-up and also get a free PDF Ebook version of the course.

Implementing the Transformer Decoder from Scratch

The Decoder Layer

Since you have already implemented the required sub-layers when you covered the implementation of the Transformer encoder, you will create a class for the decoder layer that makes use of these sub-layers straight away:
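Below is a sketch of how such a DecoderLayer class might look. It assumes the MultiHeadAttention class from multihead_attention.py and the AddNormalization and FeedForward classes from encoder.py, as implemented in the encoder tutorial; the constructor arguments (h, d_k, d_v, d_model, d_ff, rate) mirror the hyperparameters used throughout this series:

```python
from tensorflow.keras.layers import Layer, Dropout
# Sub-layer classes from the earlier tutorials (assumed module and class names)
from multihead_attention import MultiHeadAttention
from encoder import AddNormalization, FeedForward

# Implementing the decoder layer
class DecoderLayer(Layer):
    def __init__(self, h, d_k, d_v, d_model, d_ff, rate, **kwargs):
        super().__init__(**kwargs)
        # Masked multi-head self-attention over the target sequence
        self.multihead_attention1 = MultiHeadAttention(h, d_k, d_v, d_model)
        self.dropout1 = Dropout(rate)
        self.add_norm1 = AddNormalization()
        # Encoder-decoder attention over the encoder output
        self.multihead_attention2 = MultiHeadAttention(h, d_k, d_v, d_model)
        self.dropout2 = Dropout(rate)
        self.add_norm2 = AddNormalization()
        # Fully connected feed-forward network
        self.feed_forward = FeedForward(d_ff, d_model)
        self.dropout3 = Dropout(rate)
        self.add_norm3 = AddNormalization()
```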

Notice here that, since the code for the different sub-layers has been saved into several Python scripts (namely, multihead_attention.py and encoder.py), it is necessary to import them to be able to use the required classes.

As you did for the Transformer encoder, you will now create the class method, call(), that implements all the decoder sub-layers:
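A sketch of this call() method follows, assuming the MultiHeadAttention layer is invoked as attention(queries, keys, values, mask) and the AddNormalization layer as add_norm(x, sublayer_x), as in the encoder tutorial:

```python
    def call(self, x, encoder_output, lookahead_mask, padding_mask, training):
        # First sub-layer: masked multi-head self-attention on the target sequence
        multihead_output1 = self.multihead_attention1(x, x, x, lookahead_mask)
        multihead_output1 = self.dropout1(multihead_output1, training=training)
        addnorm_output1 = self.add_norm1(x, multihead_output1)

        # Second sub-layer: queries from the decoder, keys and values from the encoder
        multihead_output2 = self.multihead_attention2(addnorm_output1, encoder_output,
                                                      encoder_output, padding_mask)
        multihead_output2 = self.dropout2(multihead_output2, training=training)
        addnorm_output2 = self.add_norm2(addnorm_output1, multihead_output2)

        # Third sub-layer: fully connected feed-forward network
        feedforward_output = self.feed_forward(addnorm_output2)
        feedforward_output = self.dropout3(feedforward_output, training=training)
        return self.add_norm3(addnorm_output2, feedforward_output)
```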

The multi-head attention sub-layers can also receive a padding mask or a look-ahead mask. As a brief reminder of what was said in a previous tutorial, the padding mask is necessary to prevent the zero padding in the input sequence from being processed along with the actual input values. The look-ahead mask prevents the decoder from attending to succeeding words, such that the prediction for a particular word can only depend on the known outputs for the words that come before it.
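The masks themselves are created outside the decoder and passed in as arguments. As a rough illustration only (not part of the decoder listings here), a look-ahead mask could be built as follows, assuming the convention that a value of 1 marks a position to be suppressed:

```python
import tensorflow as tf

def lookahead_mask(shape):
    # Upper-triangular matrix of ones above the diagonal:
    # 1 marks a future position that attention should not see
    return 1 - tf.linalg.band_part(tf.ones((shape, shape)), -1, 0)

print(lookahead_mask(4))
# values: [[0. 1. 1. 1.]
#          [0. 0. 1. 1.]
#          [0. 0. 0. 1.]
#          [0. 0. 0. 0.]]
```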

The same call() class method can also receive a training flag to only apply the Dropout layers during training when the flag’s value is set to True.

The Transformer Decoder

The Transformer decoder takes the decoder layer you have just implemented and replicates it identically $N$ times. 

You will create the following Decoder() class to implement the Transformer decoder:
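A sketch of the Decoder class constructor follows. The PositionEmbeddingFixedWeights class is assumed to live in a positional_encoding.py script and to take the sequence length, vocabulary size, and model dimensionality as arguments:

```python
from tensorflow.keras.layers import Layer, Dropout
from positional_encoding import PositionEmbeddingFixedWeights  # module name assumed

# Implementing the decoder
class Decoder(Layer):
    def __init__(self, vocab_size, sequence_length, h, d_k, d_v, d_model, d_ff, n, rate, **kwargs):
        super().__init__(**kwargs)
        # Word embedding combined with fixed positional encodings
        self.pos_encoding = PositionEmbeddingFixedWeights(sequence_length, vocab_size, d_model)
        self.dropout = Dropout(rate)
        # Stack of N identical decoder layers
        self.decoder_layer = [DecoderLayer(h, d_k, d_v, d_model, d_ff, rate) for _ in range(n)]
```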

As in the Transformer encoder, the first multi-head attention block on the decoder side receives its input sequence only after the latter has undergone word embedding and positional encoding. For this purpose, an instance of the PositionEmbeddingFixedWeights class (covered in a previous tutorial) is initialized and assigned to the pos_encoding variable.

The final step is to create a class method, call(), that applies word embedding and positional encoding to the input sequence and feeds the result, together with the encoder output, to $N$ decoder layers:
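A sketch of the corresponding call() method, with dropout applied to the positional encodings before they enter the decoder stack:

```python
    def call(self, output_target, encoder_output, lookahead_mask, padding_mask, training):
        # Embed the target sequence and add the positional encodings
        pos_encoding_output = self.pos_encoding(output_target)
        x = self.dropout(pos_encoding_output, training=training)

        # Pass the result, together with the encoder output, through the N decoder layers
        for layer in self.decoder_layer:
            x = layer(x, encoder_output, lookahead_mask, padding_mask, training)

        return x
```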

The code listing for the full Transformer decoder is the following:
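Combining the sketches above, a decoder.py module might look as follows (with the same assumptions as before regarding the imported classes and their call signatures):

```python
from tensorflow.keras.layers import Layer, Dropout
# Sub-layer classes from the earlier tutorials (assumed module and class names)
from multihead_attention import MultiHeadAttention
from positional_encoding import PositionEmbeddingFixedWeights
from encoder import AddNormalization, FeedForward

# Implementing the decoder layer
class DecoderLayer(Layer):
    def __init__(self, h, d_k, d_v, d_model, d_ff, rate, **kwargs):
        super().__init__(**kwargs)
        self.multihead_attention1 = MultiHeadAttention(h, d_k, d_v, d_model)
        self.dropout1 = Dropout(rate)
        self.add_norm1 = AddNormalization()
        self.multihead_attention2 = MultiHeadAttention(h, d_k, d_v, d_model)
        self.dropout2 = Dropout(rate)
        self.add_norm2 = AddNormalization()
        self.feed_forward = FeedForward(d_ff, d_model)
        self.dropout3 = Dropout(rate)
        self.add_norm3 = AddNormalization()

    def call(self, x, encoder_output, lookahead_mask, padding_mask, training):
        # Masked multi-head self-attention over the target sequence
        multihead_output1 = self.multihead_attention1(x, x, x, lookahead_mask)
        multihead_output1 = self.dropout1(multihead_output1, training=training)
        addnorm_output1 = self.add_norm1(x, multihead_output1)

        # Encoder-decoder attention: queries from the decoder, keys and values from the encoder
        multihead_output2 = self.multihead_attention2(addnorm_output1, encoder_output,
                                                      encoder_output, padding_mask)
        multihead_output2 = self.dropout2(multihead_output2, training=training)
        addnorm_output2 = self.add_norm2(addnorm_output1, multihead_output2)

        # Fully connected feed-forward network
        feedforward_output = self.feed_forward(addnorm_output2)
        feedforward_output = self.dropout3(feedforward_output, training=training)
        return self.add_norm3(addnorm_output2, feedforward_output)

# Implementing the decoder
class Decoder(Layer):
    def __init__(self, vocab_size, sequence_length, h, d_k, d_v, d_model, d_ff, n, rate, **kwargs):
        super().__init__(**kwargs)
        self.pos_encoding = PositionEmbeddingFixedWeights(sequence_length, vocab_size, d_model)
        self.dropout = Dropout(rate)
        self.decoder_layer = [DecoderLayer(h, d_k, d_v, d_model, d_ff, rate) for _ in range(n)]

    def call(self, output_target, encoder_output, lookahead_mask, padding_mask, training):
        # Embed the target sequence, add the positional encodings, and apply dropout
        pos_encoding_output = self.pos_encoding(output_target)
        x = self.dropout(pos_encoding_output, training=training)

        # Pass the result, together with the encoder output, through the N decoder layers
        for layer in self.decoder_layer:
            x = layer(x, encoder_output, lookahead_mask, padding_mask, training)

        return x
```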

Testing Out the Code

You will work with the parameter values specified in the paper, Attention Is All You Need, by Vaswani et al. (2017):
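These are the values of the base configuration described in the paper; the batch size is an arbitrary choice used here only for the dummy data:

```python
h = 8               # Number of self-attention heads
d_k = 64            # Dimensionality of the linearly projected queries and keys
d_v = 64            # Dimensionality of the linearly projected values
d_ff = 2048         # Dimensionality of the inner fully connected layer
d_model = 512       # Dimensionality of the model sub-layers' outputs
n = 6               # Number of layers in the decoder stack

batch_size = 64     # Batch size (arbitrary choice for the dummy data)
dropout_rate = 0.1  # Frequency of dropping the input units in the dropout layers
```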

As for the input sequence, you will work with dummy data for the time being until you arrive at the stage of training the complete Transformer model in a separate tutorial, at which point you will use actual sentences:
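For example, a random target sequence and a random encoder output of matching dimensions can stand in for real data; the vocabulary size and sequence length below are arbitrary choices:

```python
from numpy import random

dec_vocab_size = 20   # Vocabulary size for the decoder (arbitrary)
input_seq_length = 5  # Maximum length of the target sequence (arbitrary)

input_seq = random.random((batch_size, input_seq_length))
enc_output = random.random((batch_size, input_seq_length, d_model))
```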

Next, you will create a new instance of the Decoder class, assigning its output to the decoder variable, subsequently passing in the input arguments, and printing the result. You will set the padding and look-ahead masks to None for the time being, but you will return to these when you implement the complete Transformer model:
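With the Decoder class sketched above, that step might look like this; both masks are passed explicitly as None so that the call() method still receives all of its expected arguments:

```python
decoder = Decoder(dec_vocab_size, input_seq_length, h, d_k, d_v, d_model, d_ff, n, dropout_rate)
print(decoder(input_seq, enc_output, None, None, True))
```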

Tying everything together produces the following code listing:
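Under the same assumptions, a complete test script might look as follows, with the decoder sketch above saved as decoder.py:

```python
from decoder import Decoder  # the Decoder sketch above, assumed saved as decoder.py
from numpy import random

h = 8               # Number of self-attention heads
d_k = 64            # Dimensionality of the linearly projected queries and keys
d_v = 64            # Dimensionality of the linearly projected values
d_ff = 2048         # Dimensionality of the inner fully connected layer
d_model = 512       # Dimensionality of the model sub-layers' outputs
n = 6               # Number of layers in the decoder stack

batch_size = 64     # Batch size (arbitrary choice for the dummy data)
dropout_rate = 0.1  # Frequency of dropping the input units in the dropout layers

dec_vocab_size = 20   # Vocabulary size for the decoder (arbitrary)
input_seq_length = 5  # Maximum length of the target sequence (arbitrary)

input_seq = random.random((batch_size, input_seq_length))
enc_output = random.random((batch_size, input_seq_length, d_model))

decoder = Decoder(dec_vocab_size, input_seq_length, h, d_k, d_v, d_model, d_ff, n, dropout_rate)
print(decoder(input_seq, enc_output, None, None, True))
```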

Running this code produces an output of shape (batch size, sequence length, model dimensionality). Note that you will likely see a different output due to the random initialization of the input sequence and the parameter values of the Dense layers. 

Further Reading

This section provides more resources on the topic if you are looking to go deeper.

Papers

  • Attention Is All You Need, 2017

Summary

In this tutorial, you discovered how to implement the Transformer decoder from scratch in TensorFlow and Keras. 

Specifically, you learned:

  • The layers that form part of the Transformer decoder
  • How to implement the Transformer decoder from scratch

Do you have any questions?
Ask your questions in the comments below, and I will do my best to answer.

15 Responses to Implementing the Transformer Decoder from Scratch in TensorFlow and Keras

  1. Dev October 7, 2022 at 2:23 pm #

    This series of blogs on transformers is the best way to learn about transformers on the internet. Thank you!

    • James Carmichael October 8, 2022 at 6:53 am #

      You are very welcome Dev! We appreciate the feedback and support!

  2. zahra November 5, 2022 at 6:56 am #

    Very informative as always.
    One question: can you think of any limitations of decoder-only transformers, such as GPT, in any approaches related to NLP?

  3. Sreedhar M May 24, 2023 at 12:43 am #

    Any student discount for GAN and transformer models? And how can these models, especially transformer models, be applied to satellite images?

    • James Carmichael May 24, 2023 at 8:37 am #

      Hi Sreedhar…Please send an email regarding your questions on student discounts.

  4. maximoskp October 24, 2023 at 5:05 pm #

    Hi and thank you for all those amazing free tutorials!

    I think I have spotted a typo:

    In the code listing for the full Transformer decoder (and the respective part given above it), in line 39, instead of

    addnorm_output2 = self.add_norm1(addnorm_output1, multihead_output2)

    I think it should be:

    addnorm_output2 = self.add_norm2(addnorm_output1, multihead_output2)

    Sorry if I missed something! Thanks again – cheers!

    • James Carmichael October 25, 2023 at 9:06 am #

      Thank you for your feedback and support! We greatly appreciate it!

      • chuck January 13, 2024 at 8:47 pm #

        Yet it still hasn’t been fixed. Does Jason Brownlee still run this site?

        • James Carmichael January 14, 2024 at 9:04 am #

          Thank you for your feedback! Yes, he does!

  5. Amir December 3, 2023 at 3:36 am #

    Many thanks, Dear Jason Brownlee.

    I’ve followed all of your tutorials on transformers.

    I’ve learned a lot, and I just want to express my thanks.

    However, I have a small suggestion. Could you please create a guide on implementing Transformers specifically for time series data, focusing on forecasting, classification, or anomaly detection? One explanation would be sufficient for us.

    Thank you in advance.

    • James Carmichael December 3, 2023 at 7:39 am #

      Hi Amir…Thank you for your support, feedback and suggestions! Your suggestion is a great one! Please ensure you are subscribed to our newsletter so that you will be notified of new content.

  6. Ak May 22, 2024 at 7:59 pm #

    Hi, great blog! Thanks for this.

    I have a question. If the output of the final feed-forward layer is of the dimension (sequence_length, model_dim), would this not vary at every time step, since the input sequence length to the first masked multi-head attention head is the decoded sequence so far? Would the queries for the cross-attention layer not be increasing in number by 1 at every time step? Wouldn’t that also mean that the input to the Linear projection layer changes in dimension at every time step? This can’t happen though.

    So how is the varying length of the decoder input accounted for? Do we assume that the number of queries in the decoder at every time step is fixed ( = maximum output sequence length) and the masking is what prevents future noise from coming into the queries?

    Thanks in advance, and cheers!

    • James Carmichael May 23, 2024 at 7:53 am #

      Hi Ak…Your question delves into the mechanics of how sequence length is managed in the Transformer model, particularly in the decoder during tasks like text generation. Let’s break down the concepts and address your concerns step by step.

      Understanding Transformer decoding:

      1. Output of the final feed-forward layer: the output dimension of the final feed-forward layer in the decoder is indeed (sequence_length, model_dim). This represents the hidden states of the decoded sequence at a particular time step.

      2. Sequence length and masking: in a Transformer decoder, the input sequence length can vary as you generate tokens step by step. However, the model handles this dynamically.

      Handling varying sequence lengths:

      1. Masked multi-head attention: at each time step t, the decoder generates one token and then uses all tokens generated so far to predict the next token. This means that at time step t, the decoder’s input sequence length is t. To prevent the model from attending to future tokens (which are not yet generated), the decoder uses a causal mask (or look-ahead mask) during the masked multi-head attention. This mask ensures that for each position i in the sequence, the model can only attend to positions 0 to i.

      2. Cross-attention: in the cross-attention layer, the queries come from the decoder’s previous layer (at the current time step), and the keys and values come from the encoder’s output (which is fixed for a given input sequence). The number of queries in the cross-attention corresponds to the current length of the decoded sequence.

      Linear projection layer: the input to the linear projection layer does vary in sequence length at each time step, but this is accounted for by ensuring that operations within the model are compatible with variable sequence lengths. The linear projection layer applies the same set of weights to each position in the sequence, regardless of its length, which is a common operation in sequence models.

      Sequence length management: to handle sequences of varying lengths efficiently within batches, padding is used. Sequences are padded to the maximum length in the batch, and a mask is applied to ensure the padding tokens do not affect the computation. During generation, the decoder processes one token at a time, updating the sequence length dynamically with each step.

      Example of the generation process:

      1. Time step 1: decoder input [SOS]; the mask allows attention only to [SOS]; output is the first token prediction.

      2. Time step 2: decoder input [SOS, First_Token]; the mask allows attention to [SOS, First_Token]; output is the second token prediction.

      3. Subsequent time steps: this process repeats, with the sequence length increasing by one each time and the mask extending to allow attention to the entire generated sequence so far.

      Fixed-length queries and masking: the model does not assume a fixed length for queries. Instead, it adjusts dynamically based on the sequence generated up to that point, and future positions are masked out to prevent the model from accessing tokens that have not yet been generated.

      In summary, the Transformer decoder handles varying sequence lengths dynamically through padding, masking, and efficient use of batch processing. The causal mask ensures that only valid tokens are attended to at each step, maintaining the integrity of the generation process.

  7. Tushar August 4, 2024 at 10:31 pm #

    The call method of Decoder layer expects 5 arguments:

    call(self, output_target, encoder_output, lookahead_mask, padding_mask, training)

    While testing out the code only 4 are given.

    decoder(input_seq, enc_output, None, True)

    Please correct this.

    • James Carmichael August 5, 2024 at 3:27 am #

      Thank you Tushar!
