
Training the Transformer Model

Last Updated on November 16, 2022

We have put together the complete Transformer model, and now we are ready to train it for neural machine translation. We shall use a training dataset for this purpose, which contains short English and German sentence pairs. We will also revisit the role of masking in computing the accuracy and loss metrics during the training process. 

In this tutorial, you will discover how to train the Transformer model for neural machine translation. 

After completing this tutorial, you will know:

  • How to prepare the training dataset
  • How to apply a padding mask to the loss and accuracy computations
  • How to train the Transformer model

Kick-start your project with my book Building Transformer Models with Attention. It provides self-study tutorials with working code to guide you into building a fully-working transformer model that can
translate sentences from one language to another...

Let’s get started. 

Training the transformer model
Photo by v2osk, some rights reserved.

Tutorial Overview

This tutorial is divided into four parts; they are:

  • Recap of the Transformer Architecture
  • Preparing the Training Dataset
  • Applying a Padding Mask to the Loss and Accuracy Computations
  • Training the Transformer Model

Prerequisites

For this tutorial, we assume that you are already familiar with the theory behind the Transformer model and its implementation, both covered in the earlier tutorials of this series.

Recap of the Transformer Architecture

Recall that the Transformer architecture follows an encoder-decoder structure. The encoder, on the left-hand side, is tasked with mapping an input sequence to a sequence of continuous representations; the decoder, on the right-hand side, receives the output of the encoder together with the decoder output at the previous time step to generate an output sequence.

The encoder-decoder structure of the Transformer architecture
Taken from “Attention Is All You Need”

In generating an output sequence, the Transformer does not rely on recurrence and convolutions.

You have seen how to implement the complete Transformer model, so you can now proceed to train it for neural machine translation. 

Let’s start first by preparing the dataset for training. 

Preparing the Training Dataset

For this purpose, you can refer to a previous tutorial that covers material about preparing the text data for training. 

You will also use a dataset that contains short English and German sentence pairs, which you may download here. This particular dataset has already been cleaned by removing non-printable and non-alphabetic characters and punctuation, normalizing all Unicode characters to ASCII, and changing all uppercase letters to lowercase. Hence, you can skip the cleaning step, which is typically part of the data preparation process. However, if you use a dataset that does not come readily cleaned, you can refer to this previous tutorial to learn how to do so. 

Let’s proceed by creating the PrepareDataset class that implements the following steps:

  • Loads the dataset from a specified filename. 

  • Selects the number of sentences to use from the dataset. Since the dataset is large, you will reduce its size to limit the training time. However, you may explore using the full dataset as an extension to this tutorial.

  • Appends start (<START>) and end-of-string (<EOS>) tokens to each sentence. For example, the English sentence, i like to run, now becomes, <START> i like to run <EOS>. This also applies to its corresponding translation in German, ich gehe gerne joggen, which now becomes, <START> ich gehe gerne joggen <EOS>.

  • Shuffles the dataset randomly. 

  • Splits the shuffled dataset based on a pre-defined ratio.

  • Creates and trains a tokenizer on the text sequences that will be fed into the encoder and finds the length of the longest sequence as well as the vocabulary size. 

  • Tokenizes the sequences of text that will be fed into the encoder by creating a vocabulary of words and replacing each word with its corresponding vocabulary index. The <START> and <EOS> tokens will also form part of this vocabulary. Each sequence is also padded to the maximum phrase length.  

  • Creates and trains a tokenizer on the text sequences that will be fed into the decoder, and finds the length of the longest sequence as well as the vocabulary size.

  • Repeats a similar tokenization and padding procedure for the sequences of text that will be fed into the decoder.

The complete code listing is as follows (refer to this previous tutorial for further details):

Before moving on to train the Transformer model, let’s first have a look at the output of the PrepareDataset class corresponding to the first sentence in the training dataset:

(Note: Since the dataset has been randomly shuffled, you will likely see a different output.)

You can see that, originally, you had a four-word sentence (did tom tell you) to which you appended the start and end-of-string tokens. Then you proceeded to vectorize it (you may notice that the <START> and <EOS> tokens are assigned the vocabulary indices 1 and 2, respectively). The vectorized text was also padded with zeros, such that the length of the end result matches the maximum sequence length of the encoder:

You can similarly check out the corresponding target data that is fed into the decoder:

Here, the length of the end result matches the maximum sequence length of the decoder:

Applying a Padding Mask to the Loss and Accuracy Computations

Recall that the purpose of the padding masks at the encoder and decoder is to make sure that the zero values just appended to the vectorized inputs are not processed along with the actual input values. 

This also holds true for the training process, where a padding mask is required so that the zero padding values in the target data are not considered in the computation of the loss and accuracy.

Let’s have a look at the computation of loss first. 

This will be computed using a sparse categorical cross-entropy loss function between the target and predicted values and subsequently multiplied by a padding mask so that only the valid non-zero values are considered. The returned loss is the mean of the unmasked values:
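A minimal sketch of such a masked loss function might look as follows (the name loss_fcn is an assumption, carried through the later training-step sketch):

```python
import tensorflow as tf

def loss_fcn(target, prediction):
    # mask marking the valid, non-zero (non-padding) positions in the target
    mask = tf.cast(tf.math.logical_not(tf.math.equal(target, 0)), tf.float32)
    # sparse categorical cross-entropy at every token position, then masked
    loss = tf.keras.losses.sparse_categorical_crossentropy(target, prediction) * mask
    # mean over the unmasked values only
    return tf.reduce_sum(loss) / tf.reduce_sum(mask)
```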

For the computation of accuracy, the predicted and target values are first compared. The predicted output is a tensor of size (batch_size, dec_seq_length, dec_vocab_size) containing probability values (generated by the softmax on the decoder side) for the tokens in the output. To compare these with the target values, only the token with the highest probability value at each position is considered, with its dictionary index retrieved through the operation argmax(prediction, axis=2). Following the application of a padding mask, the returned accuracy is the mean of the unmasked values:
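A matching sketch of the masked accuracy computation (again, the name accuracy_fcn is an assumption):

```python
import tensorflow as tf

def accuracy_fcn(target, prediction):
    # mask marking the valid, non-zero (non-padding) positions in the target
    mask = tf.math.logical_not(tf.math.equal(target, 0))
    # keep only the highest-probability token at each position and compare
    hits = tf.equal(tf.cast(target, tf.int64), tf.argmax(prediction, axis=2))
    # count hits at the unmasked positions only
    hits = tf.math.logical_and(mask, hits)
    return (tf.reduce_sum(tf.cast(hits, tf.float32))
            / tf.reduce_sum(tf.cast(mask, tf.float32)))
```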

Training the Transformer Model

Let’s first define the model and training parameters as specified by Vaswani et al. (2017):

(Note: Only consider two epochs to limit the training time. However, you may explore training the model further as an extension to this tutorial.)
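The model parameter values below follow Vaswani et al. (2017), with the training parameters as described above; the variable names themselves are assumptions:

```python
# model parameters from Vaswani et al. (2017)
h = 8             # number of self-attention heads
d_k = 64          # dimensionality of the queries and keys
d_v = 64          # dimensionality of the values
d_model = 512     # dimensionality of the model embeddings
d_ff = 2048       # dimensionality of the inner fully-connected layer
n = 6             # number of stacked encoder/decoder layers

# training parameters
epochs = 2        # kept small to limit the training time
batch_size = 64
beta_1 = 0.9      # Adam optimizer parameters
beta_2 = 0.98
epsilon = 1e-9
dropout_rate = 0.1
```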

You also need to implement a learning rate scheduler that initially increases the learning rate linearly for the first warmup_steps and then decreases it proportionally to the inverse square root of the step number. Vaswani et al. express this by the following formula: 

$$\text{learning\_rate} = d_{\text{model}}^{-0.5} \cdot \min\left(\text{step}^{-0.5},\ \text{step} \cdot \text{warmup\_steps}^{-1.5}\right)$$

 

An instance of the LRScheduler class is subsequently passed on as the learning_rate argument of the Adam optimizer:
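A sketch of such a scheduler, subclassing Keras’ LearningRateSchedule and plugged into Adam with the beta_1, beta_2, and epsilon values of Vaswani et al. (2017):

```python
import tensorflow as tf

class LRScheduler(tf.keras.optimizers.schedules.LearningRateSchedule):
    def __init__(self, d_model, warmup_steps=4000):
        super().__init__()
        self.d_model = tf.cast(d_model, tf.float32)
        self.warmup_steps = warmup_steps

    def __call__(self, step):
        step = tf.cast(step, tf.float32)
        # linear warm-up for the first warmup_steps, then inverse
        # square-root decay with the step number
        arg1 = step ** -0.5
        arg2 = step * (self.warmup_steps ** -1.5)
        return (self.d_model ** -0.5) * tf.math.minimum(arg1, arg2)

# the schedule instance is passed as the learning_rate argument of Adam
optimizer = tf.keras.optimizers.Adam(LRScheduler(512),
                                     beta_1=0.9, beta_2=0.98, epsilon=1e-9)
```

The two arguments of the minimum cross exactly at step = warmup_steps, which is where the learning rate peaks.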

Next, split the dataset into batches in preparation for training:
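One way to do this is with the tf.data API; the dummy tensor shapes below merely stand in for the trainX and trainY produced by PrepareDataset:

```python
import tensorflow as tf

# stand-ins for the tokenized, padded output of PrepareDataset
trainX = tf.zeros((9000, 7), dtype=tf.int64)
trainY = tf.zeros((9000, 12), dtype=tf.int64)
batch_size = 64

# pair the encoder and decoder sequences and group them into batches
train_dataset = tf.data.Dataset.from_tensor_slices((trainX, trainY))
train_dataset = train_dataset.batch(batch_size)
```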

This is followed by the creation of a model instance:

In training the Transformer model, you will write your own training loop, which incorporates the loss and accuracy functions that were implemented earlier. 

The default runtime in TensorFlow 2.0 is eager execution, which means that operations execute immediately, one after the other. Eager execution is simple and intuitive, which makes debugging easier. Its downside, however, is that it cannot take advantage of the global performance optimizations available under graph execution, where a computational graph is built before the tensor computations are executed. Building the graph introduces an upfront overhead, so graph execution is mostly recommended for training large models, whereas eager execution may be better suited to the simpler operations of small ones. Since the Transformer model is sufficiently large, apply graph execution to train it. 

In order to do so, you will use the @function decorator as follows:
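A self-contained sketch of such a train_step function follows. The tutorial's @function decorator is TensorFlow's tf.function; the toy model and the inlined masked loss and accuracy below are stand-ins so that the sketch runs on its own (in the tutorial, training_model is the TransformerModel instance and loss_fcn/accuracy_fcn are the functions defined earlier):

```python
import tensorflow as tf

# hypothetical stand-in for the TransformerModel: embedding + softmax output
class ToyModel(tf.keras.Model):
    def __init__(self, vocab_size=12, d_model=8):
        super().__init__()
        self.embedding = tf.keras.layers.Embedding(vocab_size, d_model)
        self.out = tf.keras.layers.Dense(vocab_size, activation="softmax")

    def call(self, encoder_input, decoder_input, training=False):
        return self.out(self.embedding(decoder_input))

training_model = ToyModel()
optimizer = tf.keras.optimizers.Adam()
train_loss = tf.keras.metrics.Mean(name="train_loss")
train_accuracy = tf.keras.metrics.Mean(name="train_accuracy")

def loss_fcn(target, prediction):
    # masked sparse categorical cross-entropy, as defined earlier
    mask = tf.cast(tf.math.logical_not(tf.math.equal(target, 0)), tf.float32)
    loss = tf.keras.losses.sparse_categorical_crossentropy(target, prediction) * mask
    return tf.reduce_sum(loss) / tf.reduce_sum(mask)

def accuracy_fcn(target, prediction):
    # masked accuracy, as defined earlier
    mask = tf.math.logical_not(tf.math.equal(target, 0))
    hits = tf.math.logical_and(mask, tf.equal(tf.cast(target, tf.int64),
                                              tf.argmax(prediction, axis=2)))
    return (tf.reduce_sum(tf.cast(hits, tf.float32))
            / tf.reduce_sum(tf.cast(mask, tf.float32)))

@tf.function
def train_step(encoder_input, decoder_input, decoder_output):
    with tf.GradientTape() as tape:
        # forward pass, followed by the masked loss and accuracy
        prediction = training_model(encoder_input, decoder_input, training=True)
        loss = loss_fcn(decoder_output, prediction)
        accuracy = accuracy_fcn(decoder_output, prediction)
    # backward pass: update the trainable model parameters
    gradients = tape.gradient(loss, training_model.trainable_weights)
    optimizer.apply_gradients(zip(gradients, training_model.trainable_weights))
    train_loss(loss)
    train_accuracy(accuracy)
```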

With the addition of the @function decorator, a function that takes tensors as input will be compiled into a graph. If the @function decorator is commented out, the function is, alternatively, run with eager execution. 

The next step is implementing the training loop that will call the train_step function above. The training loop will iterate over the specified number of epochs and the dataset batches. For each batch, the train_step function computes the training loss and accuracy measures and applies the optimizer to update the trainable model parameters. A checkpoint manager is also included to save a checkpoint after every five epochs:
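Putting the loop together might look as follows. The toy model, the dummy data, and the "./checkpoints" directory are assumptions included so the sketch runs on its own; in the tutorial, training_model is the TransformerModel and train_dataset holds the batched sentence pairs:

```python
import tensorflow as tf

# hypothetical stand-in for the TransformerModel: embedding + softmax output
class ToyModel(tf.keras.Model):
    def __init__(self, vocab_size=12, d_model=8):
        super().__init__()
        self.embedding = tf.keras.layers.Embedding(vocab_size, d_model)
        self.out = tf.keras.layers.Dense(vocab_size, activation="softmax")

    def call(self, encoder_input, decoder_input, training=False):
        return self.out(self.embedding(decoder_input))

training_model = ToyModel()
optimizer = tf.keras.optimizers.Adam()
train_loss = tf.keras.metrics.Mean(name="train_loss")
train_accuracy = tf.keras.metrics.Mean(name="train_accuracy")

def loss_fcn(target, prediction):
    mask = tf.cast(tf.math.logical_not(tf.math.equal(target, 0)), tf.float32)
    loss = tf.keras.losses.sparse_categorical_crossentropy(target, prediction) * mask
    return tf.reduce_sum(loss) / tf.reduce_sum(mask)

def accuracy_fcn(target, prediction):
    mask = tf.math.logical_not(tf.math.equal(target, 0))
    hits = tf.math.logical_and(mask, tf.equal(tf.cast(target, tf.int64),
                                              tf.argmax(prediction, axis=2)))
    return (tf.reduce_sum(tf.cast(hits, tf.float32))
            / tf.reduce_sum(tf.cast(mask, tf.float32)))

@tf.function
def train_step(encoder_input, decoder_input, decoder_output):
    with tf.GradientTape() as tape:
        prediction = training_model(encoder_input, decoder_input, training=True)
        loss = loss_fcn(decoder_output, prediction)
        accuracy = accuracy_fcn(decoder_output, prediction)
    gradients = tape.gradient(loss, training_model.trainable_weights)
    optimizer.apply_gradients(zip(gradients, training_model.trainable_weights))
    train_loss(loss)
    train_accuracy(accuracy)

# dummy tokenized data standing in for the output of PrepareDataset
trainX = tf.random.uniform((16, 5), minval=1, maxval=12, dtype=tf.int64)
trainY = tf.random.uniform((16, 6), minval=1, maxval=12, dtype=tf.int64)
train_dataset = tf.data.Dataset.from_tensor_slices((trainX, trainY)).batch(8)

epochs = 2
ckpt = tf.train.Checkpoint(model=training_model, optimizer=optimizer)
ckpt_manager = tf.train.CheckpointManager(ckpt, "./checkpoints", max_to_keep=3)

for epoch in range(epochs):
    train_loss.reset_state()
    train_accuracy.reset_state()
    for step, (train_batchX, train_batchY) in enumerate(train_dataset):
        # the decoder input is the target sequence offset by one position
        encoder_input = train_batchX[:, 1:]
        decoder_input = train_batchY[:, :-1]
        decoder_output = train_batchY[:, 1:]
        train_step(encoder_input, decoder_input, decoder_output)
    print(f"Epoch {epoch + 1}: loss {train_loss.result():.4f}, "
          f"accuracy {train_accuracy.result():.4f}")
    # save a checkpoint after every five epochs
    if (epoch + 1) % 5 == 0:
        ckpt_manager.save()
```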

An important point to keep in mind is that the input to the decoder is the target sequence offset by one position to the right. The idea behind this offset, combined with a look-ahead mask in the first multi-head attention block of the decoder, is to ensure that the prediction for the current token can only depend on the previous tokens. 

This masking, combined with the fact that the output embeddings are offset by one position, ensures that the predictions for position i can depend only on the known outputs at positions less than i.

Attention Is All You Need, 2017. 

It is for this reason that the encoder input, the decoder input, and the decoder target are fed into the Transformer model in the following manner:

```python
encoder_input = train_batchX[:, 1:]   # drop the <START> token from the encoder input
decoder_input = train_batchY[:, :-1]  # decoder input: target sequence without its final token
decoder_output = train_batchY[:, 1:]  # ground truth: target sequence offset by one position
```

Putting together the complete code listing produces the following:

Running the code produces output similar to the following (you will likely see different loss and accuracy values because the training is from scratch, and the training time depends on the computational resources you have available):

For comparison, the code takes 155.13s to run using eager execution alone on the same CPU-only platform, which shows the benefit of using graph execution. 

Further Reading

This section provides more resources on the topic if you are looking to go deeper.

Books

  • Building Transformer Models with Attention, 2022

Papers

  • Attention Is All You Need, 2017

Summary

In this tutorial, you discovered how to train the Transformer model for neural machine translation.

Specifically, you learned:

  • How to prepare the training dataset
  • How to apply a padding mask to the loss and accuracy computations
  • How to train the Transformer model

Do you have any questions?
Ask your questions in the comments below, and I will do my best to answer.


9 Responses to Training the Transformer Model

  1. Jack October 14, 2022 at 2:51 pm #

Dear Dr. Stefania Cristina,

When I run from model import TransformerModel, it shows “No module named ‘model’”. How can I fix it?
    Thank you

    • Stefania Cristina October 14, 2022 at 6:02 pm #

      Hi Jack, model is a Python (.py) script file in which I had saved the TransformerModel class that we had previously put together in this tutorial: https://machinelearningmastery.com/joining-the-transformer-encoder-and-decoder-and-masking/. You can create one with the same name (model.py) for yourself too.

      • Jack October 14, 2022 at 7:42 pm #

I get it, thank you, Dr. Stefania Cristina.

        • Jack October 17, 2022 at 12:47 pm #

          Dear Dr Stefania Cristina

This tutorial is used for neural machine translation. If I need to use it for data classification and data regression tasks, how should I change it?

          Cheers
          Jack

  2. Jack October 17, 2022 at 12:37 pm #

    Dear Dr Stefania Cristina

To avoid training again, the trained checkpoint needs to be loaded. How can I code that?

    Thank you

  3. Sacha van Weeren October 21, 2022 at 3:12 am #

Thanks for the blogs so far. This has been a very interesting series.

Please note there is a bug in the PrepareDataset class: trainY = enc_tokenizer.texts_to_sequences(train[:, 1]). This should be dec_tokenizer instead of enc_tokenizer. If you use this class as is, most sequences will only return 2 tokens, as it will not recognize the German words in the English vocab.

    • James Carmichael October 21, 2022 at 7:39 am #

      Thank you for the feedback Sacha! We appreciate it!

  4. Michel November 20, 2022 at 1:10 am #

Thank you for this awesome tutorial. I want to know how I can adapt this same code for protein disordered region prediction.
    Input is a protein sequence and output is a binary sequence (1 if the corresponding residue is disordered and 0 otherwise).
    Example:
    input: AAAALLLLAKKK
    output: 111111100001
