
Implementing the Transformer Encoder from Scratch in TensorFlow and Keras

Having seen how to implement the scaled dot-product attention and integrate it within the multi-head attention of the Transformer model, let's progress one step further toward a complete Transformer model by implementing its encoder. Our end goal remains to apply the complete model to Natural Language Processing (NLP).

In this tutorial, you will discover how to implement the Transformer encoder from scratch in TensorFlow and Keras.

After completing this tutorial, you will know:

  • The layers that form part of the Transformer encoder.
  • How to implement the Transformer encoder from scratch.


Let's get started.

Implementing the transformer encoder from scratch in TensorFlow and Keras
Photo by ian dooley, some rights reserved.

Tutorial Overview

This tutorial is divided into three parts; they are:

  • Recap of the Transformer Architecture
    • The Transformer Encoder
  • Implementing the Transformer Encoder From Scratch
    • The Fully Connected Feed-Forward Neural Network and Layer Normalization
    • The Encoder Layer
    • The Transformer Encoder
  • Testing Out the Code

Prerequisites

For this tutorial, we assume that you are already familiar with:

  • The Transformer model
  • The scaled dot-product attention
  • The multi-head attention mechanism
  • The Transformer positional encoding

Recap of the Transformer Architecture

Recall having seen that the Transformer architecture follows an encoder-decoder structure. The encoder, on the left-hand side, is tasked with mapping an input sequence to a sequence of continuous representations; the decoder, on the right-hand side, receives the output of the encoder together with the decoder output at the previous time step to generate an output sequence.

The encoder-decoder structure of the Transformer architecture
Taken from “Attention Is All You Need”

In generating an output sequence, the Transformer does not rely on recurrence and convolutions.

You have seen that the decoder part of the Transformer shares many similarities in its architecture with the encoder. In this tutorial, you will focus on the components that form part of the Transformer encoder.

The Transformer Encoder

The Transformer encoder consists of a stack of $N$ identical layers, where each layer further consists of two main sub-layers:

  • The first sub-layer comprises a multi-head attention mechanism that receives the queries, keys, and values as inputs.
  • The second sub-layer comprises a fully-connected feed-forward network.

The encoder block of the Transformer architecture
Taken from “Attention Is All You Need”

Following each of these two sub-layers is layer normalization, into which the sub-layer input (through a residual connection) and output are fed. The output of each layer normalization step is the following:

LayerNorm(Sublayer Input + Sublayer Output)

In order to facilitate such an operation, which involves an addition between the sublayer input and output, Vaswani et al. designed all sub-layers and embedding layers in the model to produce outputs of dimension, $d_{\text{model}}$ = 512.

Also, recall that the queries, keys, and values serve as the inputs to the Transformer encoder.

Here, the queries, keys, and values carry the same input sequence after this has been embedded and augmented by positional information, where the queries and keys are of dimensionality, $d_k$, and the dimensionality of the values is $d_v$.

Furthermore, Vaswani et al. also introduce regularization into the model by applying a dropout to the output of each sub-layer (before the layer normalization step), as well as to the positional encodings before these are fed into the encoder.

Let's now see how to implement the Transformer encoder from scratch in TensorFlow and Keras.


Implementing the Transformer Encoder from Scratch

The Fully Connected Feed-Forward Neural Network and Layer Normalization

Let’s begin by creating classes for the Feed Forward and Add & Norm layers that are shown in the diagram above.

Vaswani et al. tell us that the fully connected feed-forward network consists of two linear transformations with a ReLU activation in between. The first linear transformation produces an output of dimensionality, $d_{ff}$ = 2048, while the second linear transformation produces an output of dimensionality, $d_{\text{model}}$ = 512.

For this purpose, let's first create the class FeedForward that inherits from the Layer base class in Keras and initialize the dense layers and the ReLU activation:
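A minimal sketch of what this class could look like is shown below; the attribute names and the constructor arguments d_ff and d_model are naming choices of my own, mirroring the paper's notation:

```python
from tensorflow.keras.layers import Layer, Dense, ReLU

class FeedForward(Layer):
    def __init__(self, d_ff, d_model, **kwargs):
        super().__init__(**kwargs)
        self.fully_connected1 = Dense(d_ff)     # First fully connected layer (d_ff = 2048)
        self.fully_connected2 = Dense(d_model)  # Second fully connected layer (d_model = 512)
        self.activation = ReLU()                # ReLU activation in between
```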

We will add to it the class method, call(), that receives an input and passes it through the two fully connected layers with ReLU activation, returning an output of dimensionality equal to 512:
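Continuing the same sketch, the call() method simply chains the two Dense layers with the ReLU activation in between:

```python
class FeedForward(Layer):
    ...

    def call(self, x):
        # Pass the input through the first fully connected layer and the ReLU activation
        x_fc1 = self.activation(self.fully_connected1(x))
        # Project back to d_model dimensions with the second fully connected layer
        return self.fully_connected2(x_fc1)
```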

The next step is to create another class, AddNormalization, that also inherits from the Layer base class in Keras and initializes a layer normalization layer:
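A sketch of this class, keeping with the naming used so far, could be:

```python
from tensorflow.keras.layers import Layer, LayerNormalization

class AddNormalization(Layer):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.layer_norm = LayerNormalization()  # Layer normalization layer
```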

In it, include the following class method that sums its sub-layer's input and output, which it receives as inputs, and applies layer normalization to the result:
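One way to write this method:

```python
class AddNormalization(Layer):
    ...

    def call(self, x, sublayer_x):
        # Sum the sub-layer input and output (the residual connection)...
        add = x + sublayer_x
        # ...and apply layer normalization to the result
        return self.layer_norm(add)
```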

The Encoder Layer

Next, you will implement the encoder layer, which the Transformer encoder will replicate identically $N$ times.

For this purpose, let's create the class, EncoderLayer, and initialize all the sub-layers that it consists of:
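A sketch of this class is shown below; the constructor signature assumed for MultiHeadAttention (the number of heads plus the dimensionalities of the projections and the model) is carried over from the earlier tutorial, so adapt it to whatever you implemented there:

```python
from tensorflow.keras.layers import Layer, Dropout
from multihead_attention import MultiHeadAttention  # from the previous tutorial

# FeedForward and AddNormalization are the classes defined in the previous section

class EncoderLayer(Layer):
    def __init__(self, h, d_k, d_v, d_model, d_ff, rate, **kwargs):
        super().__init__(**kwargs)
        # Constructor arguments of MultiHeadAttention are assumed from the earlier tutorial
        self.multihead_attention = MultiHeadAttention(h, d_k, d_v, d_model)
        self.dropout1 = Dropout(rate)
        self.add_norm1 = AddNormalization()
        self.feed_forward = FeedForward(d_ff, d_model)
        self.dropout2 = Dropout(rate)
        self.add_norm2 = AddNormalization()
```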

Here, you may notice that you have initialized instances of the FeedForward and AddNormalization classes, which you just created in the previous section, and assigned them to the respective attributes, feed_forward and add_norm1/add_norm2. The Dropout layers are self-explanatory: the rate defines the frequency at which the input units are set to 0. You created the MultiHeadAttention class in a previous tutorial, and if you saved the code into a separate Python script, then do not forget to import it. I saved mine in a Python script named multihead_attention.py, and for this reason, I need to include the line of code from multihead_attention import MultiHeadAttention.

Let's now proceed to create the class method, call(), that implements all the encoder sub-layers:
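One way to sketch this method, following the sub-layer structure described earlier, is shown below; the call signature assumed for MultiHeadAttention (queries, keys, values, and a mask) again comes from the earlier tutorial:

```python
class EncoderLayer(Layer):
    ...

    def call(self, x, padding_mask, training):
        # Multi-head self-attention: queries, keys, and values are all the layer input
        multihead_output = self.multihead_attention(x, x, x, padding_mask)

        # Dropout on the first sub-layer output, active only while training
        multihead_output = self.dropout1(multihead_output, training=training)

        # First Add & Norm: residual connection followed by layer normalization
        addnorm_output = self.add_norm1(x, multihead_output)

        # Fully connected feed-forward sub-layer
        feedforward_output = self.feed_forward(addnorm_output)

        # Dropout on the second sub-layer output
        feedforward_output = self.dropout2(feedforward_output, training=training)

        # Second Add & Norm
        return self.add_norm2(addnorm_output, feedforward_output)
```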

In addition to the input data, the call() method can also receive a padding mask. As a brief reminder of what was said in a previous tutorial, the padding mask is necessary to prevent the zero padding in the input sequence from being processed along with the actual input values.

The same class method can receive a training flag which, when set to True, will only apply the Dropout layers during training.

The Transformer Encoder

The last step is to create a class for the Transformer encoder, which should be named Encoder:
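A sketch of the Encoder class could stack $N$ EncoderLayer instances in a list and reuse the positional-encoding layer discussed next; the module name positional_encoding.py is an assumption on my part, so import PositionEmbeddingFixedWeights from wherever you saved it:

```python
from tensorflow.keras.layers import Layer, Dropout
from positional_encoding import PositionEmbeddingFixedWeights  # assumed module name

class Encoder(Layer):
    def __init__(self, vocab_size, sequence_length, h, d_k, d_v, d_model, d_ff, n, rate, **kwargs):
        super().__init__(**kwargs)
        # Combined word embedding and fixed-weight positional encoding
        # (constructor arguments assumed from the positional-encoding tutorial)
        self.pos_encoding = PositionEmbeddingFixedWeights(sequence_length, vocab_size, d_model)
        self.dropout = Dropout(rate)
        # A stack of N identical encoder layers
        self.encoder_layer = [EncoderLayer(h, d_k, d_v, d_model, d_ff, rate) for _ in range(n)]
```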

The Transformer encoder receives the input sequence after it has undergone word embedding and positional encoding. In order to compute the positional encoding, let's make use of the PositionEmbeddingFixedWeights class described by Mehreen Saeed in this tutorial.

As you have similarly done in the previous sections, here, you will also create a class method, call(), that applies word embedding and positional encoding to the input sequence and feeds the result to $N$ encoder layers:
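Continuing the sketch of the Encoder class:

```python
class Encoder(Layer):
    ...

    def call(self, input_sentence, padding_mask, training):
        # Word embedding plus positional encoding of the input token sequence
        pos_encoding_output = self.pos_encoding(input_sentence)

        # Dropout on the positionally encoded embeddings, active only while training
        x = self.dropout(pos_encoding_output, training=training)

        # Pass the result through the stack of N encoder layers
        for layer in self.encoder_layer:
            x = layer(x, padding_mask, training)

        return x
```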

The code listing for the full Transformer encoder is the following:
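Gathered into a single module (say, encoder.py), a consolidated sketch of the above, under the same assumptions about the imported MultiHeadAttention and PositionEmbeddingFixedWeights classes, could read:

```python
from tensorflow.keras.layers import Layer, Dense, ReLU, Dropout, LayerNormalization
from multihead_attention import MultiHeadAttention             # from the earlier tutorial
from positional_encoding import PositionEmbeddingFixedWeights  # assumed module name

class AddNormalization(Layer):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.layer_norm = LayerNormalization()

    def call(self, x, sublayer_x):
        # Residual connection followed by layer normalization
        return self.layer_norm(x + sublayer_x)

class FeedForward(Layer):
    def __init__(self, d_ff, d_model, **kwargs):
        super().__init__(**kwargs)
        self.fully_connected1 = Dense(d_ff)
        self.fully_connected2 = Dense(d_model)
        self.activation = ReLU()

    def call(self, x):
        return self.fully_connected2(self.activation(self.fully_connected1(x)))

class EncoderLayer(Layer):
    def __init__(self, h, d_k, d_v, d_model, d_ff, rate, **kwargs):
        super().__init__(**kwargs)
        self.multihead_attention = MultiHeadAttention(h, d_k, d_v, d_model)
        self.dropout1 = Dropout(rate)
        self.add_norm1 = AddNormalization()
        self.feed_forward = FeedForward(d_ff, d_model)
        self.dropout2 = Dropout(rate)
        self.add_norm2 = AddNormalization()

    def call(self, x, padding_mask, training):
        multihead_output = self.multihead_attention(x, x, x, padding_mask)
        multihead_output = self.dropout1(multihead_output, training=training)
        addnorm_output = self.add_norm1(x, multihead_output)
        feedforward_output = self.feed_forward(addnorm_output)
        feedforward_output = self.dropout2(feedforward_output, training=training)
        return self.add_norm2(addnorm_output, feedforward_output)

class Encoder(Layer):
    def __init__(self, vocab_size, sequence_length, h, d_k, d_v, d_model, d_ff, n, rate, **kwargs):
        super().__init__(**kwargs)
        self.pos_encoding = PositionEmbeddingFixedWeights(sequence_length, vocab_size, d_model)
        self.dropout = Dropout(rate)
        self.encoder_layer = [EncoderLayer(h, d_k, d_v, d_model, d_ff, rate) for _ in range(n)]

    def call(self, input_sentence, padding_mask, training):
        x = self.dropout(self.pos_encoding(input_sentence), training=training)
        for layer in self.encoder_layer:
            x = layer(x, padding_mask, training)
        return x
```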

Testing Out the Code

You will work with the parameter values specified in the paper, Attention Is All You Need, by Vaswani et al. (2017):
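The values below come from the paper; the variable names are shorthand choices of my own:

```python
h = 8               # Number of self-attention heads
d_k = 64            # Dimensionality of the linearly projected queries and keys
d_v = 64            # Dimensionality of the linearly projected values
d_ff = 2048         # Dimensionality of the inner fully connected layer
d_model = 512       # Dimensionality of the model sub-layers' inputs and outputs
n = 6               # Number of layers in the encoder stack
dropout_rate = 0.1  # Frequency of dropping input units in the dropout layers
```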

As for the input sequence, you will work with dummy data for the time being until you arrive at the stage of training the complete Transformer model in a separate tutorial, at which point you will be using actual sentences:
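Random values of the right shape are enough for this smoke test; the vocabulary size, sequence length, and batch size below are arbitrary choices for illustration, not values from the paper:

```python
from numpy import random

enc_vocab_size = 20   # Vocabulary size for the encoder (arbitrary for this test)
input_seq_length = 5  # Maximum length of the input sequence
batch_size = 64       # Batch size for the dummy data

# Dummy input sequence: random values standing in for token indices
input_seq = random.random((batch_size, input_seq_length))
```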

Next, you will create a new instance of the Encoder class, assigning it to the variable encoder, subsequently feeding in the input arguments and printing the result. You will set the padding mask argument to None for the time being, but you will return to this when you implement the complete Transformer model:
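Using the names introduced above, this step could look like the following:

```python
# Instantiate the encoder stack and run the dummy batch through it,
# with no padding mask and the training flag set to True
encoder = Encoder(enc_vocab_size, input_seq_length, h, d_k, d_v, d_model, d_ff, n, dropout_rate)
print(encoder(input_seq, None, True))
```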

Tying everything together produces the following code listing:
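A consolidated test script, assuming the encoder classes above are saved in encoder.py, might look like this:

```python
from numpy import random
from encoder import Encoder  # assumed module name for the classes sketched above

h = 8               # Number of self-attention heads
d_k = 64            # Dimensionality of the linearly projected queries and keys
d_v = 64            # Dimensionality of the linearly projected values
d_ff = 2048         # Dimensionality of the inner fully connected layer
d_model = 512       # Dimensionality of the model sub-layers' inputs and outputs
n = 6               # Number of layers in the encoder stack
dropout_rate = 0.1  # Frequency of dropping input units in the dropout layers

enc_vocab_size = 20   # Vocabulary size for the encoder (arbitrary for this test)
input_seq_length = 5  # Maximum length of the input sequence
batch_size = 64       # Batch size for the dummy data

input_seq = random.random((batch_size, input_seq_length))

encoder = Encoder(enc_vocab_size, input_seq_length, h, d_k, d_v, d_model, d_ff, n, dropout_rate)
print(encoder(input_seq, None, True))
```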

Running this code produces an output of shape (batch size, sequence length, model dimensionality). Note that you will likely see a different output due to the random initialization of the input sequence and the parameter values of the Dense layers.

Further Reading

This section provides more resources on the topic if you are looking to go deeper.

Books

Papers

  • Attention Is All You Need, 2017

Summary

In this tutorial, you discovered how to implement the Transformer encoder from scratch in TensorFlow and Keras.

Specifically, you learned:

  • The layers that form part of the Transformer encoder
  • How to implement the Transformer encoder from scratch

Do you have any questions?
Ask your questions in the comments below, and I will do my best to answer.


5 Responses to Implementing the Transformer Encoder from Scratch in TensorFlow and Keras

  1. Ashwanth Kumar D, January 23, 2023 at 9:42 pm

    Hi Stefania,
    Thanks for the wonderful post. I am also reading the book “Building Transformer Models with Attention”. I have a question from “chapter 14.4 Positional Encoding in Transformers”.

    Here, I did not get the reason why you are using

    word_embedding_matrix = self.get_position_encoding(vocab_size, output_dim)

    in order to initialize the word_embedding_matrix quoting the paper “Attention Is All You Need”.

    I did not understand how this method “get_position_encoding” represents the words as embeddings. Can you please help me here?

    Thanks!!

    • Adrian Tam, January 24, 2023 at 2:34 am

      It may be confusing but try to think about that as a way to generate a random matrix. The word embedding matrix is simply to encode words (of which there are on the order of 10,000) into a shorter vector (e.g., 50 floats). We do not want two unrelated, distinct words to share the same vector. Hence the embedding matrix is best to be randomized, but also guaranteed not to “collide”. It happens that the positional encoding fulfilled this property and hence it is abused here.

      In fact, you can also use numpy.random to generate a random matrix. But in this case, it is best to set “trainable=True” too to let Keras fine-tune it to avoid unwanted collision.

  2. Ashwanth Kumar D, January 24, 2023 at 5:10 am

    Okay, this makes sense now. Thanks for the clarification, Adrian. MachineLearningMastery has always been my best tutor in learning Data Science. Keep it up!!!

  3. Bashir, August 8, 2023 at 6:29 pm

    Hi sir, I am trying to apply the above code to univariate wind power data. Is it possible to alter the above code for wind power (time series) data?

    • James Carmichael, August 9, 2023 at 10:04 am

      Hi Bashir…Yes this is possible. Please proceed and let us know if you have any questions!
