How to Develop a Bidirectional LSTM For Sequence Classification in Python with Keras

Bidirectional LSTMs are an extension of traditional LSTMs that can improve model performance on sequence classification problems.

In problems where all timesteps of the input sequence are available, Bidirectional LSTMs train two LSTMs instead of one on the input sequence: the first on the input sequence as-is and the second on a reversed copy of the input sequence. This can provide additional context to the network and result in faster and even fuller learning on the problem.

In this tutorial, you will discover how to develop Bidirectional LSTMs for sequence classification in Python with the Keras deep learning library.

After completing this tutorial, you will know:

  • How to develop a small contrived and configurable sequence classification problem.
  • How to develop an LSTM and Bidirectional LSTM for sequence classification.
  • How to compare the performance of the merge mode used in Bidirectional LSTMs.

Let’s get started.

Photo by Cristiano Medeiros Dalbem, some rights reserved.

Overview

This tutorial is divided into 6 parts; they are:

  1. Bidirectional LSTMs
  2. Sequence Classification Problem
  3. LSTM For Sequence Classification
  4. Bidirectional LSTM For Sequence Classification
  5. Compare LSTM to Bidirectional LSTM
  6. Comparing Bidirectional LSTM Merge Modes

Environment

This tutorial assumes you have a Python SciPy environment installed. You can use either Python 2 or 3 with this example.

This tutorial assumes you have Keras (v2.0.4+) installed with either the TensorFlow (v1.1.0+) or Theano (v0.9+) backend.

This tutorial also assumes you have scikit-learn, Pandas, NumPy, and Matplotlib installed.

If you need help setting up your Python environment, see this post.


Bidirectional LSTMs

The idea of Bidirectional Recurrent Neural Networks (RNNs) is straightforward.

It involves duplicating the first recurrent layer in the network so that there are now two layers side-by-side, then providing the input sequence as-is as input to the first layer and providing a reversed copy of the input sequence to the second.

To overcome the limitations of a regular RNN […] we propose a bidirectional recurrent neural network (BRNN) that can be trained using all available input information in the past and future of a specific time frame.

The idea is to split the state neurons of a regular RNN in a part that is responsible for the positive time direction (forward states) and a part for the negative time direction (backward states)

— Mike Schuster and Kuldip K. Paliwal, Bidirectional Recurrent Neural Networks, 1997

This approach has been used to great effect with Long Short-Term Memory (LSTM) Recurrent Neural Networks.

Providing the input sequence bidirectionally was initially justified in the domain of speech recognition, because there is evidence that the context of the whole utterance is used to interpret what is being said, rather than a strictly linear interpretation.

… relying on knowledge of the future seems at first sight to violate causality. How can we base our understanding of what we’ve heard on something that hasn’t been said yet? However, human listeners do exactly that. Sounds, words, and even whole sentences that at first mean nothing are found to make sense in the light of future context. What we must remember is the distinction between tasks that are truly online – requiring an output after every input – and those where outputs are only needed at the end of some input segment.

— Alex Graves and Jurgen Schmidhuber, Framewise Phoneme Classification with Bidirectional LSTM and Other Neural Network Architectures, 2005

The use of bidirectional LSTMs may not make sense for all sequence prediction problems, but can offer better results in those domains where it is appropriate.

We have found that bidirectional networks are significantly more effective than unidirectional ones…

— Alex Graves and Jurgen Schmidhuber, Framewise Phoneme Classification with Bidirectional LSTM and Other Neural Network Architectures, 2005

To be clear, timesteps in the input sequence are still processed one at a time; it is just that the network steps through the input sequence in both directions at the same time.

Bidirectional LSTMs in Keras

Bidirectional LSTMs are supported in Keras via the Bidirectional layer wrapper.

This wrapper takes a recurrent layer (e.g. the first LSTM layer) as an argument.

It also allows you to specify the merge mode, that is, how the forward and backward outputs should be combined before being passed on to the next layer. The options are:

  • 'sum': The outputs are added together.
  • 'mul': The outputs are multiplied together.
  • 'concat': The outputs are concatenated together (the default), providing double the number of outputs to the next layer.
  • 'ave': The average of the outputs is taken.

The default mode is to concatenate, and this is the method often used in studies of bidirectional LSTMs.
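For example, wrapping an LSTM hidden layer might look as follows (a minimal sketch; the layer size and input shape here are placeholders):

from keras.models import Sequential
from keras.layers import LSTM
from keras.layers import Bidirectional

model = Sequential()
# wrap an LSTM layer; merge_mode controls how the forward and backward outputs are combined
model.add(Bidirectional(LSTM(20, return_sequences=True), merge_mode='concat', input_shape=(10, 1)))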

Sequence Classification Problem

We will define a simple sequence classification problem to explore bidirectional LSTMs.

The problem is defined as a sequence of random values between 0 and 1. This sequence is taken as input to the model, with one value provided per timestep.

A binary label (0 or 1) is associated with each input timestep. The output values are all 0 until the cumulative sum of the input values in the sequence exceeds a threshold, at which point the output value flips from 0 to 1.

A threshold of 1/4 the sequence length is used.

For example, below is a sequence of 10 input timesteps (X):
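0.66, 0.28, 0.41, 0.50, 0.77, 0.30, 0.68, 0.54, 0.18, 0.73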

The corresponding classification output (y) would be:
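0, 0, 0, 0, 1, 1, 1, 1, 1, 1

(The values above are only illustrative; each run generates a new random sequence. Here, the cumulative sum first exceeds the threshold of 2.5 at the fifth timestep, so the output flips to 1 from that point onward.)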

We can implement this in Python.

The first step is to generate a sequence of random values. We can use the random() function from the random module.
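A sketch of this step (assuming NumPy is available for the array type):

from random import random
from numpy import array

# generate a sequence of 10 random numbers in [0, 1]
X = array([random() for _ in range(10)])
print(X)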

We can define the threshold as one-quarter the length of the input sequence.
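For a 10-timestep sequence, that is simply:

# threshold is one-quarter of the sequence length
limit = 10/4.0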

The cumulative sum of the input sequence can be calculated using the cumsum() NumPy function. This function returns a sequence of cumulative sum values, e.g.:
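For example:

from numpy import cumsum

# cumulative sum of a simple sequence: [1, 3, 6, 10, 15]
print(cumsum([1, 2, 3, 4, 5]))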

We can then calculate the output sequence as whether each cumulative sum value exceeded the threshold.
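Using the X and limit values from above, one way to write this is:

from numpy import cumsum

# each output is 0 while the cumulative sum is below the threshold, 1 afterwards
y = array([0 if x < limit else 1 for x in cumsum(X)])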

The function below, named get_sequence(), draws all of this together, taking as input the length of the sequence, and returns the X and y components of a new problem case.
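A sketch of the function, consistent with the steps above:

from random import random
from numpy import array
from numpy import cumsum

# create a random input sequence and the matching binary output sequence
def get_sequence(n_timesteps):
    # create a sequence of random numbers in [0, 1]
    X = array([random() for _ in range(n_timesteps)])
    # calculate the cut-off value to change class values
    limit = n_timesteps/4.0
    # determine the class outcome for each item in the cumulative sum
    y = array([0 if x < limit else 1 for x in cumsum(X)])
    return X, y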

We can test this function with a new 10 timestep sequence as follows:
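# generate and print a new 10-timestep problem instance
X, y = get_sequence(10)
print(X)
print(y)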

Running the example first prints the generated input sequence followed by the matching output sequence.

LSTM For Sequence Classification

We can start off by developing a traditional LSTM for the sequence classification problem.

Firstly, we must update the get_sequence() function to reshape the input and output sequences to be 3-dimensional to meet the expectations of the LSTM. The expected structure has the dimensions [samples, timesteps, features]. The classification problem has 1 sample (e.g. one sequence), a configurable number of timesteps, and one feature per timestep.

Therefore, we can reshape the sequences as follows.
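# reshape input and output to be 3D: [samples, timesteps, features]
X = X.reshape(1, n_timesteps, 1)
y = y.reshape(1, n_timesteps, 1)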

The updated get_sequence() function is listed below.
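A sketch of the updated function:

from random import random
from numpy import array
from numpy import cumsum

# create a random sequence classification instance, reshaped for an LSTM
def get_sequence(n_timesteps):
    # create a sequence of random numbers in [0, 1]
    X = array([random() for _ in range(n_timesteps)])
    # calculate the cut-off value to change class values
    limit = n_timesteps/4.0
    # determine the class outcome for each item in the cumulative sum
    y = array([0 if x < limit else 1 for x in cumsum(X)])
    # reshape input and output data to be suitable for LSTMs
    X = X.reshape(1, n_timesteps, 1)
    y = y.reshape(1, n_timesteps, 1)
    return X, y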

We will define the sequences as having 10 timesteps.

Next, we can define an LSTM for the problem. The input layer will have 10 timesteps with 1 feature apiece, input_shape=(10, 1).

The first hidden layer will have 20 memory units and the output layer will be a fully connected layer that outputs one value per timestep. A sigmoid activation function is used on the output to predict the binary value.

A TimeDistributed wrapper layer is used around the output layer so that one value per timestep can be predicted given the full sequence provided as input. This requires that the LSTM hidden layer returns a sequence of values (one per timestep) rather than a single value for the whole input sequence.

Finally, because this is a binary classification problem, the binary log loss (binary_crossentropy in Keras) is used. The efficient ADAM optimization algorithm is used to find the weights and the accuracy metric is calculated and reported each epoch.
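Pulling these choices together, the model definition might look like this (a sketch using the Keras Sequential API; the exact listing may differ):

from keras.models import Sequential
from keras.layers import Dense
from keras.layers import TimeDistributed
from keras.layers import LSTM

# define the LSTM model
n_timesteps = 10
model = Sequential()
model.add(LSTM(20, input_shape=(n_timesteps, 1), return_sequences=True))
model.add(TimeDistributed(Dense(1, activation='sigmoid')))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['acc'])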

The LSTM will be trained for 1,000 epochs. A new random input sequence will be generated each epoch for the network to be fit on. This ensures that the model does not memorize a single sequence and instead can generalize a solution to solve all possible random input sequences for this problem.

Once trained, the network will be evaluated on yet another random sequence. The predictions will then be compared to the expected output sequence to provide a concrete example of the skill of the system.

The complete example is listed below.
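One possible version of the full listing (a sketch that combines the pieces above; it assumes the Keras 2 API described in the Environment section):

from random import random
from numpy import array
from numpy import cumsum
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import TimeDistributed
from keras.layers import LSTM

# create a random sequence classification instance, reshaped for an LSTM
def get_sequence(n_timesteps):
    X = array([random() for _ in range(n_timesteps)])
    limit = n_timesteps/4.0
    y = array([0 if x < limit else 1 for x in cumsum(X)])
    X = X.reshape(1, n_timesteps, 1)
    y = y.reshape(1, n_timesteps, 1)
    return X, y

# define problem properties
n_timesteps = 10
# define LSTM
model = Sequential()
model.add(LSTM(20, input_shape=(n_timesteps, 1), return_sequences=True))
model.add(TimeDistributed(Dense(1, activation='sigmoid')))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['acc'])
# train LSTM on a new random sequence each epoch
for epoch in range(1000):
    X, y = get_sequence(n_timesteps)
    model.fit(X, y, epochs=1, batch_size=1, verbose=2)
# evaluate LSTM on yet another random sequence
X, y = get_sequence(n_timesteps)
yhat = model.predict_classes(X, verbose=0)
for i in range(n_timesteps):
    print('Expected:', y[0, i], 'Predicted:', yhat[0, i])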

Running the example prints the log loss and classification accuracy on the random sequences each epoch.

This provides a clear idea of how well the model has generalized a solution to the sequence classification problem.

We can see that the model does well, achieving a final accuracy that hovers between 90% and 100%. Not perfect, but good for our purposes.

The predictions for a new random sequence are compared to the expected values, showing a mostly correct result with a single error.

Bidirectional LSTM For Sequence Classification

Now that we know how to develop an LSTM for the sequence classification problem, we can extend the example to demonstrate a Bidirectional LSTM.

We can do this by wrapping the LSTM hidden layer with a Bidirectional layer, as follows:
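For example (a sketch; note that the input_shape is now specified on the Bidirectional wrapper rather than on the LSTM layer itself):

from keras.layers import Bidirectional

# wrap the LSTM hidden layer in a Bidirectional layer, replacing the plain LSTM layer above
model.add(Bidirectional(LSTM(20, return_sequences=True), input_shape=(n_timesteps, 1)))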

This will create two copies of the hidden layer, one fit on the input sequence as-is and one on a reversed copy of the input sequence. By default, the output values from these LSTMs will be concatenated.

That means that instead of the TimeDistributed layer receiving 10 timesteps of 20 outputs, it will now receive 10 timesteps of 40 (20 units + 20 units) outputs.

The complete example is listed below.
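A sketch of the full listing with this change applied:

from random import random
from numpy import array
from numpy import cumsum
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import TimeDistributed
from keras.layers import LSTM
from keras.layers import Bidirectional

# create a random sequence classification instance, reshaped for an LSTM
def get_sequence(n_timesteps):
    X = array([random() for _ in range(n_timesteps)])
    limit = n_timesteps/4.0
    y = array([0 if x < limit else 1 for x in cumsum(X)])
    X = X.reshape(1, n_timesteps, 1)
    y = y.reshape(1, n_timesteps, 1)
    return X, y

# define problem properties
n_timesteps = 10
# define Bidirectional LSTM
model = Sequential()
model.add(Bidirectional(LSTM(20, return_sequences=True), input_shape=(n_timesteps, 1)))
model.add(TimeDistributed(Dense(1, activation='sigmoid')))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['acc'])
# train on a new random sequence each epoch
for epoch in range(1000):
    X, y = get_sequence(n_timesteps)
    model.fit(X, y, epochs=1, batch_size=1, verbose=2)
# evaluate on yet another random sequence
X, y = get_sequence(n_timesteps)
yhat = model.predict_classes(X, verbose=0)
for i in range(n_timesteps):
    print('Expected:', y[0, i], 'Predicted:', yhat[0, i])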

Running the example, we see a similar output as in the previous example.

The use of bidirectional LSTMs has the effect of allowing the LSTM to learn the problem faster.

This is not apparent in the skill of the model at the end of the run, but rather in the skill of the model over time.

Compare LSTM to Bidirectional LSTM

In this example, we will compare the performance of traditional LSTMs to a Bidirectional LSTM over time while the models are being trained.

We will adjust the experiment so that the models are only trained for 250 epochs. This is so that we can get a clear idea of how learning unfolds for each model and how the learning behavior differs with bidirectional LSTMs.

We will compare three different models; specifically:

  1. LSTM (as-is)
  2. LSTM with reversed input sequences (e.g. you can do this by setting the “go_backwards” argument of the LSTM layer to “True”)
  3. Bidirectional LSTM

This comparison will help to show that bidirectional LSTMs can in fact add something more than simply reversing the input sequence.

We will define a function to create and return an LSTM with either forward or backward input sequences, as follows:
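A sketch of such a function:

from keras.models import Sequential
from keras.layers import Dense
from keras.layers import TimeDistributed
from keras.layers import LSTM

# build an LSTM model that reads the input sequence forwards or backwards
def get_lstm_model(n_timesteps, backwards):
    model = Sequential()
    model.add(LSTM(20, input_shape=(n_timesteps, 1), return_sequences=True, go_backwards=backwards))
    model.add(TimeDistributed(Dense(1, activation='sigmoid')))
    model.compile(loss='binary_crossentropy', optimizer='adam')
    return model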

We can develop a similar function for bidirectional LSTMs where the merge mode can be specified as an argument. The default of concatenation can be specified by setting the merge mode to the value ‘concat’.
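For example, building on the imports above:

from keras.layers import Bidirectional

# build a Bidirectional LSTM model with a configurable merge mode
def get_bi_lstm_model(n_timesteps, mode):
    model = Sequential()
    model.add(Bidirectional(LSTM(20, return_sequences=True), input_shape=(n_timesteps, 1), merge_mode=mode))
    model.add(TimeDistributed(Dense(1, activation='sigmoid')))
    model.compile(loss='binary_crossentropy', optimizer='adam')
    return model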

Finally, we define a function to fit a model and retrieve and store the loss each training epoch, then return a list of the collected loss values after the model is fit. This is so that we can graph the log loss from each model configuration and compare them.
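A sketch of this helper, assuming the get_sequence() function defined earlier:

# fit a model on new random sequences and collect the training loss per epoch
def train_model(model, n_timesteps):
    loss = list()
    for _ in range(250):
        # generate a new random sequence
        X, y = get_sequence(n_timesteps)
        # fit the model for a single epoch on this sequence
        hist = model.fit(X, y, epochs=1, batch_size=1, verbose=0)
        loss.append(hist.history['loss'][0])
    return loss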

Putting this all together, the complete example is listed below.
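A sketch of the driver code, assuming the get_sequence(), get_lstm_model(), get_bi_lstm_model(), and train_model() functions sketched above:

from pandas import DataFrame
from matplotlib import pyplot

n_timesteps = 10
results = DataFrame()
# lstm forwards
model = get_lstm_model(n_timesteps, False)
results['lstm_forw'] = train_model(model, n_timesteps)
# lstm backwards
model = get_lstm_model(n_timesteps, True)
results['lstm_back'] = train_model(model, n_timesteps)
# bidirectional with concatenation
model = get_bi_lstm_model(n_timesteps, 'concat')
results['bilstm_con'] = train_model(model, n_timesteps)
# line plot of the log loss per training epoch for each model
results.plot()
pyplot.show()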

First, a traditional LSTM is created and fit and the log loss values are plotted. This is repeated with an LSTM with reversed input sequences, and finally with a Bidirectional LSTM using a concatenated merge.

Running the example creates a line plot.

Your specific plot may vary in the details, but will show the same trends.

We can see that the LSTM forward (blue) and LSTM backward (orange) show similar log loss over the 250 training epochs.

We can see that the Bidirectional LSTM log loss is different (green), going down sooner to a lower value and generally staying lower than the other two configurations.

Line Plot of Log Loss for an LSTM, Reversed LSTM and a Bidirectional LSTM

Comparing Bidirectional LSTM Merge Modes

There are four different merge modes that can be used to combine the outcomes of the Bidirectional LSTM layers.

They are concatenation (default), multiplication, average, and sum.

We can compare the behavior of different merge modes by updating the example from the previous section as follows:
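For example, a sketch that reuses the get_bi_lstm_model() and train_model() helpers from the previous section:

from pandas import DataFrame
from matplotlib import pyplot

n_timesteps = 10
results = DataFrame()
# train one Bidirectional LSTM per merge mode and record the loss per epoch
for mode in ['sum', 'mul', 'ave', 'concat']:
    model = get_bi_lstm_model(n_timesteps, mode)
    results['bilstm_' + mode] = train_model(model, n_timesteps)
# line plot of the log loss per training epoch for each merge mode
results.plot()
pyplot.show()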

Running the example will create a line plot comparing the log loss of each merge mode.

Your specific plot may differ but will show the same behavioral trends.

The different merge modes result in different model performance, and this will vary depending on your specific sequence prediction problem.

In this case, we can see that perhaps a sum (blue) and concatenation (red) merge mode may result in better performance, or at least lower log loss.

Line Plot to Compare Merge Modes for Bidirectional LSTMs

Summary

In this tutorial, you discovered how to develop Bidirectional LSTMs for sequence classification in Python with Keras.

Specifically, you learned:

  • How to develop a contrived sequence classification problem.
  • How to develop an LSTM and Bidirectional LSTM for sequence classification.
  • How to compare merge modes for Bidirectional LSTMs for sequence classification.

Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.

16 Responses to How to Develop a Bidirectional LSTM For Sequence Classification in Python with Keras

  1. Siddharth June 18, 2017 at 4:33 pm #

    Great post! Do you think bidirectional LSTMs can be used for time series prediction problems?

    • Jason Brownlee June 19, 2017 at 8:33 am #

      Yes, the question is, can they lift performance on your problem. Try it and see.

  2. Orozcohsu June 18, 2017 at 6:38 pm #

    Greatest, thank you

  3. truongtrang June 19, 2017 at 3:17 pm #

    hi Jason,
    In fact, I usually need to use multiple threads (multiple workers) to load Keras models to improve performance in my system. But when I use multiple threads with a Keras model, I get errors with the graph, so I used multiple processes instead. I want to ask if you have another solution for multiple workers with Keras? Hope you can understand what I am saying.
    Thank you.

  4. Yitzhak June 20, 2017 at 12:05 am #

    Thanks, Jason, such a great post !

  5. John Jaro July 2, 2017 at 12:19 am #

    hi Jason, thanks greatly for your work. I’ve read probably 50 of your blog articles!

    I’m still struggling to understand how to reshape lagged data for LSTM and would greatly appreciate your help.

    I’m working on sequence classification on time-series data over multiple days. I’ve lagged the data together (2D) and created differential features using code very similar to yours, and generated multiple look forward and look backward features over a window of about +5 and -4:


    # note: DataFrame and concat come from pandas
    from pandas import DataFrame, concat

    # frame a sequence as a supervised learning problem
    def timeseries_to_supervised(data, lag=1):
        df = DataFrame(data)
        columns = [df.shift(i) for i in range(1, lag+1)]
        columns.append(df)
        df = concat(columns, axis=1)
        return df

    I’ve gotten decent results with Conv1D residual networks on my dataset, but my experiments with LSTM are total failures.

    I reshape the data for Conv1D like so: X = X.reshape(X.shape[0], X.shape[1], 1)

    Is this same data shape appropriate for LSTM or Bidirectional LSTM? I think it needs to be different, but I cannot figure out how despite hours of searching.

    Thanks for your assistance if any!

    • John Jaro July 2, 2017 at 12:27 am #

      By the way, my question is not a prediction task – it’s multi class classification: looking at a particular day’s data in combination with surrounding lagged/diff’d day’s data and saying it is one of 10 different types of events.

      • Jason Brownlee July 2, 2017 at 6:30 am #

        Great. Sequence classification.

        One day might be one sequence and be comprised of lots of time steps for lots of features.

    • Jason Brownlee July 2, 2017 at 6:29 am #

      Thanks John!

      Are you working on a sequence classification problem or sequence regression problem? Do you want to classify a whole sequence or predict the next value in the sequence? This will determine the type of LSTM you want.

      The input to LSTMs is 3d with the form [samples, time steps, features].

      Samples are sequences.
      Time steps are lag obs.
      Features are things measured at each time step.

      Does that help?

  6. jilian July 2, 2017 at 11:57 pm #

    Hello Jason,
    Thank you for this blog .
    I want to use a 2D LSTM (the same as GridLSTM or multi-directional LSTM) after a CNN; the input is an image with 3D RGB (W * H * D).
    Does Keras provide GridLSTM or multi-directional LSTM?
    I saw that TensorFlow has a GridLSTM implementation; can it be linked into Keras?
    Thank you.

    • Jason Brownlee July 3, 2017 at 5:34 am #

      You can use a CNN as a front-end model for LSTM.

      Sorry, I’ve not heard of “grid lstm” or “multi-directional lstm”.

  7. Marianico August 3, 2017 at 7:32 pm #

    Nice post, Jason! I have a few questions:

    1.- May Bidirectional() work in a regression model without TimeDistributed() wrapper?
    2.- May I have two Bidirectional() layers, or the model would be a far too complex?
    3.- Does Bidirectional() requires more input data to train?

    Thank you in advance! 🙂

    • Jason Brownlee August 4, 2017 at 6:58 am #

      Hi Marianico,

      1. sure.
      2. you can if you want, try it.
      3. it may, test this assumption.
