Gentle Introduction to Models for Sequence Prediction with Recurrent Neural Networks

Sequence prediction is a problem that involves using historical sequence information to predict the next value or values in the sequence.

The sequence may be symbols like letters in a sentence or real values like those in a time series of prices. Sequence prediction may be easiest to understand in the context of time series forecasting as the problem is already generally understood.

In this post, you will discover the standard sequence prediction models that you can use to frame your own sequence prediction problems.

After reading this post, you will know:

  • How sequence prediction problems are modeled with recurrent neural networks.
  • The 4 standard sequence prediction models used by recurrent neural networks.
  • The 2 most common misunderstandings made by beginners when applying sequence prediction models.

Let’s get started.

Tutorial Overview

This tutorial is divided into 4 parts; they are:

  1. Sequence Prediction with Recurrent Neural Networks
  2. Models for Sequence Prediction
  3. Cardinality from Timesteps not Features
  4. Two Common Misunderstandings by Practitioners

Sequence Prediction with Recurrent Neural Networks

Recurrent Neural Networks, like Long Short-Term Memory (LSTM) networks, are designed for sequence prediction problems.

In fact, at the time of writing, LSTMs achieve state-of-the-art results in challenging sequence prediction problems like neural machine translation (translating English to French).

LSTMs work by learning a function (f(…)) that maps input sequence values (X) onto output sequence values (y).

The learned mapping function is static and may be thought of as a program that takes input variables and uses internal variables. Internal variables are represented by an internal state maintained by the network and built up or accumulated over each value in the input sequence.

… RNNs combine the input vector with their state vector with a fixed (but learned) function to produce a new state vector. This can in programming terms be interpreted as running a fixed program with certain inputs and some internal variables.

— Andrej Karpathy, The Unreasonable Effectiveness of Recurrent Neural Networks, 2015

The static mapping function may be defined with a different number of inputs or outputs, as we will review in the next section.


Models for Sequence Prediction

In this section, we will review the 4 primary models for sequence prediction.

We will use the following terminology:

  • X: The input sequence value, may be delimited by a time step, e.g. X(1).
  • u: The hidden state value, may be delimited by a time step, e.g. u(1).
  • y: The output sequence value, may be delimited by a time step, e.g. y(1).

One-to-One Model

A one-to-one model produces one output value for each input value.

One-to-One Sequence Prediction Model

The internal state for the first time step is zero; from that point onward, the internal state is accumulated over the prior time steps.

One-to-One Sequence Prediction Model Over Time

In the case of a sequence prediction, this model would produce one time step forecast for each observed time step received as input.

This is a poor use of RNNs as the model has no opportunity to learn over input or output time steps (e.g. via backpropagation through time). If you find yourself implementing this model for sequence prediction, you may intend to use a many-to-one model instead.
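As a concrete illustration, below is a minimal Keras sketch of a one-to-one model; the layer sizes, the univariate input, and the use of a stateful LSTM fed one time step per call are illustrative assumptions, not prescriptions from this post.

from keras.models import Sequential
from keras.layers import LSTM, Dense

# one-to-one: the network sees one time step per call and emits one value,
# relying on the stateful LSTM to carry internal state between calls
model = Sequential()
model.add(LSTM(10, batch_input_shape=(1, 1, 1), stateful=True))  # (batch, timesteps, features)
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')

# yhat = model.predict(x)  # x shaped (1, 1, 1): one value in, one value out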

One-to-Many Model

A one-to-many model produces multiple output values for one input value.

One-to-Many Sequence Prediction Model

The internal state is accumulated as each value in the output sequence is produced.

This model can be used for image captioning, where one image is provided as input and a sequence of words is generated as output.
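A minimal Keras sketch of this shape of model is given below, assuming a single numeric input rather than an image and an arbitrary output length of 5 time steps; the RepeatVector layer is one common way to bridge a single input to a sequence of outputs.

from keras.models import Sequential
from keras.layers import Dense, LSTM, RepeatVector, TimeDistributed

n_out = 5  # assumed number of output time steps

model = Sequential()
model.add(Dense(10, input_dim=1))           # one input value (an image encoder would sit here instead)
model.add(RepeatVector(n_out))              # present the input once per output time step
model.add(LSTM(10, return_sequences=True))  # accumulate state while generating the output sequence
model.add(TimeDistributed(Dense(1)))        # one output value per time step
model.compile(loss='mean_squared_error', optimizer='adam')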

Many-to-One Model

A many-to-one model produces one output value after receiving multiple input values.

Many-to-One Sequence Prediction Model

The internal state is accumulated with each input value before a final output value is produced.

In the case of time series, this model would use a sequence of recent observations to forecast the next time step. This architecture would represent the classical autoregressive time series model.
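For example, a minimal Keras sketch of a many-to-one model for a univariate series might look like the following; the window of 10 input time steps and the layer size are assumptions for illustration.

from keras.models import Sequential
from keras.layers import LSTM, Dense

n_in = 10  # assumed number of input time steps

model = Sequential()
model.add(LSTM(10, input_shape=(n_in, 1)))  # read the whole input sequence, return only the final state
model.add(Dense(1))                         # map that state to a single one-step forecast
model.compile(loss='mean_squared_error', optimizer='adam')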

Many-to-Many Model

A many-to-many model produces multiple outputs after receiving multiple input values.

Many-to-Many Sequence Prediction Model

As with the many-to-one case, state is accumulated until the first output is created, but in this case multiple time steps are output.

Importantly, the number of input time steps does not have to match the number of output time steps. Think of the input and output time steps as operating at different rates.

In the case of time series forecasting, this model would use a sequence of recent observations to make a multi-step forecast.

In a sense, it combines the capabilities of the many-to-one and one-to-many models.
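One common way to realize this in Keras is an encoder-decoder arrangement, sketched below with assumed input and output lengths of 10 and 5 time steps to emphasize that the two need not match.

from keras.models import Sequential
from keras.layers import LSTM, Dense, RepeatVector, TimeDistributed

n_in, n_out = 10, 5  # assumed lengths; input and output need not match

model = Sequential()
model.add(LSTM(10, input_shape=(n_in, 1)))  # encoder: read the input sequence into a fixed-length state
model.add(RepeatVector(n_out))              # bridge: present that state once per output time step
model.add(LSTM(10, return_sequences=True))  # decoder: generate the output sequence step by step
model.add(TimeDistributed(Dense(1)))
model.compile(loss='mean_squared_error', optimizer='adam')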

Cardinality from Timesteps (not Features!)

A common point of confusion is to conflate the sequence mapping models above with the number of input and output features.

A sequence may be comprised of single values, one for each time step.

Alternatively, a sequence could just as easily comprise a vector of multiple observations at each time step. Each item in the vector for a time step may be thought of as its own separate time series. This does not affect the description of the models above.

For example, a model that takes as input one time step of temperature and pressure and predicts one time step of temperature and pressure is a one-to-one model, not a many-to-many model.

Multiple-Feature Sequence Prediction Model

The model does take two values as input and predict two values as output, but there is only a single time step of the sequence provided as input and a single time step predicted as output.

The cardinality of the sequence prediction models defined above refers to time steps, not features (e.g. univariate or multivariate sequences).
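The distinction is easiest to see in the 3D input shape that LSTMs expect, [samples, timesteps, features]; the small sketch below uses made-up temperature and pressure values purely for illustration.

import numpy as np

# one sample, ONE time step, TWO features (temperature, pressure):
# a multivariate one-to-one framing
X_one_step = np.array([20.7, 1012.0]).reshape(1, 1, 2)

# one sample, TWO time steps, ONE feature:
# a univariate framing with real sequence structure over time
X_two_steps = np.array([20.7, 20.9]).reshape(1, 2, 1)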

Two Common Misunderstandings by Practitioners

The confusion of features vs. time steps leads practitioners to two main misunderstandings when implementing recurrent neural networks:

1. Timesteps as Input Features

Observations at previous timesteps are framed as input features to the model.

This is the classical fixed-window approach of framing sequence prediction problems, as used by multilayer Perceptrons. Instead, the sequence should be fed to the model one time step at a time.

This confusion may lead you to think you have implemented a many-to-one or many-to-many sequence prediction model when in fact you only have a single vector input for one time step.
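The sketch below contrasts the two framings for a window of three lagged observations; the values are arbitrary and only the reshape is the point.

import numpy as np

window = np.array([1.0, 2.0, 3.0])  # three lagged observations

# misunderstanding: lags treated as 3 features of a single time step
X_as_features = window.reshape(1, 1, 3)   # (samples, timesteps=1, features=3)

# intended framing: lags treated as 3 time steps of a univariate sequence
X_as_timesteps = window.reshape(1, 3, 1)  # (samples, timesteps=3, features=1)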

2. Timesteps as Output Features

Predictions at multiple future time steps are framed as output features to the model.

This is the classical fixed-window approach of making multi-step predictions used by multilayer Perceptrons and other machine learning algorithms. Instead, the sequence predictions should be generated one time step at a time.

This confusion may lead you to think you have implemented a one-to-many or many-to-many sequence prediction model when in fact you only have a single vector output for one time step (e.g. seq2vec not seq2seq).
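To make the difference concrete, below is a minimal Keras sketch contrasting a vector output of 3 future steps (seq2vec) with a step-by-step sequence output (seq2seq); layer sizes and sequence lengths are assumptions for illustration.

from keras.models import Sequential
from keras.layers import LSTM, Dense, RepeatVector, TimeDistributed

n_in, n_out = 10, 3  # assumed input and output lengths

# misunderstanding: the 3 future steps come out as one vector (seq2vec)
vec_model = Sequential()
vec_model.add(LSTM(10, input_shape=(n_in, 1)))
vec_model.add(Dense(n_out))

# sequence output: the 3 future steps are generated one time step at a time (seq2seq)
seq_model = Sequential()
seq_model.add(LSTM(10, input_shape=(n_in, 1)))
seq_model.add(RepeatVector(n_out))
seq_model.add(LSTM(10, return_sequences=True))
seq_model.add(TimeDistributed(Dense(1)))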

Note: framing timesteps as features in sequence prediction problems is a valid strategy, and could lead to improved performance even when using recurrent neural networks (try it!). The important point here is to understand the common pitfalls and not trick yourself when framing your own prediction problems.

Further Reading

This section provides more resources on the topic if you are looking to go deeper.

Summary

In this tutorial, you discovered the standard models for sequence prediction with recurrent neural networks.

Specifically, you learned:

  • How sequence prediction problems are modeled with recurrent neural networks.
  • The 4 standard sequence prediction models used by recurrent neural networks.
  • The 2 most common misunderstandings made by beginners when applying sequence prediction models.

Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.



17 Responses to Gentle Introduction to Models for Sequence Prediction with Recurrent Neural Networks

  1. Raan July 19, 2017 at 4:34 am #

    Thanks for the article. This is very useful. Do you have any examples of forecasting multivariate time series using RNN?

    • Jason Brownlee July 19, 2017 at 8:30 am #

      I should have one on the blog soon, it has been scheduled.

  2. mriazi July 20, 2017 at 10:27 am #

    Hi Jason,

    Thank you very much for your great article and the fabulous blog. I’ve been following your blog
    for a few months now and have read most of your articles on RNNs.
    Like you have mentioned above, I’m struggling to correctly model my time-series prediction problem. It’ll be great if you can help me with this.
    I have samples of sensor readings, each a vector of 64 timesteps. I would like to use an LSTM to learn the structure of the series and predict the next 64 timesteps.
    I think I will need to use a many-to-many model so that the model learns the input and predicts the output (64 values) based on what it has learned. I’m trying to use an LSTM for an unsupervised anomaly detection problem. I guess what I’m struggling with is that I want my model to learn the most common structure in my long time series, and I’m kind of confused about how my input should be structured.
    Sorry for the long description.
    Many thanks

    • Jason Brownlee July 21, 2017 at 9:26 am #

      I would recommend modeling it as a many-to-many supervised learning problem.

      Sorry, I don’t have experience using LSTMs for unsupervised problems, I need to do some reading.

  3. Paul August 2, 2017 at 3:22 pm #

    Hi, Jason. I’m always thankful that you posted great examples and posts.
    I have simple question.
    For predicting/forecasting time series data, are multilayer NN and RNN (LSTM) techniques the best way to forecast future data?

    Thank you in advance.

    Best,
    Paul

    • Jason Brownlee August 3, 2017 at 6:43 am #

      There is no best way, I would encourage you to evaluate a suite of methods and see what works best for your problem.

  4. Gustavo August 12, 2017 at 5:16 am #

    Is sequence learning the same as online learning? What are the differences?

    • Jason Brownlee August 12, 2017 at 6:54 am #

      Hi Gustavo,

      No, a sequence is the structure of the data and prediction problem.

      Learning can be online or offline for sequence prediction the same as simpler regression and classification.

      Does that help?

      • Gustavo August 14, 2017 at 10:30 pm #

        It helped indeed, thanks. Best regards.

  5. hirohi August 21, 2017 at 12:18 pm #

    In the case of many-to-many and one-to-many in this post, how do you compute the hidden states at time steps when there is no input? Specifically, in one-to-many, how do you compute “u(1)” despite the lack of “X(2)”? I think we can only compute Y(1), Y(2), Y(3) as a vector. If I am wrong, could you tell me why, with examples such as image captioning or machine translation?

    • Jason Brownlee August 21, 2017 at 4:23 pm #

      Great question!

      It is common to teach the model with “start seq” and “end seq” inputs at the beginning and end of sequences to kick-off or close-off the sequence input or output.

      I have used this approach myself with image captioning models and translation.

      • hirohi August 22, 2017 at 11:33 am #

        I investigated many-to-many (encoder-decoder). As you said, we feed “start” to the LSTM to compute “u(1)”. My question was about what input is necessary to compute “u(2)”. As a result of my investigation, we have to feed “y(2)” to compute “u(2)”.

        The below image is more accurate, right?
        http://suriyadeepan.github.io/img/seq2seq/seq2seq1.png

        • Jason Brownlee August 23, 2017 at 6:38 am #

          Yes, that is one way.

          Remember to explore many different framings of the problem to see what works best for your specific data.

          • hirohi August 23, 2017 at 12:30 pm #

            OK, thanks! I’ll try it!

  6. mrresearcher September 6, 2017 at 11:38 pm #

    I’m facing a problem of one-to-many sequence prediction, where given a set of input parameters for a program the model should generate values of resource usage as a function of time (CPU, memory, etc.). I have some examples from real-world programs and I have already tried simple feed-forward networks, but now I’m trying to find a state-of-the-art solution for the one-to-many sequence generation problem. Until now I’ve only found the image captioning example, but it is tailored for predicting words instead of real values. Are you aware of any state-of-the-art solutions for generating one-to-many sequences? If you are, I would be grateful for any references. Thanks!

    • Jason Brownlee September 7, 2017 at 12:56 pm #

      Caption generation would provide a good model or starting point for your problem.

      No CNN front end of course, a big MLP perhaps instead.

      Does that help? I’m eager to hear how you go.
