5 Examples of Simple Sequence Prediction Problems for LSTMs

Sequence prediction is different from traditional classification and regression problems.

It requires that you take the order of observations into account and that you use models like Long Short-Term Memory (LSTM) recurrent neural networks that have memory and that can learn any temporal dependence between observations.

To learn how to use LSTMs on sequence prediction problems, it is critical to apply them in practice. For that, you need a suite of well-defined problems that allow you to focus on different problem types and framings, so that you can build up your intuition for how sequence prediction problems are different and how sophisticated models like LSTMs can be used to address them.

In this tutorial, you will discover a suite of 5 narrowly defined and scalable sequence prediction problems that you can use to apply and learn more about LSTM recurrent neural networks.

After completing this tutorial, you will know:

  • Simple memorization tasks to test the learned memory capability of LSTMs.
  • Simple echo tasks to test the learned temporal dependence capability of LSTMs.
  • Simple arithmetic tasks to test the interpretation capability of LSTMs.

Kick-start your project with my new book Long Short-Term Memory Networks With Python, including step-by-step tutorials and the Python source code files for all examples.

Let’s get started.

5 Examples of Simple Sequence Prediction Problems for Learning LSTM Recurrent Neural Networks
Photo by Geraint Otis Warlow, some rights reserved.

Tutorial Overview

This tutorial is divided into 5 sections; they are:

  1. Sequence Learning Problem
  2. Value Memorization
  3. Echo Random Integer
  4. Echo Random Subsequences
  5. Sequence Classification

Properties of Problems

The sequence problems were designed with a few properties in mind:

  • Narrow. To focus on one aspect of sequence prediction, such as memory or function approximation.
  • Scalable. To be made more or less difficult along the chosen narrow focus.
  • Reframed. Two or more framings of each problem are presented to support the exploration of different algorithm learning capabilities.

I tried to provide a mixture of narrow focuses, problem difficulties, and required network architectures.

If you have ideas for further extensions or similarly carefully designed problems, please let me know in the comments below.

1. Sequence Learning Problem

In this problem, a sequence of contiguous real values between 0.0 and 1.0 is generated. Given one or more time steps of past values, the model must predict the next item in the sequence.

We can generate this sequence directly, as follows:
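
A minimal sketch of one way to generate such a sequence (the sequence length of 10 is an illustrative choice; the original code listing is not reproduced here):

# generate a sequence of evenly spaced real values in [0, 1)
length = 10
sequence = [i / float(length) for i in range(length)]
print(sequence)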

Running this example prints the generated sequence:
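
[0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]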

This could be framed as a memorization challenge where given the observation at the previous time step, the model must predict the next value:
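
For example, with a single input time step:

X, y
0.0, 0.1
0.1, 0.2
0.2, 0.3
0.3, 0.4
...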

The network could memorize the input-output pairs, which is quite boring, but would demonstrate the function approximation capability of the network.

The problem could also be framed using randomly chosen contiguous subsequences as the input time steps and the next value in the sequence as the output.

This would require the network to learn either to add a fixed value to the last seen observation or to memorize all possible subsequences of the generated problem.

This framing of the problem would be modeled as a many-to-one sequence prediction problem.
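
A sketch of how data for this many-to-one framing could be prepared, assuming fixed-length input windows of 3 time steps (the window length, number of samples, and helper variable names are illustrative assumptions):

from random import randint
from numpy import array

# the full generated sequence
length = 10
sequence = [i / float(length) for i in range(length)]

# build samples from randomly chosen contiguous subsequences
n_samples, window = 5, 3
X, y = list(), list()
for _ in range(n_samples):
    start = randint(0, length - window - 1)
    X.append(sequence[start:start + window])
    y.append(sequence[start + window])

# LSTMs expect input with the shape [samples, time steps, features]
X, y = array(X).reshape(n_samples, window, 1), array(y)
print(X.shape, y.shape)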

This is an easy problem that tests primitive features of sequence learning. This problem could be solved by a multilayer Perceptron network.

2. Value Memorization

The problem is to remember the first value in the sequence and to repeat it at the end of the sequence.

This problem is based on “Experiment 2” used to demonstrate LSTMs in the 1997 paper Long Short-Term Memory.

This can be framed as a one-step prediction problem.

Given one value in the sequence, the model must predict the next value in the sequence. For example, given a value of “0” as an input, the model must predict the value “1”.

Consider the following two sequences of 5 integers:
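
For example, the two sequences might be (reconstructed from the description that follows: the first value is repeated as the last value, and the middle values are shared):

3, 0, 1, 2, 3
4, 0, 1, 2, 4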

The Python code below will generate two sequences of arbitrary length. You could generalize it further if you wish.
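
A minimal sketch of such a generator (the function name and the choice of shared middle values are illustrative assumptions):

def generate_sequences(length=5):
    # shared middle values, e.g. [0, 1, 2] for a length of 5
    middle = list(range(length - 2))
    # each sequence begins and ends with its own identifying value
    seq1 = [length - 2] + middle + [length - 2]
    seq2 = [length - 1] + middle + [length - 1]
    return seq1, seq2

seq1, seq2 = generate_sequences(5)
print(seq1)
print(seq2)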

Running the example generates and prints the above two sequences.

The integers could be normalized or, more preferably, one-hot encoded.
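
For example, a one-hot encoding could be produced with the Keras to_categorical utility (assuming Keras is installed; a plain NumPy identity-matrix lookup would do the same job):

from keras.utils import to_categorical

seq1 = [3, 0, 1, 2, 3]
# one row per time step, one column per possible integer value (0-4)
encoded = to_categorical(seq1, num_classes=5)
print(encoded)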

The patterns introduce a wrinkle in that there is conflicting information between the two sequences and that the model must know the context of each one-step prediction (e.g. the sequence it is currently predicting) in order to correctly predict each full sequence.

We can see that the first value of the sequence is repeated as the last value of the sequence. This is the indicator that provides context to the model as to which sequence it is working on.

The conflict is in the transition from the second-to-last item to the last item in each sequence. In sequence one, a “2” is given as an input and a “3” must be predicted, whereas in sequence two, a “2” is given as input and a “4” must be predicted.

This wrinkle is important to prevent the model from memorizing each single-step input-output pair of values in each sequence, as a sequence unaware model may be inclined to do.

This framing would be modeled as a one-to-one sequence prediction problem.

This is a problem that a multilayer Perceptron and other non-recurrent neural networks cannot learn. The first value in the sequence must be remembered across multiple samples.

This problem could be framed as providing the entire sequence except the last value as input time steps and predicting the final value.

Each time step is still shown to the network one at a time, but the network must remember the value at the first time step. The difference is that the network can better learn the distinction between the two sequences, even over long sequences, via backpropagation through time.

This framing of the problem would be modeled as a many-to-one sequence prediction problem.
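
A sketch of how the two example sequences could be prepared for this many-to-one framing, assuming the one-hot encoding above (the variable names are illustrative):

from numpy import array
from keras.utils import to_categorical

sequences = [[3, 0, 1, 2, 3], [4, 0, 1, 2, 4]]

# all values except the last are input time steps; the last value is the target
X = array([to_categorical(s[:-1], num_classes=5) for s in sequences])
y = to_categorical([s[-1] for s in sequences], num_classes=5)

# X has the shape [samples, time steps, features] = (2, 4, 5)
print(X.shape, y.shape)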

Again, this problem could not be learned by a multilayer Perceptron.

3. Echo Random Integer

In this problem, random sequences of integers are generated. The model must remember an integer at a specific lag time and echo it at the end of the sequence.

For example, a random sequence of 10 integers may be:
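
4, 8, 2, 0, 9, 1, 7, 3, 6, 5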

The problem may be framed as echoing the value at the 5th time step, in this case 9.

The code below will generate random sequences of integers.
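
A minimal sketch of such a generator (the value range of 0-9 and the function name are assumptions; any modest range of integers works, especially if a one-hot encoding is used):

from random import randint

def generate_sequence(length=10, n_unique=10):
    # random integers in [0, n_unique - 1]
    return [randint(0, n_unique - 1) for _ in range(length)]

print(generate_sequence())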

Running the example will generate and print a random sequence, such as:
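
[6, 2, 4, 0, 9, 7, 3, 9, 5, 1]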

The integers can be normalized but, more preferably, a one-hot encoding can be used.

A simple framing of this problem is to echo the current input value.

For example:
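
X (input at time t), y (output at time t)
4, 4
8, 8
2, 2
...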

This trivial problem can easily be solved by a multilayer Perceptron and could be used for calibration or diagnostics of a test harness.

A more challenging framing of the problem is to echo the value at the previous time step.

For example:
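
X (input at time t), y (value at time t-1)
4, -
8, 4
2, 8
...

(The output at the first time step is undefined, as there is no previous value to echo.)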

This is a problem that cannot be solved by a multilayer Perceptron.

The index to echo can be pushed further back in time, putting more demand on the LSTM’s memory.

Unlike the “Value Memorization” problem above, a new sequence would be generated each training epoch. This would require that the model learn a generalized echo solution rather than memorize a specific sequence or sequences of random numbers.

In both cases, the problem would be modeled as a many-to-one sequence prediction problem.

4. Echo Random Subsequences

This problem also involves the generation of random sequences of integers.

Instead of echoing a single previous time step as in the previous problem, this problem requires the model to remember and output a partial sub-sequence of the input sequence.

The simplest framing would be the echo problem from the previous section. Instead, we will focus on sequence output, where the simplest framing is for the model to remember and output the whole input sequence.

For example:
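
X (input sequence): 4, 8, 2, 0, 9
y (output sequence): 4, 8, 2, 0, 9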

This could be modeled as a many-to-one sequence prediction problem where the output sequence is produced directly after the last value of the input sequence.

This can also be modeled as the network outputting one value for each input time step, e.g. a one-to-one model.

A more challenging framing is to output a partial contiguous subsequence of the input sequence.

For example:
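
X (input sequence): 4, 8, 2, 0, 9
y (output sequence): 8, 2, 0

(Here the target is an illustrative contiguous chunk of the input; which chunk to echo is a design choice when generating the data.)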

This is more challenging because the number of inputs does not match the number of outputs. A many-to-many model of this problem would require a more advanced architecture such as the encoder-decoder LSTM.

Again, a one-hot encoding would be preferred, although the problem could be modeled with normalized integer values.
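
A sketch of how input/output pairs for this framing could be generated, assuming the target is a fixed-length contiguous chunk drawn from a random position in the input (the chunk length, value range, and function name are illustrative):

from random import randint

def generate_pair(seq_length=10, sub_length=3, n_unique=10):
    # random input sequence of integers
    sequence = [randint(0, n_unique - 1) for _ in range(seq_length)]
    # choose a random contiguous subsequence as the target
    start = randint(0, seq_length - sub_length)
    return sequence, sequence[start:start + sub_length]

X, y = generate_pair()
print(X)
print(y)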

5. Sequence Classification

The problem is defined as a sequence of random values between 0 and 1. This sequence is taken as input for the problem, with one number provided per time step.

A binary label (0 or 1) is associated with each input time step. The output values are all 0 until the cumulative sum of the input values in the sequence exceeds a threshold, at which point the output value flips from 0 to 1.

A threshold of 1/4 the sequence length is used.

For example, below is a sequence of 10 input timesteps (X):
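
0.63, 0.22, 0.94, 0.78, 0.26, 0.51, 0.41, 0.68, 0.48, 0.09

(Illustrative values; the threshold for a sequence of length 10 is 10 / 4 = 2.5, which the cumulative sum first exceeds at the 4th time step.)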

The corresponding classification output (y) would be:
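
0, 0, 0, 1, 1, 1, 1, 1, 1, 1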

We can implement this in Python.
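
A minimal sketch of one possible implementation (the function name get_sequence is illustrative):

from random import random
from numpy import array, cumsum

def get_sequence(n_timesteps):
    # generate a sequence of random numbers in [0, 1)
    X = array([random() for _ in range(n_timesteps)])
    # the threshold is one quarter of the sequence length
    limit = n_timesteps / 4.0
    # the output is 0 until the cumulative sum exceeds the threshold, then 1
    y = array([0 if s <= limit else 1 for s in cumsum(X)])
    return X, y

X, y = get_sequence(10)
print(X)
print(y)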

Running the example generates a random input sequence and calculates the corresponding output sequence of binary values.

This is a sequence classification problem that can be modeled as one-to-one. State is required to interpret past time steps to correctly predict when the output sequence flips from 0 to 1.

Further Reading

This section provides more resources on the topic if you are looking to go deeper.

Summary

In this tutorial, you discovered a suite of carefully designed contrived sequence prediction problems that you can use to explore the learning and memory capabilities of LSTM recurrent neural networks.

Specifically, you learned:

  • Simple memorization tasks to test the learned memory capability of LSTMs.
  • Simple echo tasks to test the learned temporal dependence capability of LSTMs.
  • Simple arithmetic tasks to test the interpretation capability of LSTMs.

Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.

28 Responses to 5 Examples of Simple Sequence Prediction Problems for LSTMs

  1. Shantam August 1, 2017 at 1:44 am #

    Thanks a lot for your valuable insights on LSTMs. Your blogs have been a great learning platform.

    I have been lately trying different architectures using LSTMs for time series forecasting.
    In Keras, the default shape of the input tensor is [batch size, timesteps, features].
    Do you think the lags/percent changes over lags should be passed as features or as timesteps when reshaping the input tensor?

    E.g.: assuming we are using lags/percent changes over lags as features for the past 4 days (assuming no other features are discovered),
    for 5 inputs and 1 output,
    should it be modeled as [1,5] or [5,1]?

    Should the input shape of the tensor be [batch size, 1, 5 (current day’s value : past 4 days’ values)] or
    [batch size, 5, 1]?

    At a higher level, do you think LSTMs can learn better across time steps or across features within the time steps?

    Thanks

    • Jason Brownlee August 1, 2017 at 8:02 am #

      Great question. A good general answer is to brainstorm and then try/bake-off everything in terms of model skill.

      In terms of normative use, I would encourage you to treat one sample as one series of many time steps, with one or more features at each time step.

      Also, I have found LSTMs to be not super great at autoregression tasks. Please baseline performance with a well tuned MLP.

      Does that help?

  2. Shantam August 3, 2017 at 7:47 am #

    Thanks for your feedback. I have been testing multiple architecture settings (still working). I wish I had more features.

    LSTMs and other time series methods (ARIMA, ETS, HW) are working great for a many-to-one input/output setting. However, they have failed miserably for long-term forecasts.

    I am eagerly waiting for your blog on Bayesian hyper-parameter optimization.
    (P.S. I have more than 1000 nets to optimize)
    Thanks a lot! 🙂

    • Jason Brownlee August 4, 2017 at 6:43 am #

      Wow, that is a lot of nets!

      Error accumulates over longer forecast periods. The problem is hard.

  3. Forw September 18, 2017 at 7:11 pm #

    I have to implement an ANN algorithm that produces an estimate of unknown state variables using their uncertainties. Basically, it should be divided into 2 steps: in the prediction step, an estimate of the current state is created based on the state at the previous time step. In the update step, the measurement information from the current time step is used to produce a new, accurate estimate of the state. So, if I consider my system as a black box, what I have is:

    Input U = Control vector, which indicates the magnitude of the control system’s effect on the situation
    Input Z = Measurement vector, it contains the real-world measurement we received in a time step
    Output Xn = Newest estimate of the current “true” state
    Output Pn = Newest estimate of the average error for each part of the state

    The algorithm that must be implemented must rely on an RNN architecture, more specifically an LSTM. What I am struggling with is how to map these inputs/outputs of mine to those of an LSTM structure. How can I feed the network to make it compliant with my state estimation problem?

    Thanks for any kind of help!

  4. Fengbo April 19, 2018 at 10:08 am #

    Thanks for the great resources on your blogs.
    Here I have a problem that I don’t know how I could use a neural network to solve. The problem is: suppose there is a dictionary containing many single words. I have many meaningful phrases which are combinations of the words from the dictionary. The question is: when given a set of words (from the dictionary), how could I train a neural network to combine those words (adjust their sequence) and generate meaningful phrases? How should I model the input and output?
    I would really appreciate any suggestions.

  5. pip August 25, 2018 at 6:42 am #

    I am working with data that requires classifying if a patient will develop cancer or not in the future, based on medical tests done over time. The tests have a sequential relationship. A, then B, then C, etc. For example:

    | Patient ID | Test ID | RBC Count | WBC Count | Label
    | 1 | A | 4.2 | 7000 | 0
    | 1 | B | 5.3 | 12000 | 0
    | 1 | C | 2.4 | 15000 | 1
    | 2 | A | 7.6 | 8000 | 0
    | 2 | A | 7.4 | 7500 | 0

    Each point is not taken at a regular time interval, so this may not be considered time-series data. I have tried aggregating features and using ensemble methods like Random Forests. Can I apply an RNN? If so, how?

  6. greentec August 2, 2019 at 12:48 am #

    Hi. I always see your good writing.
    I think there is a slight typo in section 2 example. The first sequence is [3,1,2,3,3] and the second sequence is [4,1,2,3,4]. The inference example 1 -> 2, 2 -> 3 in the first sequence is correct, but the second sequence is the same as 1 -> 2, 2 -> 3. If the goal is to show the difference between the first sequence and the second sequence, I think it is a correct example to compare 2-> 3, 3-> 3 of the first sequence and 2-> 3, 3-> 4 of the second sequence. I may have misunderstood, but I’ll leave a comment if it helps.

    • Jason Brownlee August 2, 2019 at 6:53 am #

      I believe it is correct.

      Each example has 4 input time steps for 1 sample that must output the first value in the sequence, a “3” or “4”.

      • greentec August 4, 2019 at 8:37 pm #

        Oh, I missed this sentence. “The conflict is the transition from the second to last items in each sequence.” I thought it was a problem to predict the very next step. Thank you for your reply.

  7. Augusto September 20, 2019 at 10:06 am #

    Jason,

    Do you have an example of the “contiguous subsequences” or many-to-one model? Like this sequence:

    X (timesteps), y
    0.4, 0.5, 0.6, 0.7
    0.0, 0.2, 0.3, 0.4
    0.3, 0.4, 0.5, 0.6

    Thanks in advance

  8. Saikat Chakraborty November 9, 2019 at 11:48 pm #

    Hi! This is great information on LSTMs. I am working on a pattern recognition problem for a quasi-periodic signal. Would an ANN or an LSTM be beneficial in this case? It is known that an ANN does not have any memory concept in its model, which a basic LSTM has. But does pattern recognition actually need any “memory concept”, or is simple learning through training enough in this case?

  9. jason zhang November 25, 2019 at 8:21 pm #

    Hi, thanks for the great resources! I have a problem that I think may be a sequence prediction question, but I could not connect it to the above examples.

    The question is: I have a mixed text, such as a contract with a title. I want to segment it into blocks.
    the train dataset:
    x: mixed contract text
    y: title, chapter1, chapter2, ……, chapterN

    the test dataset:
    x: mixed contract text.
    predict y: title, chapter1, chapter2,……,chapterN.

    Could you help me to solve it?

    • Jason Brownlee November 26, 2019 at 6:03 am #

      Perhaps use a multi-input model, one for the title, one for the content?

      • jason zhang November 26, 2019 at 2:05 pm #

        Thanks for your reply. But when I do the prediction, I only have one mixed contract text; I couldn’t split it into title and content as multi-input.

        • Jason Brownlee November 26, 2019 at 4:09 pm #

          I see. Perhaps model the problem based on the data you will have and will require at prediction time.

  10. jason zhang November 26, 2019 at 6:08 pm #

    Thanks for your help!
    I found a method to resolve this problem on Medium:
    https://medium.com/illuin/https-medium-com-illuin-nested-bi-lstm-for-supervised-document-parsing-1e01d0330920.
    The problem is about text segmentation; there are some papers that focus on this problem.
    I will try it. Thanks again!

  11. Dan Wellisch January 15, 2020 at 4:07 pm #

    Hi Jason:

    I have a problem where I am trying to predict whether a stock is going to end up “in the money” or not. So, in my case, it is selling a put option at a strike price, e.g. 10, and the price of the stock is 12 when I sell the option. I want to use a handful of parameters as features. And then my dependent variable is: Yes or No (in the money or not).

    For training data, I have historical option prices, greeks (like delta, theta), implied volatility, stock prices, etc. If I try to predict “in the money” or “not”, then I suppose I could look at it as a sequence of 1s and 0s starting from the Monday I sell the option to Tues, Tues to Wed, Wed to Thurs, and Thurs to Friday, where Friday is the expiration date (assuming weekly options).

    Another way is to look at the closing prices on each day as part of the sequences.

    And another way to look at the sequence is: M,T,W,Th as the first 4 values of the sequence while trying to predict the Friday value (stock price).

    Are LSTMs good models for this type of thing? If so, what can you advise more specifically about the model characteristics? Any readings come to mind as well?
