Gentle Introduction to Models for Sequence Prediction with RNNs

Last Updated on August 25, 2019

Sequence prediction is a problem that involves using historical sequence information to predict the next value or values in the sequence.

The sequence may be symbols like letters in a sentence or real values like those in a time series of prices. Sequence prediction may be easiest to understand in the context of time series forecasting as the problem is already generally understood.

In this post, you will discover the standard sequence prediction models that you can use to frame your own sequence prediction problems.

After reading this post, you will know:

  • How sequence prediction problems are modeled with recurrent neural networks.
  • The 4 standard sequence prediction models used by recurrent neural networks.
  • The 2 most common misunderstandings made by beginners when applying sequence prediction models.

Kick-start your project with my new book Long Short-Term Memory Networks With Python, including step-by-step tutorials and the Python source code files for all examples.

Let’s get started.

Tutorial Overview

This tutorial is divided into 4 parts; they are:

  1. Sequence Prediction with Recurrent Neural Networks
  2. Models for Sequence Prediction
  3. Cardinality from Timesteps not Features
  4. Two Common Misunderstandings by Practitioners

Sequence Prediction with Recurrent Neural Networks

Recurrent Neural Networks, like Long Short-Term Memory (LSTM) networks, are designed for sequence prediction problems.

In fact, at the time of writing, LSTMs achieve state-of-the-art results in challenging sequence prediction problems like neural machine translation (translating English to French).

LSTMs work by learning a function (f(…)) that maps input sequence values (X) onto output sequence values (y).

The learned mapping function is static and may be thought of as a program that takes input variables and uses internal variables. Internal variables are represented by an internal state maintained by the network and built up or accumulated over each value in the input sequence.

… RNNs combine the input vector with their state vector with a fixed (but learned) function to produce a new state vector. This can in programming terms be interpreted as running a fixed program with certain inputs and some internal variables.

— Andrej Karpathy, The Unreasonable Effectiveness of Recurrent Neural Networks, 2015

The static mapping function may be defined with a different number of inputs or outputs, as we will review in the next section.
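
The idea of a fixed program combining an input vector with a state vector can be sketched directly. Below is a minimal numpy illustration with toy sizes and random, untrained weights; the name `rnn_step` and the dimensions are chosen here for illustration only, not taken from any library.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 3, 5  # toy sizes for illustration

# Fixed (but, in a real network, learned) parameters of the mapping function
Wx = rng.normal(size=(n_hidden, n_in))
Wh = rng.normal(size=(n_hidden, n_hidden))
b = np.zeros(n_hidden)

def rnn_step(x, h):
    """One application of the fixed function: combine the input vector
    with the state vector to produce a new state vector."""
    return np.tanh(Wx @ x + Wh @ h + b)

# The internal state starts at zero and is accumulated over the sequence
h = np.zeros(n_hidden)
for x in rng.normal(size=(4, n_in)):  # a sequence of 4 input vectors
    h = rnn_step(x, h)
```

The same `rnn_step` is applied at every time step; only the state changes, which is exactly the "fixed program with internal variables" view in the quote.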


Models for Sequence Prediction

In this section, we will review the 4 primary models for sequence prediction.

We will use the following terminology:

  • X: The input sequence value, may be delimited by a time step, e.g. X(1).
  • u: The hidden state value, may be delimited by a time step, e.g. u(1).
  • y: The output sequence value, may be delimited by a time step, e.g. y(1).

One-to-One Model

A one-to-one model produces one output value for each input value.

One-to-One Sequence Prediction Model

The internal state for the first time step is zero; from that point onward, the internal state is accumulated over the prior time steps.

One-to-One Sequence Prediction Model Over Time

In the case of sequence prediction, this model would produce one forecast time step for each observed time step received as input.

This is a poor use of RNNs, as the model has no opportunity to learn over input or output time steps (e.g. via backpropagation through time, BPTT). If you find yourself implementing this model for sequence prediction, you may instead intend to use a many-to-one model.
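
As a sketch of this framing, the following toy numpy loop (untrained random weights; sizes are hypothetical) emits one output for each observed input step:

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hidden, n_out = 2, 4, 1  # toy sizes for illustration
Wx = 0.1 * rng.normal(size=(n_hidden, n_in))
Wh = 0.1 * rng.normal(size=(n_hidden, n_hidden))
Wy = 0.1 * rng.normal(size=(n_out, n_hidden))

X = rng.normal(size=(5, n_in))  # 5 observed time steps
h = np.zeros(n_hidden)          # internal state is zero at the first step
outputs = []
for x in X:
    h = np.tanh(Wx @ x + Wh @ h)  # state accumulated over prior time steps
    outputs.append(Wy @ h)        # one forecast emitted per observed step
```

One output per input, step for step; nothing here learns across a window of steps, which is why the framing is usually a poor fit.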

One-to-Many Model

A one-to-many model produces multiple output values for one input value.

One-to-Many Sequence Prediction Model

The internal state is accumulated as each value in the output sequence is produced.

This model can be used for image captioning, where one image is provided as input and a sequence of words is generated as output.
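
A toy sketch of the one-to-many loop follows (random untrained weights; sizes are illustrative). Note that in practice a decoder usually also feeds its previous prediction, or a start-of-sequence token, back in as the next input; that refinement is omitted here to keep the shape of the model clear.

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_hidden, n_out = 3, 4, 1  # toy sizes for illustration
Wx = 0.1 * rng.normal(size=(n_hidden, n_in))
Wh = 0.1 * rng.normal(size=(n_hidden, n_hidden))
Wy = 0.1 * rng.normal(size=(n_out, n_hidden))

x = rng.normal(size=n_in)  # a single input value (e.g. an image encoding)
h = np.tanh(Wx @ x)        # state initialised from the one input
outputs = []
for _ in range(4):         # emit 4 output time steps
    h = np.tanh(Wh @ h)    # state accumulated as each output is produced
    outputs.append(Wy @ h)
```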

Many-to-One Model

A many-to-one model produces one output value after receiving multiple input values.

Many-to-One Sequence Prediction Model

The internal state is accumulated with each input value before a final output value is produced.

In the case of time series, this model would use a sequence of recent observations to forecast the next time step. This architecture would represent the classical autoregressive time series model.
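
The many-to-one shape can be sketched in the same toy style (untrained random weights; sizes are illustrative): the state is accumulated over every input value, and a single output is produced only at the end.

```python
import numpy as np

rng = np.random.default_rng(3)
n_in, n_hidden = 1, 4  # toy sizes for illustration
Wx = 0.1 * rng.normal(size=(n_hidden, n_in))
Wh = 0.1 * rng.normal(size=(n_hidden, n_hidden))
Wy = 0.1 * rng.normal(size=(1, n_hidden))

X = rng.normal(size=(6, n_in))  # 6 recent observations
h = np.zeros(n_hidden)
for x in X:                      # state accumulated with each input value
    h = np.tanh(Wx @ x + Wh @ h)
y = Wy @ h                       # a single output: the next-step forecast
```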

Many-to-Many Model

A many-to-many model produces multiple outputs after receiving multiple input values.

Many-to-Many Sequence Prediction Model

As with the many-to-one case, state is accumulated until the first output is created, but in this case multiple time steps are output.

Importantly, the number of input time steps does not have to match the number of output time steps. Think of the input and output time steps as operating at different rates.

In the case of time series forecasting, this model would use a sequence of recent observations to make a multi-step forecast.

In a sense, it combines the capabilities of the many-to-one and one-to-many models.
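
That combination can be sketched by chaining the two loops above into a toy, untrained encoder-decoder (weight names and sizes are illustrative): a many-to-one pass accumulates state over the inputs, then a one-to-many pass emits the outputs.

```python
import numpy as np

rng = np.random.default_rng(4)
n_in, n_hidden = 1, 4  # toy sizes for illustration
Wx = 0.1 * rng.normal(size=(n_hidden, n_in))
Wh = 0.1 * rng.normal(size=(n_hidden, n_hidden))
Wd = 0.1 * rng.normal(size=(n_hidden, n_hidden))  # decoder transition
Wy = 0.1 * rng.normal(size=(1, n_hidden))

X = rng.normal(size=(6, n_in))   # 6 input time steps
h = np.zeros(n_hidden)
for x in X:                       # many-to-one half: accumulate state
    h = np.tanh(Wx @ x + Wh @ h)

outputs = []
for _ in range(3):                # one-to-many half: emit 3 output steps
    h = np.tanh(Wd @ h)
    outputs.append(Wy @ h)

# 6 input steps and 3 output steps: the counts need not match
```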

Cardinality from Timesteps (not Features!)

A common point of confusion is to conflate the sequence mapping models described above with the number of input and output features.

A sequence may be comprised of single values, one for each time step.

Alternately, a sequence could just as easily represent a vector of multiple observations at the time step. Each item in the vector for a time step may be thought of as its own separate time series. It does not affect the description of the models above.

For example, a model that takes as input one time step of temperature and pressure and predicts one time step of temperature and pressure is a one-to-one model, not a many-to-many model.

Multiple-Feature Sequence Prediction Model

The model does take two values as input and predicts two values, but there is only a single sequence time step expressed for the input and predicted as output.

The cardinality of the sequence prediction models defined above refers to time steps, not features (e.g. univariate or multivariate sequences).
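
In array terms, using the common (samples, time steps, features) layout that libraries such as Keras expect for recurrent layers, the distinction looks like this (the shapes here are illustrative):

```python
import numpy as np

# Two features but a single time step: still a one-to-one framing
X_one_to_one = np.zeros((100, 1, 2))  # (samples, time steps=1, features=2)

# One feature over three time steps: a many-to-one (or many-to-many) framing
X_many = np.zeros((100, 3, 1))        # (samples, time steps=3, features=1)

# The second axis (time steps), not the third (features), sets the
# cardinality of the sequence prediction model.
```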

Two Common Misunderstandings by Practitioners

Confusing features with time steps leads to two main misunderstandings when practitioners implement recurrent neural networks:

1. Timesteps as Input Features

Observations at previous timesteps are framed as input features to the model.

This is the classical fixed-window approach to framing sequence prediction problems used by multilayer perceptrons. Instead, the sequence should be fed in one time step at a time.

This confusion may lead you to think you have implemented a many-to-one or many-to-many sequence prediction model when in fact you only have a single vector input for one time step.

2. Timesteps as Output Features

Predictions at multiple future time steps are framed as output features to the model.

This is the classical fixed-window approach to making multi-step predictions used by multilayer perceptrons and other machine learning algorithms. Instead, the sequence predictions should be generated one time step at a time.

This confusion may lead you to think you have implemented a one-to-many or many-to-many sequence prediction model when in fact you only have a single vector output for one time step (e.g. seq2vec not seq2seq).

Note: framing timesteps as features in sequence prediction problems is a valid strategy, and could lead to improved performance even when using recurrent neural networks (try it!). The important point here is to understand the common pitfalls and not trick yourself when framing your own prediction problems.
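
To make the contrast concrete, here is a toy numpy framing of the same lag window both ways (the variable names are illustrative):

```python
import numpy as np

series = np.arange(10.0)  # a toy univariate series: 0, 1, ..., 9
window = 3

# Lag observations framed as input *features*: each row is one time step
# with 3 features (the MLP-style, fixed-window framing)
X_features = np.array([series[i:i + window]
                       for i in range(len(series) - window)])

# The same windows framed as *time steps*: 3 time steps of 1 feature each
# (the framing an RNN can learn across via BPTT)
X_timesteps = X_features.reshape(-1, window, 1)
```

Both arrays hold identical numbers; only the framing, and therefore the kind of model you have actually built, differs.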

Summary
In this tutorial, you discovered the standard models for sequence prediction with recurrent neural networks.

Specifically, you learned:

  • How sequence prediction problems are modeled with recurrent neural networks.
  • The 4 standard sequence prediction models used by recurrent neural networks.
  • The 2 most common misunderstandings made by beginners when applying sequence prediction models.

Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.


68 Responses to Gentle Introduction to Models for Sequence Prediction with RNNs

  1. Raan July 19, 2017 at 4:34 am #

    Thanks for the article. This is very useful. Do you have any examples of forecasting multivariate time series using RNN?

    • Jason Brownlee July 19, 2017 at 8:30 am #

      I should have one on the blog soon, it has been scheduled.

  2. mriazi July 20, 2017 at 10:27 am #

    Hi Jason,

    Thank you very much for your great article and the fabulous blog. I’ve been following you blog
    for a few months now and read most of your articles on RNNs.
    Like you have mentioned above, I’m struggling to correctly model my time-series prediction problem. It’ll be great if you can help me on this.
    I have samples of sensor readings each a vector of 64 timesteps. I would like to use LSTM to learn the structure of the series and predict the next 64 timesteps.
I think I will need to use a Many-to-Many model so that the model learns the input and predicts the output (64 values) based on what it has learned. I'm trying to use LSTM for an unsupervised anomaly detection problem. I guess what I'm struggling with is that I want my model to learn the most common structure in my long time series, and I'm confused about what my input should be.
    Sorry, for the long description.
    Many thanks

    • Jason Brownlee July 21, 2017 at 9:26 am #

      I would recommend modeling it as a many-to-many supervised learning problem.

      Sorry, I don’t have experience using LSTMs for unsupervised problems, I need to do some reading.

  3. Paul August 2, 2017 at 3:22 pm #

    Hi, Jason. I’m always thankful that you posted great examples and posts.
    I have simple question.
    For predicting/forecasting time series data, are Multilayer NN and RNN(LSTM) techniques the best way to forecasting future data?

    Thank you in advance.


    • Jason Brownlee August 3, 2017 at 6:43 am #

      There is no best way, I would encourage you to evaluate a suite of methods and see what works best for your problem.

  4. Gustavo August 12, 2017 at 5:16 am #

Is sequence learning the same as online learning? What are the differences?

    • Jason Brownlee August 12, 2017 at 6:54 am #

      Hi Gustavo,

      No, a sequence is the structure of the data and prediction problem.

      Learning can be online or offline for sequence prediction the same as simpler regression and classification.

      Does that help?

      • Gustavo August 14, 2017 at 10:30 pm #

        Help indeed thanks best regards

  5. hirohi August 21, 2017 at 12:18 pm #

In the case of Many2Many and One2Many in this post, how do you compute the hidden states at a time step when there is no input? Specifically, in One2Many, how do you compute “u(1)” despite the lack of “X(2)”? I think we can only compute Y(1), Y(2), Y(3) as a vector. If I am wrong, could you tell me why, with examples such as image captioning or machine translation?

    • Jason Brownlee August 21, 2017 at 4:23 pm #

      Great question!

      It is common to teach the model with “start seq” and “end seq” inputs at the beginning and end of sequences to kick-off or close-off the sequence input or output.

      I have used this approach myself with image captioning models and translation.

      • hirohi August 22, 2017 at 11:33 am #

        I investigated many2many(encoder-decoder). As you said, we feed “start” to LSTM to compute “u(1)”. My question included “what the input is necessary to compute “u(2)”. As the result of my investigation, we have to feed “y(2)” to compute “u(2)”.

        The below image is more accurate, right?

        • Jason Brownlee August 23, 2017 at 6:38 am #

          Yes, that is one way.

          Remember to explore many different framings of the problem to see what works best for your specific data.

          • hirohi August 23, 2017 at 12:30 pm #

            OK, thanks! I’ll try it!

  6. mrresearcher September 6, 2017 at 11:38 pm #

I'm facing a problem of one-to-many sequence prediction, where given a set of input parameters for a program the model should generate values of resource usage as a function of time (CPU, memory etc.). I have some examples from real-world programs and I already tried simple feed-forward networks, but now I'm trying to find a state-of-the-art solution for the one-to-many sequence generating problem. Until now I've only found the image captioning example, but it is tailored for predicting words instead of real values. Are you aware of any state-of-the-art solutions for generating one-to-many sequences? If you do, I would be grateful for any references. Thanks!

    • Jason Brownlee September 7, 2017 at 12:56 pm #

      Caption generation would provide a good model or starting point for your problem.

      No CNN front end of course, a big MLP perhaps instead.

      Does that help? I’m eager to hear how you go.

  7. Sama November 30, 2017 at 10:26 am #

Dear Dr, I have an important question. Can an RNN accumulate knowledge? For example, can I continuously train the network to build bigger knowledge, or is it trained only once? And if it can continuously learn, how can I do that?

    • Jason Brownlee November 30, 2017 at 2:46 pm #

      Good question.

      You can update the model after it is trained.

  8. Sharan December 29, 2017 at 2:54 pm #


    I am trying to apply ML for a specific problem I want to solve.

    Below is the problem statement:

    I have a system that is made of many functional blocks. These communicate with each other through events. When the system runs, the log of these events history is generated.

    From past experience, I know what the interesting sequences are. I would now like to parse through these event log and see if any of the sequences fall in the interesting category that is known a-priori. One thing to note is that time duration can vary while sequence is intact.
For example, event1 t1 event2 t2 event3. Between the example and the actual sequence, the values of t1 and t2 can vary but the sequence of events (event1 -> event2 -> event3) remains.

    Manually doing this is tedious as there can be millions of such events when the system runs.

Can you suggest the best approach to solve this issue?

  9. Arnold Loaiza April 9, 2018 at 11:02 am #

    Hello Jason, I have a query about a sequence prediction problem where an author used lstm with dense layer for the potential of this combination.
    The problem is to use 20 units of time from the past to predict T units of time. For example, predict the sequence of the next 5 units of time. So each sample has 20 units of time where each unit of time is a vector with 10 characteristics.

    X = ( samples, 20, 10)
    Y = (50)

    As you can see the respective “Y” for each sample is a vector of 50 units, which represents the units of time to predict, a time with its respective vector of 10 characteristics concatenated with the remaining 4 times, in total 50. In keras it would be presented in this way:

    model = Sequential()
    model.add(LSTM(500, input_shape=(20, 10)))
    model.add(Dense(10 * 5))  # 5 time steps with a vector of 10 characteristics each
    model.compile(optimizer='rmsprop', loss='mse')

According to what I read in this post, it would be a form of a vector, because it is sending its last internal state H as an output and that is being used as a characteristic vector that trains against the desired outputs of the following 5 times. The amazing thing is that this architecture learns; it is not the best but it gets very close, and it beats methods like SAE and ANN. Finally I tested this with my dataset with different output sequences for 10 times, 15 times, 20 times in the future, just by increasing the number of desired output neurons; it's like magic.

What would your opinion be? Is it a Seq-to-Vector model? Can it be done in a more effective way? Thank you very much.

  10. Shubhashis June 1, 2018 at 12:04 am #

    Hello Jason,

    I’m confused with the figure of “One-to-One Sequence Prediction Model Over Time”, and “Many-to-Many Sequence Prediction Model”.

    For one to one model, here is a Keras code snippet –

    model = Sequential()
    model.add(LSTM(….., input_shape=(1, ….)))

    Now, according to the figure of “One-to-One Sequence Prediction Model Over Time”, I’m assuming the Keras implementation will be –

    model = Sequential()
    model.add(LSTM(….., input_shape=(time_steps, ….), return_sequences=True))

    Now this seems oddly familiar to “Many to Many Sequence Prediction”, where the number of input features are equal to number of output features.

    Please let me know where I misunderstood. Also, for the figure, “One-to-One Sequence Prediction Model Over Time”, what would be the correct implementation with Keras?


    Btw, Great article on the Time Series prediction 🙂

    • Jason Brownlee June 1, 2018 at 8:21 am #

      The “over time” is just the application of the same model to each time step. No difference to the model, just the data.

      • Shubhashis June 1, 2018 at 1:30 pm #

        So, if there are multiple time steps for a one-to-one model, you are saying that the model would be the same, that is, the model would be –

        model = Sequential()
        model.add(LSTM(….., input_shape=(1, ….)))

        But, this means that there is only 1 time step. How multiple time steps would fit into this?

        • Jason Brownlee June 1, 2018 at 2:49 pm #

          I see, I believe you are describing a many to many model.

          • Shubhashis June 2, 2018 at 12:23 am #

            Ok, if so, then I think, the figure that you’ve shown for “One-to-One Sequence Prediction Model Over Time” should be a “Many to Many” model instead.

            Because the only logical Keras implementation I could think for that is –

            model = Sequential()
model.add(LSTM(….., input_shape=(n, ….), return_sequences=True))

            Which does not seem like a “one-to-one” model. Rather a “many-to-many” instead.

            Please let me know if this is clear.

            I can mail you in detail if you think the question that I’m asking is not sufficient to describe the problem.

          • Jason Brownlee June 2, 2018 at 6:32 am #

            Your code is a many to many, not one to one.

  11. Joe wang June 16, 2018 at 3:11 am #

    Hi Jason,

Thank you for the blog and it is very helpful. I have a question regarding the many-to-one structure: when we try to use a many-to-one model to make a prediction, we also need to have a sequence as the input (containing the same number of time steps as the training data), do I understand correctly? Or could we just feed the features at one time stamp to get the predictions?

    • Jason Brownlee June 16, 2018 at 7:31 am #

It means multiple time steps as input, then multiple time steps as output.

It could be actual time series, or words in a sentence, or other observations ordered by time.

  12. Victor September 11, 2018 at 11:15 pm #

    Hi Jason

    Thank you very much for your wonderful article.

    I am pretty new in the field and I am sure I have not yet fully understood.

    If I want to use the power of NN to predict the temperature for example, using the time sequence temperature, pressure, humidity n etc at each time frame as input, what network is it? is it best to use LSTM RNN?

    The architecture of the model that I am considering is.

    1. time sequence value of temperature, T[], which produces a temporary output O1 at time t
    2. time sequence value of pressure, P[], which produces a temporary output O2 at time t
    3. time sequence value of humidity, H[], which produces a temporary output O3 at time t
    4. finally, O1, O2, O3 will be used to generate the final output at time t, which is the model prediction of the temperature.

    Do I actually need to have 4 independent NN? or only 1 which takes all the time sequence features?

And do I really need an RNN? I don't think I need to feed my prediction back into the network, as I can keep feeding the latest measurements as input.

    Much appreciate for your time to answer my question.

  13. Tunay October 3, 2018 at 7:02 pm #

    Hi Jason,

    can you please suggest some reading on “strategies on framing timesteps as features in sequence prediction problems” ?

    i am having hard time finding relevant literature 🙂

    • Jason Brownlee October 4, 2018 at 6:14 am #

      No literature needed, it’s a simple change in code from using past observations as time steps to instead using them as features on a time step.

      • Tunay October 7, 2018 at 12:05 pm #

        Oh I see. I actually wanted to use the observations at the timesteps only as output features, without using RNNs.

        To elaborate on that; all the input features are for t=0 and these inputs are different kind of data than the output feature. There is only one kind of output feature and it varies over time.
        So I have:
        X_1, X_2, … , X_n for t=0 and
        y_t=0, y_t=1, …, y_t=m

        I thought of employing one-to-many RNN (I am not sure if this is a valid case for this!?)
        but then I thought maybe I can also frame the different timesteps as different output features and develop a simple feedforward network with backpropagation without using RNN at all.

        Do you think this is a valid strategy?

  14. Javier June 17, 2019 at 5:09 pm #

    Great article and great site Jason. Thanks to you I’m learning a lot in ML.

    I’m currently facing a sequence prediction problem and I have a doubt.

    Is it possible to improve the sequence prediction using some other data apart from the sequence? For example predicting the next symptoms on a patient at a hospital not only using the sequence from other user but the patient age, sex, etc.

    The aproaches I’ve found always base their prediction only on the previous sequences, but what if we have some other data that we think can improve the results?

  15. skyrim4ever July 10, 2019 at 3:18 pm #

    I am little confused what I have learned so far so I would like to ask for clarification. This is my last question, will not be spamming you questions anymore after this!

    Suppose we have time series [10, 20, 30, 40, 50, 60, 70, 80, 90]. For example, amount of profits for each day, so this is univariate time series problem. We want to predict next day profits (one-step-forecasting) so now I turn this into supervised time series prediction problem.

    In this 1st case, it is about one-to-one modeling since we have one input for one output. Training phase would look like something like this:
    [t] -> [t+1]
    [10] -> [20]
    [20] -> [30]
    [30] -> [40]

    [80] -> [90]

    In this 2nd case, it is many-to-one according to your blog here. One input series to one output.
    [t-2, t-1, t] -> [t+1]
    [10, 20, 30] -> [40]
    [40, 50, 60] -> [70]

    In this 3rd case, is it many-to-one since we still have one input series for one output?
    [t-2, t-1, t] -> [t+1]
    [10, 20, 30] -> [40]
    [20, 30, 40] -> [50]
    [30, 40, 50] -> [60]

    [60, 70, 80] -> [90]

So far I have built an LSTM forecasting model as in the 1st case, and the result is OK. But I am thinking of improving it by building an LSTM forecasting model as in the 3rd case; do you think this kind of modeling makes sense? The 2nd case seems harder for the model to learn since it has fewer training examples.

    • Jason Brownlee July 11, 2019 at 9:44 am #

      Yes, that is many to one, many time steps to one time step.

      Yes, it could result in better performance. Also try other models like CNNs and more for comparison.

  16. Rao July 26, 2019 at 4:43 am #

    Great post! I am currently working on Predictive maintenance of devices using analysis of log files. The goal is to sort of cluster the log messages into different workflows (or activities) by analysing the patterns of the log messages and to predict an error whenever it deviates from such activities. The first step was to cluster the content of messages using BOW approach and assign them an ID so we now have sequence of IDs instead of sequence of messages. I am currently exploring HMMs and neural network models to find patterns in the sequence of IDs.

    Any insight on forming or recognizing cluster of patterns to form workflows or to improve forming of message IDs would be highly appreciated.

    • Jason Brownlee July 26, 2019 at 8:32 am #

      Sounds like a great project.

      Sorry, I don’t have any posts on clustering, I hope to get to it in the future.

  17. Asad Khan October 12, 2019 at 8:26 pm #

    Can we use this for Genomics sequences?

  18. Ayman October 17, 2019 at 6:19 pm #

    Let’s say that I have static features associated with every sequence. (i.e. I have temperature/second(sequence) and elevation (static)). How can I incorporate my static features into such models ?

    Thanks Jason 🙂

    • Jason Brownlee October 18, 2019 at 5:48 am #

      Perhaps a time series with a fixed value. Perhaps as a separate input to the model, e.g. multi-input model.

  19. Muthu December 18, 2019 at 6:20 am #

    Hi Jason, Thank you so much for this article.

I have input data of n samples, for one variable, for 1000 timesteps. Hence my input is of the form [1 * 1000] and my output is a single number, let's say 85.

    1 ) Is this Many to One problem ?

    2 ) which LSTM model will be better ?

    • Jason Brownlee December 18, 2019 at 1:26 pm #

      Probably many to one.

      Perhaps start with some of these models:

      • Firas Obeid August 28, 2020 at 4:47 am #

        So is this the same case as a tensorsliced sequence of shape lets say :
        ((32, 400), (32, 400))
        where 32 is a batch size and the ((, 400), (, 400)) is sequence length ((, IN(Xi), (, OUT(Yi))?


        • Firas Obeid August 28, 2020 at 5:21 am #

          Adding to that, the model is updating its hidden state and predicting a probability distribution in each X 399 times.

        • Jason Brownlee August 28, 2020 at 6:56 am #

          I don’t follow, sorry. What is a tensorslice sequence?

          • Firas Obeid August 28, 2020 at 9:03 am #

            Sorry I didn’t elaborate;
I used `, y))` to convert my sequence to the following format:

my X was a sequence of length 400 for each X, and my Y is the X shifted by one index:
            X = []
            y = []
            for i in sequences:
            X.shape, y.shape –> ((17433, 400), (17433, 400))

          • Jason Brownlee August 28, 2020 at 10:21 am #

            Sorry, I am not familiar with that function, I cannot give you good advice about it.

            Perhaps try posting your question to stackoverflow.

  20. Francis February 4, 2021 at 1:35 am #

    Supposing I want to predict the next sequence of letters in this table:

    A B J Q E R
    T W U O Z X
    R O P J K L
    E D F V B T
    Q S D W E T
    V B N K M O
    E Z T V B T
    Q S A Q E T
    R Q A J S L
    E D X V Y T

    1) How many inputs must be used for the neural network on this?
    2) How many hidden layers must be used?
    3) Must I have a prepared array of text character values initialized for the program?

  21. Francis February 4, 2021 at 1:37 am #

    By the way, the question I posed earlier is a “Many to Many” problem – that is, for the table of letter sequences, 5 possible outcomes from it are expected…

  22. Michail G. June 18, 2021 at 5:31 pm #

    Hi Jason,

    Does it makes sense to use an Encoder-Decoder model, for a many-to-one forecasting scenario?

I am wondering what the role of the decoder is in the one-step prediction case, since I think passing the output of the encoder through a dense layer would be similar.

    • Jason Brownlee June 19, 2021 at 5:48 am #

      Perhaps try it and compare results to other methods, use it if it gives better results.

  23. Raul Pereira July 2, 2021 at 5:05 pm #

    Hi Jason,

    Thank you for all the information and insights you share through your splendid blog

    I had one problem, and would like to know how we could use neural networks to solve it.

It is a sequence-of-lists-to-sequence problem, where the input is a sequence of lists, each list with a fixed number of integers (e.g. [3,4,2,7]), so there are many such lists in sequence (think of it like a 2D matrix). The output is a label (number) given to each list: if the sum of the integers in a list is the highest among all the lists, then its label is 1; if the sum is the lowest, then its label is 0; if the sum is somewhere in between, then the label is a fraction between 0 and 1, depending on the sum. The higher the sum, the closer the list's label is to 1. Think of the label as a sorting number (it can lie anywhere within 0 and 1; 0 means lowest sum, while 1 means highest sum).

    Do you have any examples or information or any insights on how to model the above problem using Neural networks i.e to train a neural network to sort a sequence of lists based on the sum of integers in each list?

    Thank you so much Jason!
