The encoder-decoder model provides a pattern for using recurrent neural networks to address challenging sequence-to-sequence prediction problems such as machine translation.
Encoder-decoder models can be developed in the Keras Python deep learning library. An example of a neural machine translation system developed with this model has been described on the Keras blog, with sample code distributed with the Keras project.
This example can provide the basis for developing encoder-decoder LSTM models for your own sequence-to-sequence prediction problems.
In this tutorial, you will discover how to develop a sophisticated encoder-decoder recurrent neural network for sequence-to-sequence prediction problems with Keras.
After completing this tutorial, you will know:
- How to correctly define a sophisticated encoder-decoder model in Keras for sequence-to-sequence prediction.
- How to define a contrived yet scalable sequence-to-sequence prediction problem that you can use to evaluate the encoder-decoder LSTM model.
- How to apply the encoder-decoder LSTM model in Keras to address the scalable integer sequence-to-sequence prediction problem.
Let’s get started.

Tutorial Overview
This tutorial is divided into 3 parts; they are:
- Encoder-Decoder Model in Keras
- Scalable Sequence-to-Sequence Problem
- Encoder-Decoder LSTM for Sequence Prediction
Python Environment
This tutorial assumes you have a Python SciPy environment installed. You can use either Python 2 or 3 with this tutorial.
You must have Keras (2.0 or higher) installed with either the TensorFlow or Theano backend.
The tutorial also assumes you have scikit-learn, Pandas, NumPy, and Matplotlib installed.
If you need help with your environment, see this post:
Encoder-Decoder Model in Keras
The encoder-decoder model is a way of organizing recurrent neural networks for sequence-to-sequence prediction problems.
It was originally developed for machine translation problems, although it has proven successful at related sequence-to-sequence prediction problems such as text summarization and question answering.
The approach involves two recurrent neural networks, one to encode the source sequence, called the encoder, and a second to decode the encoded source sequence into the target sequence, called the decoder.
The Keras deep learning Python library provides an example of how to implement the encoder-decoder model for machine translation (lstm_seq2seq.py), described by the library’s creator in the post: “A ten-minute introduction to sequence-to-sequence learning in Keras.”
Using the code in that example as a starting point, we can develop a generic function to define an encoder-decoder recurrent neural network. Below is this function named define_models().
# returns train, inference_encoder and inference_decoder models
def define_models(n_input, n_output, n_units):
    # define training encoder
    encoder_inputs = Input(shape=(None, n_input))
    encoder = LSTM(n_units, return_state=True)
    encoder_outputs, state_h, state_c = encoder(encoder_inputs)
    encoder_states = [state_h, state_c]
    # define training decoder
    decoder_inputs = Input(shape=(None, n_output))
    decoder_lstm = LSTM(n_units, return_sequences=True, return_state=True)
    decoder_outputs, _, _ = decoder_lstm(decoder_inputs, initial_state=encoder_states)
    decoder_dense = Dense(n_output, activation='softmax')
    decoder_outputs = decoder_dense(decoder_outputs)
    model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
    # define inference encoder
    encoder_model = Model(encoder_inputs, encoder_states)
    # define inference decoder
    decoder_state_input_h = Input(shape=(n_units,))
    decoder_state_input_c = Input(shape=(n_units,))
    decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]
    decoder_outputs, state_h, state_c = decoder_lstm(decoder_inputs, initial_state=decoder_states_inputs)
    decoder_states = [state_h, state_c]
    decoder_outputs = decoder_dense(decoder_outputs)
    decoder_model = Model([decoder_inputs] + decoder_states_inputs, [decoder_outputs] + decoder_states)
    # return all models
    return model, encoder_model, decoder_model
The function takes 3 arguments, as follows:
- n_input: The cardinality of the input sequence, e.g. number of features, words, or characters for each time step.
- n_output: The cardinality of the output sequence, e.g. number of features, words, or characters for each time step.
- n_units: The number of cells to create in the encoder and decoder models, e.g. 128 or 256.
The function then creates and returns 3 models, as follows:
- train: Model that can be trained given source, target, and shifted target sequences.
- inference_encoder: Encoder model used when making a prediction for a new source sequence.
- inference_decoder: Decoder model used when making a prediction for a new source sequence.
The model is trained given source and target sequences where the model takes both the source and a shifted version of the target sequence as input and predicts the whole target sequence.
For example, one source sequence may be [1,2,3] and the target sequence [4,5,6]. The inputs and outputs to the model during training would be:
Input1: ['1', '2', '3']
Input2: ['_', '4', '5']
Output: ['4', '5', '6']
The model is intended to be called recursively when generating target sequences for new source sequences.
The source sequence is encoded and the target sequence is generated one element at a time, using a “start of sequence” character such as ‘_’ to start the process. Therefore, in the above case, the following input-output pairs would occur during training:
t, Input1, Input2, Output
1, ['1', '2', '3'], '_', '4'
2, ['1', '2', '3'], '4', '5'
3, ['1', '2', '3'], '5', '6'
Here you can see how the model can be used recursively to build up output sequences.
During prediction, the inference_encoder model is used to encode the input sequence once, returning the states that are used to initialize the inference_decoder model. From that point, the inference_decoder model is used to generate predictions step by step.
The function below named predict_sequence() can be used after the model is trained to generate a target sequence given a source sequence.
# generate target given source sequence
def predict_sequence(infenc, infdec, source, n_steps, cardinality):
    # encode
    state = infenc.predict(source)
    # start of sequence input
    target_seq = array([0.0 for _ in range(cardinality)]).reshape(1, 1, cardinality)
    # collect predictions
    output = list()
    for t in range(n_steps):
        # predict next char
        yhat, h, c = infdec.predict([target_seq] + state)
        # store prediction
        output.append(yhat[0,0,:])
        # update state
        state = [h, c]
        # update target sequence
        target_seq = yhat
    return array(output)
This function takes 5 arguments as follows:
- infenc: Encoder model used when making a prediction for a new source sequence.
- infdec: Decoder model used when making a prediction for a new source sequence.
- source: Encoded source sequence.
- n_steps: Number of time steps in the target sequence.
- cardinality: The cardinality of the output sequence, e.g. the number of features, words, or characters for each time step.
The function then returns the predicted target sequence as an array containing one probability vector per output time step.
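For example, a minimal usage sketch (assuming infenc and infdec come from define_models() after training, and source is a one-hot encoded array of shape (1, n_steps_in, n_features)):

# hypothetical call: generate 3 output time steps with a cardinality of 51
yhat = predict_sequence(infenc, infdec, source, 3, 51)
print(yhat.shape) # (3, 51): one probability vector per predicted time step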
Scalable Sequence-to-Sequence Problem
In this section, we define a contrived and scalable sequence-to-sequence prediction problem.
The source sequence is a series of randomly generated integer values, such as [20, 36, 40, 10, 34, 28], and the target sequence is a reversed pre-defined subset of the input sequence, such as the first 3 elements in reverse order [40, 36, 20].
The length of the source sequence is configurable; so is the cardinality of the input and output sequence and the length of the target sequence.
We will use source sequences of 6 elements, a cardinality of 50, and target sequences of 3 elements.
Below are some more examples to make this concrete.
Source, Target
[13, 28, 18, 7, 9, 5] [18, 28, 13]
[29, 44, 38, 15, 26, 22] [38, 44, 29]
[27, 40, 31, 29, 32, 1] [31, 40, 27]
...
You are encouraged to explore larger and more complex variations. Post your findings in the comments below.
Let’s start off by defining a function to generate a sequence of random integers.
We will use the value of 0 as the padding or start-of-sequence character; therefore, it is reserved and cannot be used in our source sequences. To achieve this, we will add 1 to our configured cardinality to ensure the one-hot encoding is large enough (e.g. a value of 1 maps to a ‘1’ value in index 1).
For example:
n_features = 50 + 1
We can use the randint() Python function to generate random integers in a range between 1 and the cardinality minus 1 (the value 0 is reserved). The generate_sequence() function below generates a sequence of random integers.
# generate a sequence of random integers
def generate_sequence(length, n_unique):
    return [randint(1, n_unique-1) for _ in range(length)]
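For example, drawing one source sequence of 6 values with our cardinality of 51 might look like this (the exact values will vary because they are random):

# generate one random source sequence of 6 integers in [1, 50]
sequence = generate_sequence(6, 51)
print(sequence) # e.g. [20, 36, 40, 10, 34, 28]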
Next, we need to create the corresponding output sequence given the source sequence.
To keep things simple, we will select the first n elements of the source sequence as the target sequence and reverse them.
# define target sequence
target = source[:n_out]
target.reverse()
We also need a version of the output sequence shifted forward by one time step that we can use as the mock target generated so far, including the start-of-sequence value in the first time step. We can create this from the target sequence directly; for example, a target of [40, 36, 20] yields the shifted input [0, 40, 36].
# create padded input target sequence
target_in = [0] + target[:-1]
Now that all of the sequences have been defined, we can one-hot encode them, i.e. transform them into sequences of binary vectors. We can use the built-in Keras to_categorical() function to achieve this.
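As a quick sketch of what the encoding looks like, here is a toy sequence encoded with a cardinality of 4 (the exact print formatting may vary by version):

from keras.utils import to_categorical
# each value maps to a vector with a 1 at that value's index
print(to_categorical([0, 2, 1], num_classes=4))
# [[ 1.  0.  0.  0.]
#  [ 0.  0.  1.  0.]
#  [ 0.  1.  0.  0.]]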
We can put all of this into a function named get_dataset() that will generate a specific number of sequences that we can use to train a model.
# prepare data for the LSTM
def get_dataset(n_in, n_out, cardinality, n_samples):
    X1, X2, y = list(), list(), list()
    for _ in range(n_samples):
        # generate source sequence
        source = generate_sequence(n_in, cardinality)
        # define target sequence
        target = source[:n_out]
        target.reverse()
        # create padded input target sequence
        target_in = [0] + target[:-1]
        # encode
        src_encoded = to_categorical([source], num_classes=cardinality)
        tar_encoded = to_categorical([target], num_classes=cardinality)
        tar2_encoded = to_categorical([target_in], num_classes=cardinality)
        # store
        X1.append(src_encoded)
        X2.append(tar2_encoded)
        y.append(tar_encoded)
    return array(X1), array(X2), array(y)
Finally, we need to be able to decode a one-hot encoded sequence to make it readable again.
This is needed both for printing the generated target sequences and for easily comparing whether the full predicted target sequence matches the expected target sequence. The one_hot_decode() function will decode an encoded sequence.
# decode a one hot encoded string
def one_hot_decode(encoded_seq):
    return [argmax(vector) for vector in encoded_seq]
We can tie all of this together and test these functions.
A complete worked example is listed below.
from random import randint
from numpy import array
from numpy import argmax
from keras.utils import to_categorical

# generate a sequence of random integers
def generate_sequence(length, n_unique):
    return [randint(1, n_unique-1) for _ in range(length)]

# prepare data for the LSTM
def get_dataset(n_in, n_out, cardinality, n_samples):
    X1, X2, y = list(), list(), list()
    for _ in range(n_samples):
        # generate source sequence
        source = generate_sequence(n_in, cardinality)
        # define target sequence
        target = source[:n_out]
        target.reverse()
        # create padded input target sequence
        target_in = [0] + target[:-1]
        # encode
        src_encoded = to_categorical([source], num_classes=cardinality)
        tar_encoded = to_categorical([target], num_classes=cardinality)
        tar2_encoded = to_categorical([target_in], num_classes=cardinality)
        # store
        X1.append(src_encoded)
        X2.append(tar2_encoded)
        y.append(tar_encoded)
    return array(X1), array(X2), array(y)

# decode a one hot encoded string
def one_hot_decode(encoded_seq):
    return [argmax(vector) for vector in encoded_seq]

# configure problem
n_features = 50 + 1
n_steps_in = 6
n_steps_out = 3
# generate a single source and target sequence
X1, X2, y = get_dataset(n_steps_in, n_steps_out, n_features, 1)
print(X1.shape, X2.shape, y.shape)
print('X1=%s, X2=%s, y=%s' % (one_hot_decode(X1[0]), one_hot_decode(X2[0]), one_hot_decode(y[0])))
Running the example first prints the shape of the generated dataset, ensuring the 3D shape required to train the model matches our expectations.
The generated sequence is then decoded and printed to screen demonstrating both that the preparation of source and target sequences matches our intention and that the decode operation is working.
(1, 6, 51) (1, 3, 51) (1, 3, 51)
X1=[32, 16, 12, 34, 25, 24], X2=[0, 12, 16], y=[12, 16, 32]
We are now ready to develop a model for this sequence-to-sequence prediction problem.
Encoder-Decoder LSTM for Sequence Prediction
In this section, we will apply the encoder-decoder LSTM model developed in the first section to the sequence-to-sequence prediction problem developed in the second section.
The first step is to configure the problem.
# configure problem
n_features = 50 + 1
n_steps_in = 6
n_steps_out = 3
Next, we must define the models and compile the training model.
# define model
train, infenc, infdec = define_models(n_features, n_features, 128)
train.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])
Next, we can generate a training dataset of 100,000 examples and train the model.
# generate training dataset
X1, X2, y = get_dataset(n_steps_in, n_steps_out, n_features, 100000)
print(X1.shape,X2.shape,y.shape)
# train model
train.fit([X1, X2], y, epochs=1)
Once the model is trained, we can evaluate it. We will do this by making predictions for 100 source sequences and counting the number of target sequences that were predicted correctly. We will use the numpy array_equal() function on the decoded sequences to check for equality.
# evaluate LSTM
total, correct = 100, 0
for _ in range(total):
    X1, X2, y = get_dataset(n_steps_in, n_steps_out, n_features, 1)
    target = predict_sequence(infenc, infdec, X1, n_steps_out, n_features)
    if array_equal(one_hot_decode(y[0]), one_hot_decode(target)):
        correct += 1
print('Accuracy: %.2f%%' % (float(correct)/float(total)*100.0))
Finally, we will generate some predictions and print the decoded source, target, and predicted target sequences to get an idea of whether the model is working as expected.
Putting all of these elements together, the complete code example is listed below.
from random import randint
from numpy import array
from numpy import argmax
from numpy import array_equal
from keras.utils import to_categorical
from keras.models import Model
from keras.layers import Input
from keras.layers import LSTM
from keras.layers import Dense

# generate a sequence of random integers
def generate_sequence(length, n_unique):
    return [randint(1, n_unique-1) for _ in range(length)]

# prepare data for the LSTM
def get_dataset(n_in, n_out, cardinality, n_samples):
    X1, X2, y = list(), list(), list()
    for _ in range(n_samples):
        # generate source sequence
        source = generate_sequence(n_in, cardinality)
        # define padded target sequence
        target = source[:n_out]
        target.reverse()
        # create padded input target sequence
        target_in = [0] + target[:-1]
        # encode
        src_encoded = to_categorical([source], num_classes=cardinality)
        tar_encoded = to_categorical([target], num_classes=cardinality)
        tar2_encoded = to_categorical([target_in], num_classes=cardinality)
        # store
        X1.append(src_encoded)
        X2.append(tar2_encoded)
        y.append(tar_encoded)
    return array(X1), array(X2), array(y)

# returns train, inference_encoder and inference_decoder models
def define_models(n_input, n_output, n_units):
    # define training encoder
    encoder_inputs = Input(shape=(None, n_input))
    encoder = LSTM(n_units, return_state=True)
    encoder_outputs, state_h, state_c = encoder(encoder_inputs)
    encoder_states = [state_h, state_c]
    # define training decoder
    decoder_inputs = Input(shape=(None, n_output))
    decoder_lstm = LSTM(n_units, return_sequences=True, return_state=True)
    decoder_outputs, _, _ = decoder_lstm(decoder_inputs, initial_state=encoder_states)
    decoder_dense = Dense(n_output, activation='softmax')
    decoder_outputs = decoder_dense(decoder_outputs)
    model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
    # define inference encoder
    encoder_model = Model(encoder_inputs, encoder_states)
    # define inference decoder
    decoder_state_input_h = Input(shape=(n_units,))
    decoder_state_input_c = Input(shape=(n_units,))
    decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]
    decoder_outputs, state_h, state_c = decoder_lstm(decoder_inputs, initial_state=decoder_states_inputs)
    decoder_states = [state_h, state_c]
    decoder_outputs = decoder_dense(decoder_outputs)
    decoder_model = Model([decoder_inputs] + decoder_states_inputs, [decoder_outputs] + decoder_states)
    # return all models
    return model, encoder_model, decoder_model

# generate target given source sequence
def predict_sequence(infenc, infdec, source, n_steps, cardinality):
    # encode
    state = infenc.predict(source)
    # start of sequence input
    target_seq = array([0.0 for _ in range(cardinality)]).reshape(1, 1, cardinality)
    # collect predictions
    output = list()
    for t in range(n_steps):
        # predict next char
        yhat, h, c = infdec.predict([target_seq] + state)
        # store prediction
        output.append(yhat[0,0,:])
        # update state
        state = [h, c]
        # update target sequence
        target_seq = yhat
    return array(output)

# decode a one hot encoded string
def one_hot_decode(encoded_seq):
    return [argmax(vector) for vector in encoded_seq]

# configure problem
n_features = 50 + 1
n_steps_in = 6
n_steps_out = 3
# define model
train, infenc, infdec = define_models(n_features, n_features, 128)
train.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])
# generate training dataset
X1, X2, y = get_dataset(n_steps_in, n_steps_out, n_features, 100000)
print(X1.shape,X2.shape,y.shape)
# train model
train.fit([X1, X2], y, epochs=1)
# evaluate LSTM
total, correct = 100, 0
for _ in range(total):
    X1, X2, y = get_dataset(n_steps_in, n_steps_out, n_features, 1)
    target = predict_sequence(infenc, infdec, X1, n_steps_out, n_features)
    if array_equal(one_hot_decode(y[0]), one_hot_decode(target)):
        correct += 1
print('Accuracy: %.2f%%' % (float(correct)/float(total)*100.0))
# spot check some examples
for _ in range(10):
    X1, X2, y = get_dataset(n_steps_in, n_steps_out, n_features, 1)
    target = predict_sequence(infenc, infdec, X1, n_steps_out, n_features)
    print('X=%s y=%s, yhat=%s' % (one_hot_decode(X1[0]), one_hot_decode(y[0]), one_hot_decode(target)))
Running the example first prints the shape of the prepared dataset.
(100000, 6, 51) (100000, 3, 51) (100000, 3, 51)
Next, the model is fit. You should see a progress bar and the run should take less than one minute on a modern multi-core CPU.
100000/100000 [==============================] - 50s - loss: 0.6344 - acc: 0.7968
Next, the model is evaluated and the accuracy printed. We can see that the model achieves 100% accuracy on new randomly generated examples.
Accuracy: 100.00%
Finally, 10 new examples are generated and target sequences are predicted. Again, we can see that the model correctly predicts the output sequence in each case and the expected value matches the reversed first 3 elements of the source sequences.
X=[22, 17, 23, 5, 29, 11] y=[23, 17, 22], yhat=[23, 17, 22]
X=[28, 2, 46, 12, 21, 6] y=[46, 2, 28], yhat=[46, 2, 28]
X=[12, 20, 45, 28, 18, 42] y=[45, 20, 12], yhat=[45, 20, 12]
X=[3, 43, 45, 4, 33, 27] y=[45, 43, 3], yhat=[45, 43, 3]
X=[34, 50, 21, 20, 11, 6] y=[21, 50, 34], yhat=[21, 50, 34]
X=[47, 42, 14, 2, 31, 6] y=[14, 42, 47], yhat=[14, 42, 47]
X=[20, 24, 34, 31, 37, 25] y=[34, 24, 20], yhat=[34, 24, 20]
X=[4, 35, 15, 14, 47, 33] y=[15, 35, 4], yhat=[15, 35, 4]
X=[20, 28, 21, 39, 5, 25] y=[21, 28, 20], yhat=[21, 28, 20]
X=[50, 38, 17, 25, 31, 48] y=[17, 38, 50], yhat=[17, 38, 50]
You now have a template for an encoder-decoder LSTM model that you can apply to your own sequence-to-sequence prediction problems.
Further Reading
This section provides more resources on the topic if you are looking to go deeper.
- A ten-minute introduction to sequence-to-sequence learning in Keras
- Keras seq2seq Code Example (lstm_seq2seq)
- Keras Functional API
- LSTM API in Keras
Summary
In this tutorial, you discovered how to develop an encoder-decoder recurrent neural network for sequence-to-sequence prediction problems with Keras.
Specifically, you learned:
- How to correctly define a sophisticated encoder-decoder model in Keras for sequence-to-sequence prediction.
- How to define a contrived yet scalable sequence-to-sequence prediction problem that you can use to evaluate the encoder-decoder LSTM model.
- How to apply the encoder-decoder LSTM model in Keras to address the scalable integer sequence-to-sequence prediction problem.
Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.
Is this model suited for sequence regression too? For example, the shampoo sales problem?
You could try, but generally LSTMs do not perform well on autoregression problems:
https://machinelearningmastery.com/suitability-long-short-term-memory-networks-time-series-forecasting/
Hi. Is it possible to have multiple layers of LSTM in the encoder and decoder in this code? Thank you for your great blog.
Yes, but I don’t have an example. For this specific case it would require some careful re-design.
How can I extract the bottleneck layer to extract the important features with sequence data?
You could access the returned states to get the context vector, but it does not help you understand which input features are relevant/important.
Thank you Jason!
You’re welcome.
Thanks for the wonderful tutorial, Jason!
I am facing an issue, though: I tried to execute your code as is (copy-pasted it), but it throws an error:
Using TensorFlow backend.
Traceback (most recent call last):
File "C:\Users\User\Documents\pystuff\keras_auto.py", line 91, in
train, infenc, infdec = define_models(n_features, n_features, 128)
File "C:\Users\User\Documents\pystuff\keras_auto.py", line 40, in define_models
encoder = LSTM(n_units, return_state=True)
File "C:\Users\User\Anaconda3\envs\py35\lib\site-packages\keras\legacy\interfaces.py", line 88, in wrapper
return func(*args, **kwargs)
File "C:\Users\User\Anaconda3\envs\py35\lib\site-packages\keras\layers\recurrent.py", line 949, in __init__
super(LSTM, self).__init__(**kwargs)
File "C:\Users\User\Anaconda3\envs\py35\lib\site-packages\keras\layers\recurrent.py", line 191, in __init__
super(Recurrent, self).__init__(**kwargs)
File "C:\Users\User\Anaconda3\envs\py35\lib\site-packages\keras\engine\topology.py", line 281, in __init__
raise TypeError('Keyword argument not understood:', kwarg)
TypeError: ('Keyword argument not understood:', 'return_state')
I am using an anaconda environment (python 3.5.3). What could have possibly gone wrong?
Perhaps confirm that you have the most recent version of Keras and TensorFlow installed.
I had the same problem, and updated Keras (to version 2.1.2) and TensorFlow (to version 1.4.0). The problem above was solved. However, I now see that the shapes of X1, X2, and y are ((100000, 1, 6, 51), (100000, 1, 3, 51), (100000, 1, 3, 51)) instead of ((100000, 6, 51), (100000, 3, 51), (100000, 3, 51)). Why could this be?
I’m not sure, perhaps related to recent API changes in Keras?
Here’s how I fixed the problem.
At the top of the code, include this line (before any 'from numpy import' statements):
import numpy as np
Change the get_dataset() function to the following:
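Only the return statement needs to change, something like this:

# return the arrays with the redundant axis 1 squeezed out
return np.squeeze(array(X1), axis=1), np.squeeze(array(X2), axis=1), np.squeeze(array(y), axis=1)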
Notice instead of returning array(X1), array(X2), array(y), we now return arrays that have been squeezed – one axis has been removed. We remove axis 1 because it’s the wrong shape for what we need.
The output is now as it should be (although I’m getting 98% accuracy instead of 100%).
Thanks for sharing.
Perhaps confirm that you have updated Keras to 2.1.2, it fixes bugs with to_categorical()?
Hi Jason!
Are the encoder-decoder networks suitable for time series classification?
In my experience LSTMs have not proven effective at autoregression compared to MLPs.
See this post for more details:
https://machinelearningmastery.com/suitability-long-short-term-memory-networks-time-series-forecasting/
Hi Jason,
running get_dataset(), the function returns one additional dimension in the arrays X1, X2, y:
(1, 1, 6, 51) (1, 1, 3, 51) (1, 1, 3, 51).
This results from numpy.array(); before that, X1, X2, y are lists with the correct sizes:
(1, 6, 51) (1, 3, 51) (1, 3, 51).
It seems the array() command adds an additional dimension. Could you help with solving this problem?
How do I train Keras so that it identifies a capital letter and the corresponding small letter as the same?
Please suggest any tactics for it.
Start by clearly defining your problem:
http://machinelearningmastery.com/how-to-define-your-machine-learning-problem/
Could you please give me a tutorial for a case where we are input to the seq2seq model is word embeddings and outputs is also word embeddings. I find it frustrating to see a Dense layer at the end. this is what is stopping a fully seq2seq model with input as well as output word embeddings.
Why would we have a word embedding at the output layer?
If embeddings are coming in, we’d want to have embeddings going out (auto-encoder). Imagine input = category (cat, dog, frog, wale, human). These animals are quite different, so we represent w/ embedding. Rather than having a dense output of 5 OHE, if an embedding is used for output, the assumption is that, especially if weights are shared between inputs + outputs, we could train the net with and give it more detail about what exactly each class is… rather than use something like cross entropy.
Ok.
Hi! When adding dropout and recurrent_dropout as LSTM arguments on line 40 of the last complete code example, with everything else being the same, the code fails. How can I add dropout in this case? Thanks!
I give a worked example of dropout with LSTMs here:
https://machinelearningmastery.com/use-dropout-lstm-networks-time-series-forecasting/
Hi Jason! Thanks for your great effort to put encoder decoder implementations here. As Dinter mentioned, when dropout is added, the code runs well for training phase but gives following error during prediction.
InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor ‘lstm_1/keras_learning_phase’ with dtype bool
[[Node: lstm_1/keras_learning_phase = Placeholder[dtype=DT_BOOL, shape=, _device=”/job:localhost/replica:0/task:0/cpu:0″]()]]
How to fix the problem in this particular implementation?
[Note: Your worked example of dropout however worked for me, but the difference is you are adding layer by layer in sequential model there which is different than this example of encoder decoder]
Sorry to hear that, perhaps it’s a bug? See if you can reproduce the fault on a small standalone network?
When I tried exactly the same network as in the example you presented and added dropout=0.0 in line 10 and line 15 of the define_models function, the program runs, but for other values of dropout it gives an error. Changing the size of the network, for example the number of units to 5, 10, or 20, gives the same error.
Any idea how to add dropout?
line 5: encoder = LSTM(n_units, return_state=True, dropout=0.0)
line 10: decoder_lstm = LSTM(n_units, return_sequences=True, return_state=True, dropout=0.0)
See this tutorial:
https://machinelearningmastery.com/use-dropout-lstm-networks-time-series-forecasting/
Hi Jason,
what is the difference between specifying the model input:
Model( [decoder_inputs] + decoder_states_inputs,….)
and this
Model([decoder_inputs,decoder_states_inputs],…)
Does the 1st version add the elements of the decoder_states_inputs array to the corresponding elements of decoder_inputs?
Hi Jason, Thanks for sharing this tutorial. I am only confused when you are defining the model. This is the line:
train, infenc, infdec = define_models(n_features, n_features, 128)
It may be a silly question, but why is n_features used for both n_input and n_output in this case, instead of n_input equal to 6 and n_output equal to 3?
I look forward to hearing from you soon.
Thanks
Good question. The n_input and n_output arguments are the cardinality of each time step (the width of the one-hot vectors), not the sequence lengths. The Input layers are defined with shape=(None, n_features), so the number of time steps is left flexible, and the model only does one time step per call during inference, where we walk the input/output time steps manually.
I would like to see the hidden states vector. Because there are 96 training samples, there would be 96 of these (each as a vector of length 4).
I added the “return_sequences=True” in the LSTM
model = Sequential()
model.add( LSTM(4, input_shape=(1, look_back), return_sequences=True ) )
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(trainX, trainY, epochs=20, batch_size=20, verbose=2)
But, I get the error
Traceback (most recent call last):
File "", line 1, in
File "E:\ProgramData\Anaconda3\lib\site-packages\keras\models.py", line 965, in fit
validation_steps=validation_steps)
File "E:\ProgramData\Anaconda3\lib\site-packages\keras\engine\training.py", line 1593, in fit
batch_size=batch_size)
File "E:\ProgramData\Anaconda3\lib\site-packages\keras\engine\training.py", line 1430, in _standardize_user_data
exception_prefix='target')
File "E:\ProgramData\Anaconda3\lib\site-packages\keras\engine\training.py", line 110, in _standardize_input_data
'with shape ' + str(data_shape))
ValueError: Error when checking target: expected dense_24 to have 3 dimensions, but got array with shape (94, 1)
How can I make this model work, and also how can I view the hidden states for each of the input samples (should be 96 hidden states).
Thank you.
Return sequences does not return the hidden state, but instead the outcome from each time step.
This post might clear things up for you regarding outputs and hidden states:
https://machinelearningmastery.com/return-sequences-and-return-states-for-lstms-in-keras/
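As a quick sketch of the difference (a standalone toy example using the Keras functional API):

from keras.layers import Input, LSTM
inputs = Input(shape=(3, 1))
# return_sequences=True returns the output for every input time step;
# return_state=True additionally returns the final hidden state and cell state
outputs, state_h, state_c = LSTM(4, return_sequences=True, return_state=True)(inputs)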
Hi Jason,
I like your blog posts. I have some code based on this post, but I get this error message all the time. There’s something wrong with my dense layer. Can you point it out? Here units is 300 and tokens_per_sentence is 25.
an error message:
ValueError: Error when checking target: expected dense_layer_b to have 2 dimensions, but got array with shape (1, 25, 300)
this is some code:
Perhaps the data does not match the model, you could change one or the other.
I’m eager to help, but I cannot debug the code for you sorry.
Hi. So I think I needed to set 'return_sequences' to True for both lstm_a and lstm_b.
Dear Jason,
Do you think that this algorithm works for weather prediction? For example, by having the input integer variables as dew point, humidity, and temperature, to predict rainfall as output
Only for demonstration purposes.
In practice, weather forecasts are performed using simulations of physics models and are more accurate than small machine learning models.
Hi Jason.
I would like to ask you, how could I use this model with float numbers? Your training data looks like this:
[[
[0,0,0,0,1,0,0]
[0,0,1,0,0,1,0]
[0,1,0,0,0,0,0]]
.
.
.
]]
I would need something like this:
[[
[0.12354,0.9854,5875, 0.0659]
[0.12354,0.9854,5875, 0.0659]
[0.12354,0.9854,5875, 0.0659]
[0.12354,0.9854,5875, 0.0659]
]]
When I run your model with float numbers, the network doesn’t learn. Should I use a different loss function?
Thank you
The loss function is related to the output; if you have a real-valued output, consider the mse or mae loss functions.
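As a minimal sketch of those changes against the tutorial code (untested here), the output layer and the compile step would become:

# real-valued outputs: linear activation instead of softmax
decoder_dense = Dense(n_output, activation='linear')
# ...
# regression loss instead of categorical cross-entropy
train.compile(optimizer='adam', loss='mse')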
How do I add a bidirectional layer in the encoder-decoder architecture?
Use it directly on the encoder or decoder.
Here’s how to do it in Keras:
https://machinelearningmastery.com/develop-bidirectional-lstm-sequence-classification-python-keras/
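For this tutorial’s encoder, a rough sketch might look like the following (assuming a Keras version where Bidirectional returns the forward and backward states separately):

from keras.layers import Bidirectional, Concatenate
# wrap the encoder LSTM; forward and backward final states are returned separately
encoder = Bidirectional(LSTM(n_units, return_state=True))
encoder_outputs, fwd_h, fwd_c, back_h, back_c = encoder(encoder_inputs)
# concatenate the two directions; the decoder LSTM must then use 2 * n_units so the state sizes match
state_h = Concatenate()([fwd_h, back_h])
state_c = Concatenate()([fwd_c, back_c])
encoder_states = [state_h, state_c]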
Hi.
I would like to use your model with word embedding. I was inspired by https://blog.keras.io/a-ten-minute-introduction-to-sequence-to-sequence-learning-in-keras.html ->features -> word-level model
I encoded the words in sentences as integer numbers. My input is a list of sentences with 15 words each: [[1,5,7,5,6,4,5,10,15,12,11,10,8,1,2], […], […], …]
My model seems:
encoder_inputs = Input(shape=(None,))
x = Embedding(num_encoder_tokens, latent_dim)(encoder_inputs)
x, state_h, state_c = LSTM(latent_dim, return_state=True)(x)
encoder_states = [state_h, state_c]
# Set up the decoder, using encoder_states as initial state.
decoder_inputs = Input(shape=(None,))
x = Embedding(num_decoder_tokens, latent_dim)(decoder_inputs)
x = LSTM(latent_dim, return_sequences=True)(x, initial_state=encoder_states)
decoder_outputs = Dense(num_decoder_tokens, activation='softmax')(x)
# Define the model that will turn encoder_input_data & decoder_input_data into decoder_target_data
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
When I try to train the model, I get this error: expected dense_1 to have 3 dimensions, but got array with shape (10, 15). Could you help me with that, please?
Thank you
Hi,
I am trying to solve a seq2seq problem using a Keras LSTM. The predicted output words match the most frequent words of the vocabulary built from the dataset, and I am not sure what the reason could be. After training is completed, when trying to predict the output for a given input sequence, the model predicts the same output irrespective of the input sequence. I am also using separate embedding layers for the encoder and decoder.
Can you help me with what the reason could be?
Question elaborated here:
https://stackoverflow.com/questions/49638685/keras-seq2seq-model-predicting-same-output-for-all-test-inputs
Thanks.
It suggests the model has not learned the problem.
Here are a list of ideas to try:
http://machinelearningmastery.com/improve-deep-learning-performance/
Hi Jason,
In your current setup, how would you add a pre-trained embedding matrix, like glove?
Here are some examples of loading a word embedding:
https://machinelearningmastery.com/use-word-embedding-layers-deep-learning-keras/
Hi,
Could you help me with my problem? I think a lot of the required logic is implemented in your code, but I am a beginner in Python. I want to predict numbers according to an input test sequence (seq2seq), so the output of my decoder should be a sequence of 6 numbers (1-7). You can imagine it as lottery prediction. I have a very long input vector of numbers 1-7 (containing sequences of 6 numbers), so I don’t need to generate test data. I just need to predict the next 6 numbers that should be generated: 1,1,2,2,3 -> 3,4,4,5,5,6
Thank you
Perhaps here would be a good place for you to get started with LSTMs:
https://machinelearningmastery.com/start-here/#lstm
Thank you Jason, it was very helpful for me. Can you give me advice or a link on how to correctly initialize and train a network with multiple input sequences?
They can be side by side:
https://machinelearningmastery.com/multivariate-time-series-forecasting-lstms-keras/
Or separate inputs entirely:
https://machinelearningmastery.com/keras-functional-api-deep-learning/
Thank you Jason for this tutorial, it got me started. I tried to use larger sequences where the input is a series of 400 numbers and the output is 40 numbers. I have a problem with the categorical encoding because of an index error, and I don’t know how to set or get the value for cardinality/n_features. Can you give me an idea on this?
I’m also not clear if this is a classification type of model. Can you please confirm? Thanks.
You can integer encode or one hot encode categorial input features. Here is a tutorial on the topic:
https://machinelearningmastery.com/how-to-one-hot-encode-sequence-data-in-python/
This post will make the distinction between classification and regression clear:
https://machinelearningmastery.com/classification-versus-regression-in-machine-learning/
Hi,
thank you for this tutorial. I receive this error when I run your example:
ValueError: Error when checking input: expected input_1 to have 3 dimensions, but got array with shape (100000, 1, 6, 51)