Sequence Classification with LSTM Recurrent Neural Networks in Python with Keras

Last Updated on August 7, 2022

Sequence classification is a predictive modeling problem where you have some sequence of inputs over space or time, and the task is to predict a category for the sequence.

This problem is difficult because the sequences can vary in length, comprise a very large vocabulary of input symbols, and may require the model to learn the long-term context or dependencies between symbols in the input sequence.

In this post, you will discover how you can develop LSTM recurrent neural network models for sequence classification problems in Python using the Keras deep learning library.

After reading this post, you will know:

  • How to develop an LSTM model for a sequence classification problem
  • How to reduce overfitting in your LSTM models through the use of dropout
  • How to combine LSTM models with Convolutional Neural Networks that excel at learning spatial relationships

Kick-start your project with my new book Deep Learning for Natural Language Processing, including step-by-step tutorials and the Python source code files for all examples.

Let’s get started.

  • Jul/2016: First published
  • Update Oct/2016: Updated examples for Keras 1.1.0 and TensorFlow 0.10.0
  • Update Mar/2017: Updated example for Keras 2.0.2, TensorFlow 1.0.1 and Theano 0.9.0
  • Update May/2018: Updated code to use the most recent Keras API, thanks Jeremy Rutman
  • Update Jul/2022: Updated code for TensorFlow 2.x and added an example to use bidirectional LSTM
Sequence classification with LSTM recurrent neural networks in Python with Keras
Photo by photophilde, some rights reserved.

Problem Description

The problem that you will use to demonstrate sequence learning in this tutorial is the IMDB movie review sentiment classification problem. Each movie review is a variable sequence of words, and the sentiment of each movie review must be classified.

The Large Movie Review Dataset (often referred to as the IMDB dataset) contains 25,000 highly polar movie reviews (good or bad) for training and the same amount again for testing. The problem is to determine whether a given movie review has a positive or negative sentiment.

The data was collected by Stanford researchers and used in a 2011 paper that split the data 50/50 for training and testing and achieved an accuracy of 88.89%.

Keras provides built-in access to the IMDB dataset. The imdb.load_data() function allows you to load the dataset in a format ready for use in neural networks and deep learning models.

The words have been replaced by integers that indicate the ordered frequency of each word in the dataset. Each review is therefore encoded as a sequence of integers.

Word Embedding

You will map each movie review into a real vector domain, using a popular technique for working with text called word embedding. This is a technique where words are encoded as real-valued vectors in a high-dimensional space, where similarity between words in terms of meaning translates to closeness in the vector space.

Keras provides a convenient way to convert positive integer representations of words into a word embedding via an Embedding layer.

You will map each word onto a 32-length real valued vector. You will also limit the total number of words that you are interested in modeling to the 5000 most frequent words and zero out the rest. Finally, the sequence length (number of words) in each review varies, so you will constrain each review to be 500 words, truncating long reviews and padding the shorter reviews with zero values.
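As a rough sketch in Keras (a minimal illustration, not the full listing; the variable names are simply the ones assumed throughout this post), that configuration corresponds to an Embedding layer such as:

from tensorflow.keras.layers import Embedding

top_words = 5000              # keep only the 5,000 most frequent words
embedding_vector_length = 32  # each word becomes a 32-length real-valued vector
max_review_length = 500       # every review is truncated or padded to 500 words
embedding_layer = Embedding(top_words, embedding_vector_length, input_length=max_review_length)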

Now that you have defined your problem and how the data will be prepared and modeled, you are ready to develop an LSTM model to classify the sentiment of movie reviews.

Need help with LSTMs for Sequence Prediction?

Take my free 7-day email course and discover 6 different LSTM architectures (with code).

Click to sign-up and also get a free PDF Ebook version of the course.

Simple LSTM for Sequence Classification

You can quickly develop a small LSTM for the IMDB problem and achieve good accuracy.

Let’s start by importing the classes and functions required for this model and initializing the random number generator to a constant value to ensure you can easily reproduce the results.
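A minimal sketch of those imports and the fixed seed, assuming a TensorFlow 2.x installation where Keras is bundled as tensorflow.keras:

import tensorflow as tf
from tensorflow.keras.datasets import imdb
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, LSTM, Embedding
from tensorflow.keras.preprocessing import sequence
# fix the random seed for reproducibility
tf.random.set_seed(7)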

You need to load the IMDB dataset. You are constraining the dataset to the top 5,000 words. You will also split the dataset into train (50%) and test (50%) sets.
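For example, a sketch using the num_words argument of imdb.load_data(), which keeps only the most frequent words and replaces the rest with a placeholder index:

# load the dataset, keeping only the top n most frequent words
top_words = 5000
(X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=top_words)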

Next, you need to truncate and pad the input sequences, so they are all the same length for modeling. The model will learn that the zero values carry no information. The sequences are not the same length in terms of content, but same-length vectors are required to perform the computation in Keras.
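For example, using the pad_sequences() helper from Keras (a sketch; zero pre-padding is the default behavior):

# truncate longer reviews and zero-pad shorter reviews to a fixed length
max_review_length = 500
X_train = sequence.pad_sequences(X_train, maxlen=max_review_length)
X_test = sequence.pad_sequences(X_test, maxlen=max_review_length)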

You can now define, compile and fit your LSTM model.

The first layer is the Embedding layer that uses 32-length vectors to represent each word. The next layer is the LSTM layer with 100 memory units (smart neurons). Finally, because this is a classification problem, you will use a Dense output layer with a single neuron and a sigmoid activation function to make 0 or 1 predictions for the two classes (good and bad) in the problem.

Because it is a binary classification problem, log loss is used as the loss function (binary_crossentropy in Keras). The efficient ADAM optimization algorithm is used. The model is fit for only two epochs because it quickly overfits the problem. A large batch size of 64 reviews is used to space out weight updates.
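Putting that together, a sketch of the model definition, compilation, and fit (continuing from the snippets above) might look like this:

# create the model: Embedding -> LSTM -> Dense
embedding_vector_length = 32
model = Sequential()
model.add(Embedding(top_words, embedding_vector_length, input_length=max_review_length))
model.add(LSTM(100))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=2, batch_size=64)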

Once fit, you can estimate the performance of the model on unseen reviews.

For completeness, here is the full code listing for this LSTM network on the IMDB dataset.
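A sketch of such a listing, assembled from the snippets above, is given below; your exact listing may differ slightly.

# Simple LSTM for sequence classification on the IMDB dataset (sketch)
import tensorflow as tf
from tensorflow.keras.datasets import imdb
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, LSTM, Embedding
from tensorflow.keras.preprocessing import sequence
# fix the random seed for reproducibility
tf.random.set_seed(7)
# load the dataset, keeping only the top n most frequent words
top_words = 5000
(X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=top_words)
# truncate and pad the input sequences to a fixed length
max_review_length = 500
X_train = sequence.pad_sequences(X_train, maxlen=max_review_length)
X_test = sequence.pad_sequences(X_test, maxlen=max_review_length)
# define, compile, and fit the model
embedding_vector_length = 32
model = Sequential()
model.add(Embedding(top_words, embedding_vector_length, input_length=max_review_length))
model.add(LSTM(100))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
model.fit(X_train, y_train, epochs=2, batch_size=64)
# final evaluation of the model on the test set
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1] * 100))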

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and comparing the average outcome.

Running this example produces the following output.

You can see that this simple LSTM with little tuning achieves near state-of-the-art results on the IMDB problem. Importantly, this is a template that you can use to apply LSTM networks to your own sequence classification problems.

Now, let’s look at some extensions of this simple model that you may also want to bring to your own problems.

LSTM for Sequence Classification with Dropout

Recurrent neural networks like LSTM generally have the problem of overfitting.

Dropout can be applied between layers using the Dropout Keras layer. You can do this easily by adding new Dropout layers between the Embedding and LSTM layers and the LSTM and Dense output layers. For example:
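A sketch of the model definition with the two Dropout layers added (assuming Dropout is imported from tensorflow.keras.layers):

from tensorflow.keras.layers import Dropout

model = Sequential()
model.add(Embedding(top_words, embedding_vector_length, input_length=max_review_length))
model.add(Dropout(0.2))   # dropout between the Embedding and LSTM layers
model.add(LSTM(100))
model.add(Dropout(0.2))   # dropout between the LSTM and Dense output layers
model.add(Dense(1, activation='sigmoid'))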

The full code listing example above with the addition of Dropout layers is as follows:

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and comparing the average outcome.

Running this example provides the following output.

You can see dropout having the desired impact on training with a slightly slower trend in convergence and, in this case, a lower final accuracy. The model could probably use a few more epochs of training and may achieve a higher skill (try it and see).

Alternatively, dropout can be applied to the input and recurrent connections of the memory units within the LSTM, precisely and separately.

Keras provides this capability with parameters on the LSTM layer: dropout for configuring the input dropout and recurrent_dropout for configuring the recurrent dropout. For example, you can modify the first example to add dropout to the input and recurrent connections as follows:
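A sketch of the modified model definition:

model = Sequential()
model.add(Embedding(top_words, embedding_vector_length, input_length=max_review_length))
model.add(LSTM(100, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))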

The full code listing with more precise LSTM dropout is listed below for completeness.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and comparing the average outcome.

Running this example provides the following output.

You can see that the LSTM-specific dropout has a more pronounced effect on the convergence of the network than the layer-wise dropout. Like above, the number of epochs was kept constant and could be increased to see if the skill of the model could be further lifted.

Dropout is a powerful technique for combating overfitting in your LSTM models, and it is a good idea to try both methods. Still, you may get better results with the gate-specific dropout provided in Keras.

Bidirectional LSTM for Sequence Classification

Sometimes, a sequence is better used in reversed order. In those cases, you can simply reverse a vector x using the Python syntax x[::-1] before using it to train your LSTM network.

Sometimes, neither the forward nor the reversed order works perfectly, but combining them will give better results. In this case, you will need a bidirectional LSTM network.

A bidirectional LSTM network is simply two separate LSTM networks: one is fed the input sequence in the forward direction, and the other is fed it in reverse. The outputs of the two LSTM networks are then concatenated before being fed to the subsequent layers of the network. In Keras, the Bidirectional() wrapper duplicates an LSTM layer so the input is processed both forward and backward, and their outputs are concatenated. For example,
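A sketch of the model definition with the Bidirectional() wrapper (imported from tensorflow.keras.layers), applied to the dropout-configured LSTM from the previous example:

from tensorflow.keras.layers import Bidirectional

model = Sequential()
model.add(Embedding(top_words, embedding_vector_length, input_length=max_review_length))
model.add(Bidirectional(LSTM(100, dropout=0.2, recurrent_dropout=0.2)))
model.add(Dense(1, activation='sigmoid'))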

Since you created not one, but two LSTMs with 100 units each, this network will take twice the amount of time to train. Depending on the problem, this additional cost may be justified.

The full code listing with the bidirectional LSTM added to the last example is provided below for completeness.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and comparing the average outcome.

Running this example provides the following output.

It seems you can only get a slight improvement but with a significantly longer training time.

LSTM and Convolutional Neural Network for Sequence Classification

Convolutional neural networks excel at learning the spatial structure in input data.

The IMDB review data does have a one-dimensional spatial structure in the sequence of words in reviews, and the CNN may be able to pick out invariant features for good and bad sentiment. These learned spatial features may then be learned as sequences by an LSTM layer.

You can easily add a one-dimensional CNN and max pooling layers after the Embedding layer, which then feeds the consolidated features to the LSTM. You can use a smallish set of 32 features with a small filter length of 3. The pooling layer can use the standard length of 2 to halve the feature map size.

For example, you would create the model as follows:
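A sketch of the model with a Conv1D and MaxPooling1D layer (imported from tensorflow.keras.layers) inserted between the Embedding and LSTM layers:

from tensorflow.keras.layers import Conv1D, MaxPooling1D

model = Sequential()
model.add(Embedding(top_words, embedding_vector_length, input_length=max_review_length))
model.add(Conv1D(filters=32, kernel_size=3, padding='same', activation='relu'))
model.add(MaxPooling1D(pool_size=2))
model.add(LSTM(100))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])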

The full code listing with CNN and LSTM layers is listed below for completeness.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and comparing the average outcome.

Running this example provides the following output.

You can see that you achieve slightly better results than the first example, although with fewer weights and faster training time.

You might expect that even better results could be achieved if this example was further extended to use dropout.

Resources

Below are some resources if you are interested in diving deeper into sequence prediction or this specific example.

Summary

In this post, you discovered how to develop LSTM network models for sequence classification predictive modeling problems.

Specifically, you learned:

  • How to develop a simple single-layer LSTM model for the IMDB movie review sentiment classification problem
  • How to extend your LSTM model with layer-wise and LSTM-specific dropout to reduce overfitting
  • How to combine the spatial structure learning properties of a Convolutional Neural Network with the sequence learning of an LSTM

Do you have any questions about sequence classification with LSTMs or this post? Ask your questions in the comments, and I will do my best to answer.

691 Responses to Sequence Classification with LSTM Recurrent Neural Networks in Python with Keras

  1. Avatar
    Atlant July 29, 2016 at 7:15 pm #

    It’s great!

  2. Avatar
    Sahil July 30, 2016 at 9:34 pm #

    Hey Jason,

    Congrats brother, for continuous great and easy to adapt/understanding lessons. I am just curious to know unsupervised and reinforced neural nets, any tutorials you have?

    Regards,
    Sahil

    • Avatar
      Jason Brownlee July 31, 2016 at 7:09 am #

      Thanks Sahil.

      Sorry, no tutorials on unsupervised learning or reinforcement learning with neural nets just yet. Soon though.

  3. Avatar
    Søren Pallesen August 1, 2016 at 1:43 am #

    Hi, great stuff you are publishing here thanks.

    Would this network architecture work for predicting profitability of a stock based time series data of the stock price.

    For example with data samples of daily stock prices and trading volumes with 5 minute intervals from 9.30am to 1pm paired with YES or NO to the stockprice increasing by more than 0.5% the rest of the trading day?

    Each trading day is one sample, and the entire data set would, for example, be the last 1000 trading days.

    If this network architecture is not suitable, what others would you suggest testing out?

    Again, thanks for this super resource.

    • Avatar
      Jason Brownlee August 1, 2016 at 6:24 am #

      Thanks Søren.

      Sure, it would be worth trying, but I am not an expert on the stock market.

  4. Avatar
    Naufal August 12, 2016 at 2:50 pm #

    So, the end result of this tutorial is a model. Could you give me an example how to use this model to predict a new review, especially using new vocabularies that don’t present in training data? Many thanks..

    • Avatar
      Jason Brownlee August 15, 2016 at 12:31 pm #

      I don’t have an example Naufal, but the new example would have to encode words using the same integers and embed the integers into the same word mapping.

      • Avatar
        Faraaz Mohammed March 13, 2017 at 7:21 pm #

        Thanks Jason for excellent article.
        to predict i did below things, please correct i am did wrong. you said to embed..i didnt get that. how to do that.

        text = numpy.array(['this is excellent sentence'])
        #print(text.shape)
        tk = keras.preprocessing.text.Tokenizer(nb_words=2000, lower=True, split=" ")
        tk.fit_on_texts(text)
        prediction = model.predict(numpy.array(tk.texts_to_sequences(text)))
        print(prediction)

        • Avatar
          Faraaz Mohammed March 13, 2017 at 9:43 pm #

          Thanks Jason for excellent article.
          to predict i did below things, please correct i am did wrong. you said to embed..i didnt get that. how to do that.

          text = numpy.array(['this is excellent sentence'])
          #print(text.shape)
          tk = keras.preprocessing.text.Tokenizer(nb_words=2000, lower=True, split=" ")
          tk.fit_on_texts(text)
          prediction = model.predict(sequence.pad_sequences(tk.texts_to_sequences(text), maxlen=max_review_length))
          print(prediction)

          • Avatar
            Gopichand March 9, 2018 at 9:51 pm #

            You can use below code to predict sentiment of new reviews..

            However, it will simply skip words out of its vocabulary..
            Also, you can try increasing “top_words” value before training so that u can cover more number of words.

          • Avatar
            Jason Brownlee March 10, 2018 at 6:27 am #

            Thanks for sharing!

        • Avatar
          Jason Brownlee March 14, 2017 at 8:14 am #

          Embed refers to the word embedding layer:
          https://keras.io/layers/embeddings/

    • Avatar
      Aviral Goyal March 19, 2018 at 8:49 am #

      def conv_to_proper_format(sentence):
          sentence = text.text_to_word_sequence(sentence, filters='!"#$%&()*+,-./:;?@[\\]^_`{|}~\t\n', lower=True, split=" ")
          sentence = numpy.array([word_index[word] if word in word_index else 0 for word in sentence])  # encode as a sequence of integers
          sentence[sentence > 5000] = 2
          L = 500 - len(sentence)
          sentence = numpy.pad(sentence, (L, 0), 'constant')
          sentence = sentence.reshape(1, -1)
          return sentence
      Use this function on ur review to convert into proper format and then model.predict(review1) will give u answer.

  5. Avatar
    Joey August 24, 2016 at 6:45 am #

    Hello Jason! Great tutorials!

    When I attempt this tutorial, I get the error message from imdb.load_data :

    TypeError: load_data() got an unexpected keyword argument ‘test_split’

    I tried copying and pasting the entire source code but this line still had the same error.

    Can you think of any underlying reason that this is not executing for me?

    • Avatar
      Jason Brownlee August 24, 2016 at 8:33 am #

      Sorry to hear that Joey. It looks like a change with Keras v1.0.7.

      I get the same error if I run with version 1.0.7. I can see the API doco still refers to the test_split argument here: https://keras.io/datasets/#imdb-movie-reviews-sentiment-classification

      I can see that the argument was removed from the function here:
      https://github.com/fchollet/keras/blob/master/keras/datasets/imdb.py

      Option 1) You can remove the argument from the function to use the default test 50/50 split.

      Option 2) You can downgrade Keras to version 1.0.6:

      Remember you can check your Keras version on the command line with:
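      For example, from Python (the keras package exposes its version as keras.__version__):

      import keras
      print(keras.__version__)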

      I will look at updating the example to be compatible with the latest Keras.

      • Avatar
        Joey August 25, 2016 at 4:27 am #

        I got it working! Thanks so much for all of the help Jason!

        • Avatar
          Jason Brownlee August 25, 2016 at 5:07 am #

          Glad to hear it Joey.

          • Avatar
            Jason Brownlee October 7, 2016 at 2:22 pm #

            I have updated the examples in the post to match Keras 1.1.0 and TensorFlow 0.10.0.

  6. Avatar
    Chong Wang August 29, 2016 at 11:13 am #

    Hi, Jason.

    A quick question:
    Based on my understanding, padding zero in front is like labeling ‘START’. Otherwise it is like labeling ‘END’. How should I decide ‘pre’ padding or ‘post’ padding? Does it matter?

    Thanks.

    • Avatar
      Jason Brownlee August 30, 2016 at 8:24 am #

      I don’t think I understand the question, sorry Chong.

      Consider trying both padding approaches on your problem and see what works best.

      • Avatar
        Chong Wang October 6, 2016 at 7:49 am #

        Hi, Jason.

        Thanks for your reply.

        I have another quick question in section “LSTM For Sequence Classification With Dropout”.

        model.add(Embedding(top_words, embedding_vector_length, input_length=max_review_length, dropout=0.2))
        model.add(Dropout(0.2))

        Here I see two dropout layers. The second one is easy to understand: For each time step, It just randomly deactivates 20% numbers in the output embedding vector.

        The first one confuses me: Does it do dropout on the input? For each time step, the input of the embedding layers should be only one index of the top words. In other words, the input is one single number. How can we dropout it? (Or do you mean drop the input indices of 20% time steps?)

        • Avatar
          Jason Brownlee October 6, 2016 at 9:50 am #

          Great question, I believe it drops out weights from the input nodes from the embedded layer to the hidden layer.

          You can learn more about dropout here:
          https://machinelearningmastery.mystagingwebsite.com/dropout-regularization-deep-learning-models-keras/

          • Avatar
            Kuow January 12, 2017 at 4:10 pm #

            Can the dropout applied in the Embedding layer be thought of as randomly removing a word in a sentence and forcing the classification not to rely on any word?

          • Avatar
            Jason Brownlee January 13, 2017 at 9:09 am #

            I don’t see why not – off the cuff.

        • Avatar
          Kevin February 13, 2019 at 12:37 pm #

          Why did you say the input is a number? It should be a sentence transformed to it’s word embedding. For example, if length of embedding vector is 50 and sentence has at most 500 words, this will be a (500,50) matrix. I think, what is does is to drop some features in the embedding vector, out of total of 50.

    • Avatar
      Li Yu July 16, 2019 at 2:09 pm #

      Hi,

      It may be a late reply, but I would like to share my thinkings on prepadding. The reason for prepadding instead of postpadding is that for recurrent neural networks such as LSTMs, words appear earlier gets less updates, whereas words appear most recently will have a bigger impact on weight updates, according to the chain rule. Padding zeros at begining of a sequence will let rear content be better learned.

      Li

  7. Avatar
    Harish August 30, 2016 at 8:10 pm #

    Hi Jason

    Thanks for providing such easy explanations for these complex topics.

    In this tutorial, Embedding layer is used as the input layer as the data is a sequence of words.

    I am working on a problem where I have a sequence of images as an example and a particular label is assigned to each example. The number of images in the sequence will vary from example to example. I have the following questions:
    1) Can I use a LSTM layer as an input layer?

    2) If the input layer is a LSTM layer, is there still a need to specify the max_len (which is constraint mentioning the maximum number of images an example can have)

    Thanks in advance.

    • Avatar
      Jason Brownlee August 31, 2016 at 9:28 am #

      Interesting problem Harish.

      I would caution you to consider a suite of different ways of representing this problem, then try a few to see what works.

      My gut suggests using CNNs on the front end for the image data and then an LSTM in the middle and some dense layers on the backend for transforming the representation into a prediction.

      I hope that helps.

      • Avatar
        Harish August 31, 2016 at 3:31 pm #

        Thanks you very much Jason.

        Can you please let me know how to deal with sequences of different length without padding in this problem. If padding is required, how to choose the max. length for padding the sequence of images.

        • Avatar
          Jason Brownlee September 1, 2016 at 7:56 am #

          Padding is required for sequences of variable length.

          Choose a max length based on all the data you have available to evaluate.

          • Avatar
            Harish September 1, 2016 at 5:12 pm #

            Thank you for your time and suggestion Jason.

            Can you please explain what masking the input layer means and how can it be used to handle padding in keras.

    • Avatar
      Sreekar Reddy September 5, 2016 at 10:37 pm #

      Hi Harish,
      I am working on a similar problem and would like to know if you continued on this problem? What worked and what did not?

      Thanks in advance

  8. Avatar
    Gciniwe September 1, 2016 at 6:26 am #

    Hi Jason,

    Thanks for this tutorial. It’s so helpful! I would like to adapt this to my own problem. I’m working on a problem where I have a sequence of acoustic samples. The sequences vary in length, and I know the identity of the individual/entity producing the signal in each sequence. Since these sequences have a temporal element to them, (each sequence is a series in time and sequences belonging to the same individual are also linked temporally), I thought LSTM would be the way to go.
    According to my understanding, the Embedding layer in this tutorial works to add an extra dimension to the dataset since the LSTM layer takes in 3D input data.

    My question is is it advisable to use LSTM layer as a first layer in my problem, seeing that Embedding wouldn’t work with my non-integer acoustic samples? I know that in order to use LSTM as my first layer, I have to somehow reshape my data in a meaningful way so that it meets the requirements of the inputs of LSTM layer. I’ve already padded my sequences so my dataset is currently a 2D tensor. Padding with zeros however was not ideal because some of the original acoustic sample values are zero, representing a zero-pressure level. So I’ve manually padded using a different number.

    I’m planning to use a stack of LSTM layers and a Dense layer at the end of my Sequential model.

    P.s. I’m new to Keras. I’d appreciate any advice you can give.

    Thank you

    • Avatar
      Jason Brownlee September 1, 2016 at 8:03 am #

      I’m glad it was useful Gciniwe.

      Great question and hard to answer. I would caution you to review some literature for audio-based applications of LSTMs and CNNs and see what representations were used. The examples I’ve seen have been (sadly) trivial.

      Try LSTM as the first layer, but also experiment with CNN (1D) then LSTM for additional opportunities to pull out structure. Perhaps also try Dense then LSTM. I would use one or more Dense on the output layers.

      Good luck, I’m very interested to hear what you come up with.

    • Avatar
      Harish September 1, 2016 at 4:07 pm #

      Hi Gciniwe

      Its interesting to see that I am also working on a similar problem. I work on speech and image processing. I have a small doubt. Please may I know how did you choose the padding values. Because in images also, we will have zeros and unable to understand how to do padding.

      Thanks in advance

  9. Avatar
    nick September 20, 2016 at 2:16 am #

    When i run the above code , i am getting the following error
    :MemoryError: alloc failed
    Apply node that caused the error: Alloc(TensorConstant{(1L, 1L, 1L) of 0.0}, TensorConstant{24}, Elemwise{Composite{((i0 * i1) // i2)}}[(0, 0)].0, TensorConstant{280})
    Toposort index: 145
    Inputs types: [TensorType(float32, (True, True, True)), TensorType(int64, scalar), TensorType(int64, scalar), TensorType(int64, scalar)]
    Inputs shapes: [(1L, 1L, 1L), (), (), ()]
    Inputs strides: [(4L, 4L, 4L), (), (), ()]
    Inputs values: [array([[[ 0.]]], dtype=float32), array(24L, dtype=int64), array(-450L, dtype=int64), array(280L, dtype=int64)]
    Outputs clients: [[IncSubtensor{Inc;:int64:}(Alloc.0, Subtensor{::int64}.0, Constant{24}), IncSubtensor{InplaceInc;int64::}(Alloc.0, IncSubtensor{Inc;:int64:}.0, Constant{0}), forall_inplace,cpu,grad_of_scan_fn}(TensorConstant{24}, Elemwise{tanh}.0, Subtensor{int64:int64:int64}.0, Alloc.0, Elemwise{Composite{(i0 – sqr(i1))}}.0, Subtensor{int64:int64:int64}.0, Subtensor{int64:int64:int64}.0,
    any idea why? i am using theano 0.8.2 and keras 1.0.8

    • Avatar
      Jason Brownlee September 20, 2016 at 8:34 am #

      I’m sorry to hear that Nick, I’ve not seen this error.

      Perhaps try the Theano backend and see if that makes any difference?

      • Avatar
        Shristi Baral November 9, 2016 at 9:57 pm #

        I got the same problem and I have no clue how to solve it..

  10. Avatar
    Deepak October 3, 2016 at 2:41 am #

    Hi Jason,

    I have one question. Can I use RNN LSTM for Time Series Sales Analysis. I have only one input every day sales of last one year. so total data points is around 278 and I want to predict for next 6 months. Will this much data points is sufficient for using RNN techniques.. and also can you please explain what is difference between LSTM and GRU and where to USE LSTM or GRU

    • Avatar
      Jason Brownlee October 3, 2016 at 5:21 am #

      Hi Deepak, My advice would be to try LSTM on your problem and see.

      You may be better served using simpler statistical methods to forecast 60 months of sales data.

  11. Avatar
    Corne Prinsloo October 13, 2016 at 5:59 pm #

    Jason, this is great. Thanks!

    I would also love to see some unsupervised learning to know how it works and what the applications are.

    • Avatar
      Jason Brownlee October 14, 2016 at 8:59 am #

      Hi Corne,

      I tend not to write tutorials on unsupervised techniques (other than feature selection) as I do not find methods like clustering useful in practice on predictive modeling problems.

  12. Avatar
    Jeff Wu October 14, 2016 at 5:49 am #

    Thanks for writing this tutorial. It’s very helpful. Why do LSTMs not require normalization of their features’ values?

    • Avatar
      Jason Brownlee October 14, 2016 at 9:09 am #

      Hi Jeff, great question.

      Often you can get better performance with neural networks when the data is scaled to the range of the transfer function. In this case we use a sigmoid within the LSTMs so we find we get better performance by normalizing input data to the range 0-1.

      I hope that helps.

      • Avatar
        Yuri July 8, 2017 at 5:37 am #

        Hi Jason, thanks for a great tutorial!

        I am trying to normalize the data, basically dividing each element in X by the largest value (in this case 5000), since X is in range [0, 5000]. And I get much worse performance. Any idea why? Thanks!

  13. Avatar
    Lau MingFei October 19, 2016 at 10:21 pm #

    Hi, Jason! Your tutorial is very helpful. But I still have a question about using dropouts in the LSTM cells. What is the difference of the actual effects of droupout_W and dropout_U? Should I just set them the same value in most cases? Could you recommend any paper related to this topic? Thank you very much!

    • Avatar
      Jason Brownlee October 20, 2016 at 8:38 am #

      I would refer you to the API Lau:
      https://keras.io/layers/recurrent/#lstm

      dropout_W: float between 0 and 1. Fraction of the input units to drop for input gates.
      dropout_U: float between 0 and 1. Fraction of the input units to drop for recurrent connections.

      Generally, I recommend testing different values and see what works. In practice setting them to the same values might be a good starting point.

  14. Avatar
    Jeff October 24, 2016 at 10:16 pm #

    Hello,
    thanks for the nice article. I have a question about the data encoding: “The words have been replaced by integers that indicate the ordered frequency of each word in the dataset”.

    What exactly does ordered frequency mean? For instance, is the most frequent word encoded as 0 or 4999 in the end?

    • Avatar
      Jason Brownlee October 25, 2016 at 8:23 am #

      Great question Jeff.

      I believe the most frequent word is 1.

      I believe 0 was left for use as padding or when we want to trim low frequency words.

  15. Avatar
    Mazen October 25, 2016 at 12:27 am #

    Thank you for your very useful posts.
    I have a question.
    In the last example (CNN&LSTM), It’s clear that we gained a faster training time, but how can we know that CNN is suitable here for this problem as a prior layer to LSTM. What does the spatial structure here mean? So, If I understand how to decide whether a dataset X has a spatial structure, then will this be a suitable clue to suggest a prior CNN to LSTM layer in a sequence-based problem?

    Thanks,
    Mazen

    • Avatar
      Jason Brownlee October 25, 2016 at 8:28 am #

      Hi Mazen,

      The spatial structure is the order of words. To the CNN, they are just a sequence of numbers, but we know that that sequence has structure – the words (numbers used to represent words) and their order matter.

      Model selection is hard. Often you want to pick the model that has the mix of the best performance and lowest complexity (easy to understand, maintain, retrain, use in production).

      Yes, if a problem has some spatial structure (image, text, etc.) try a method that preserves that structure, like a CNN.

  16. Avatar
    Eduardo November 8, 2016 at 3:31 am #

    Hi Jason, great post!

    I have been trying to use your experiment to classify text that come from several blogs for gender classification. However, I am getting a low accuracy close to 50%. Do you have any suggestions in terms of how I could pre-process my data to fit in the model? Each blog text has approximately 6000 words and i am doing some research know to see what I can do in terms of pre-processing to apply to your model.

    Thanks

    • Avatar
      Jason Brownlee November 8, 2016 at 9:57 am #

      Wow, cool project Eduardo.

      I wonder if you can cut the problem back to just the first sentence or first paragraph of the post.

      I wonder if you can use a good word embedding.

      I also wonder if you can use a CNN instead of LSTM to make the classification – or at least compare CNN alone to CNN + LSTM and double down on what works best.

      Generally, here is a ton of advice for improving performance on deep learning problems:
      https://machinelearningmastery.mystagingwebsite.com/improve-deep-learning-performance/

  17. Avatar
    Emma November 11, 2016 at 4:24 pm #

    Hi Jason,

    Thank you for your time for this very helpful tutorial.
    I was wondering if you would have considered to randomly shuffle the data prior to each epoch of training?

    Thanks

  18. Avatar
    Shashank November 11, 2016 at 4:51 pm #

    Hi Jason,

    Can you please show how to convert all the words to integers so that they are ready to be feed into keras models?

    Here in IMDB they are directly working on integers but I have a problem where I have got many rows of text and I have to classify them(multiclass problem).

    Also in LSTM+CNN i am getting an error:

    ERROR (theano.gof.opt): Optimization failure due to: local_abstractconv_check
    ERROR (theano.gof.opt): node: AbstractConv2d{border_mode=’half’, subsample=(1, 1), filter_flip=True, imshp=(None, None, None, None), kshp=(None, None, None, None)}(DimShuffle{0,2,1,x}.0, DimShuffle{3,2,0,1}.0)
    ERROR (theano.gof.opt): TRACEBACK:
    ERROR (theano.gof.opt): Traceback (most recent call last):
    File “C:\Anaconda2\lib\site-packages\theano\gof\opt.py”, line 1772, in process_node
    replacements = lopt.transform(node)
    File “C:\Anaconda2\lib\site-packages\theano\tensor\nnet\opt.py”, line 402, in local_abstractconv_check
    node.op.__class__.__name__)
    AssertionError: AbstractConv2d Theano optimization failed: there is no implementation available supporting the requested options. Did you exclude both “conv_dnn” and “conv_gemm” from the optimizer? If on GPU, is cuDNN available and does the GPU support it? If on CPU, do you have a BLAS library installed Theano can link against?

    I am running keras in windows with Theano backend and CPU only.

    Thanks

  19. Avatar
    Thang Le November 14, 2016 at 4:16 am #

    Hi Jason,

    Can you tell me how the IMDB database contains its data please? Text or vector?

    Thanks.

    • Avatar
      Jason Brownlee November 14, 2016 at 7:45 am #

      Hi Thang Le, the IMDB dataset was originally text.

      The words were converted to integers (one int for each word), and we model the data as fixed-length vectors of integers. Because we work with fixed-length vectors, we must truncate and/or pad the data to this fixed length.

      • Avatar
        Le Thang November 14, 2016 at 2:03 pm #

        Thank you Jason!

        So when we call (X_train, y_train), (X_test, y_test) = imdb.load_data(), X_train[i] will be vector. And if it is vector then how can I convert my text data to vector to use in this?

        • Avatar
          Jason Brownlee November 15, 2016 at 7:40 am #

          Hi Le Thang, great question.

          You can convert each character to an integer. Then each input will be a vector of integers. You can then use an Embedding layer to convert your vectors of integers to real-valued vectors in a projected space.

  20. Avatar
    Quan Xiu November 14, 2016 at 6:36 pm #

    Hi Jason,

    As I understand, X_train is a variable sequence of words in movie review for input then what does Y_train stand for?

    Thank you!

    • Avatar
      Jason Brownlee November 15, 2016 at 7:53 am #

      Hi Quan Xiu, Y is the output variables and Y_train are the output variables for the training dataset.

      For this dataset, the output values are movie sentiment values (positive or negative sentiment).

      • Avatar
        Quan Xiu November 15, 2016 at 2:38 pm #

        Thank you Jason,

        So when we take X_test as input, the output will be compared to y_test to compute the accuracy, right?

        • Avatar
          Jason Brownlee November 16, 2016 at 9:24 am #

          Yes Quan Xiu, the predictions made by the model are compared to y_test.

  21. Avatar
    Herbert Kruitbosch November 22, 2016 at 7:47 pm #

    The performance of this LSTM-network is lower than TFIDF + Logistic Regression:

    https://gist.github.com/prinsherbert/92313f15fc814d6eed1e36ab4df1f92d

    Are you sure the hidden states aren’t just counting words in a very expensive manner?

    • Avatar
      Jason Brownlee November 23, 2016 at 8:55 am #

      It’s true that this example is not tuned for optimal performance Herbert.

      • Avatar
        Herbert Kruitbosch November 23, 2016 at 8:57 pm #

        This leaves a rather important question, does it actually learn more complicated features than word-counts? And do LSTM’s do so in general? Obviously there is literature out there on this topic, but I think your post is somewhat misleading w.r.t. power of LSTM’s. It would be great to see an example where an LSTM outperforms a TFIDF, and give an idea about the type and size of the data that you need. (Thank you for the quick reply though 🙂 )

        LSTM’s are only neat if they actually remember contextual things, not if they just fit simple models and take a long time to do so.

        • Avatar
          Jason Brownlee November 24, 2016 at 10:39 am #

          I agree Herbert.

          LSTMs are hard to use. Initially, I wanted to share how to get up and running with the technique. I aim to come back to this example and test new configurations to get more/most from the method.

          • Avatar
            Herbert Kruitbosch December 8, 2016 at 12:29 am #

            That would be great! It would also be nice to get an idea about the size of data needed for good performance (and of course, there are thousands of other open questions :))

  22. Avatar
    Huy Huynh November 23, 2016 at 4:08 am #

    Many thank your post, Jason. It’s helpful

    I have some short questions. First, I feel nervous when chose hyperparameter for the model such as length vectors (32), a number of Embedding unit (500), a number of LSTM unit(100), most frequent words(5000). It depends on dataset, doesn’t it? How can we choose parameter?

    Second, I have dataset about news daily for predicting the movement of price stock market. But, each news seems more words than each comment imdb dataset. Average each news about 2000 words, can you recommend me how I can choose approximate hyperparameter.

    Thank you, (P/s sorry about my English if have any mistake)

    • Avatar
      Jason Brownlee November 23, 2016 at 9:03 am #

      Hi Huy,

      We have to choose something. It is good practice to grid search over each of these parameters and select for best performance and model robustness.

      Perhaps you can work with the top n most common words only.
      Perhaps you can use a projection or embedding of the article.
      Perhaps you can use some classical NLP methods on the text first.

      • Avatar
        Huy Huynh November 24, 2016 at 3:47 am #

        Thank you for your quick response,

        I am a newbie in Deep Learning, It seems really difficult to choose relevant parameters.

    • Avatar
      Ben H October 12, 2020 at 9:16 am #

      How do you get to the 16,750? 25,000/64 batches is 390.

      Thanks!

  23. Avatar
    Huy Huynh November 23, 2016 at 4:16 am #

    According to my understanding, When training, the number of epoch often more than 100 to evaluate supervised machine learning result. But, In your example or Keras sample, It’s only between 3-15 epochs. Can you explain about that?
    Thanks,

    • Avatar
      Jason Brownlee November 23, 2016 at 9:03 am #

      Epochs can vary from algorithm and problem. There are no rules Huy, let results guide everything.

      • Avatar
        Huy Huynh November 24, 2016 at 3:49 am #

        So, How we can choose the relevant number of epochs?

        • Avatar
          Jason Brownlee November 24, 2016 at 10:41 am #

          Trial and error on your problem, and carefully watch the learning rate on your training and validation datasets.

  24. Avatar
    Søren Pallesen November 27, 2016 at 8:08 pm #

    Im looking for benchmarks of LSTM networks on Keras with known/public datasets.

    Could you share what hardware configuration the examples in this post was run on (GPU/CPU/RAM etc)?

    Thx

  25. Avatar
    Mike November 30, 2016 at 11:41 am #

    Is it possible in Keras to obtain the classifier output as each word propagates through the network?

    • Avatar
      Jason Brownlee December 1, 2016 at 7:14 am #

      Hi Mike, you can make one prediction at a time.

      Not sure about seeing how the weights propagate through – I have not done this myself with Keras.

  26. Avatar
    lim December 9, 2016 at 4:50 am #

    Hi,

    What are some of the changes you have to make in your binary classification model to work for the multi-label classification?

    • Avatar
      lim December 9, 2016 at 11:03 am #

      also instead of a given input data such as imdb in number digit format, what steps do you take to process your raw text format dataset to make it compatible like imdb?

  27. Avatar
    Hossein December 9, 2016 at 9:19 am #

    Great Job Jason.

    I liked it very much…
    I would really appreciate it if you tell me how we can do Sequence Clustering with LSTM Recurrent Neural Networks (Unsupervised learning task).

    • Avatar
      Jason Brownlee December 10, 2016 at 8:01 am #

      Sorry, I have not used LSTMs for clustering. I don’t have good advice for you.

  28. Avatar
    ryan December 10, 2016 at 8:56 pm #

    Hi Jason,

    Your book is really helpful for me. I have a question about time sequence classifier. Let’s say, I have 8 classes of time sequence data, each class has 200 training data and 50 validation data, how can I estimate the classification accuracy based on all the 50 validation data per class (sth. like log-maximum likelihood) using scikit-learn package or sth. else? It would be very appreciated that you could give me some advice. Thanks a lot in advance.

    Best regards,
    Ryan

  29. Avatar
    Shashank December 12, 2016 at 5:09 pm #

    Hi Jason,

    Which approach is better Bags of words or word embedding for converting text to integer for correct and better classification?

    I am a little confused in this.

    Thanks in advance

    • Avatar
      Jason Brownlee December 13, 2016 at 8:05 am #

      Hi Shashank, embeddings are popular at the moment. I would suggest both and see what representation works best for you.

  30. Avatar
    Mango December 19, 2016 at 1:34 am #

    Hi Jason, thank you for your tutorials, I find them very clear and useful, but I have a little question when I try to use it to another problem setting..

    as is pointed out in your post, words are embedding as vectors, and we feed a sequence of vectors to the model, to do classification.. as you mentioned cnn to deal with the implicit spatial relation inside the word vector(hope I got it right), so I have two questions related to this operation:

    1. Is the Embedding layer specific to word, that said, keras has its own vocabulary and similarity definition to treat our feeded word sequence?

    2. What if I have a sequence of 2d matrix, something like an image, how should I transform them to meet the required input shape to the CNN layer or directly the LSTM layer? For example, combined with your tutorial for the time series data, I got an trainX of size (5000, 5, 14, 13), where 5000 is the length of my samples, and 5 is the look_back (or time_step), while I have a matrix instead of a single value here, but I think I should use my specific Embedding technique here so I could pass a matrix instead of a vector before an CNN or a LSTM layer….

    Sorry if my question is not described well, but my intention is really to get the temporal-spatial connection lie in my data… so I want to feed into my model with a sequence of matrix as one sample.. and the output will be one matrix..

    thank you for your patience!!

  31. Avatar
    Banbhrani December 19, 2016 at 7:04 pm #

    33202176/33213513 [============================>.] – ETA: 0s 19800064/33213513 [================>………….] – ETA: 207s – ETA: 194s____________________________________________________________________________________________________
    Layer (type) Output Shape Param # Connected to
    ====================================================================================================
    embedding_1 (Embedding) (None, 500, 32) 160000 embedding_input_1[0][0]
    ____________________________________________________________________________________________________
    lstm_1 (LSTM) (None, 100) 53200 embedding_1[0][0]
    ____________________________________________________________________________________________________
    dense_1 (Dense) (None, 1) 101 lstm_1[0][0]
    ====================================================================================================
    Total params: 213301
    ____________________________________________________________________________________________________
    None
    Epoch 1/3

    Kernel died, restarting

    • Avatar
      Ryuta February 18, 2021 at 7:26 am #

      pip install -U numpy

      solves the problem

  32. Avatar
    Eka January 10, 2017 at 12:49 pm #

    Hi Jason,
    Thanks for the nice article. Because IMDb data is very large I tried to replace it with spam dataset. What kind of changes should I make in the original code to run it. I have asked this question in stack-overflow but sofar no answer. http://stackoverflow.com/questions/41322243/how-to-use-keras-rnn-for-text-classification-in-a-dataset ?

    Any help?

    • Avatar
      Jason Brownlee January 11, 2017 at 9:25 am #

      Great idea!

      I would suggest you encode each word as a unique integer. Then you can start using it as an input for the Embedding layer.

  33. Avatar
    AKSHAY January 11, 2017 at 6:55 am #

    Hi Jason,

    Thanks for the post. It is really helpful. Do I need to configure for the tensorflow to make use of GPU when I run this code or does it automatically select GPU if its available?

    • Avatar
      Jason Brownlee January 11, 2017 at 9:31 am #

      These examples are small and run fast on the CPU, no GPU is required.

      • Avatar
        AKSHAY January 11, 2017 at 12:49 pm #

        I tried it on CPU and it worked fine. I plan to replicate the process and expand your method for a different use case. Its high dimensional compared to this. Do you have a tutorial on making use of GPU as well? Can I implement the same code in gpu or is the format all different?

        • Avatar
          Jason Brownlee January 12, 2017 at 9:24 am #

          Same code, use of the backend is controlled by the Theano or TensorFlow backend that you’re using.

  34. Avatar
    Stan January 12, 2017 at 4:12 am #

    Jason,

    Thanks for the interesting tutorial! Do you have any thoughts on how the LSTM trained to classify sequences could then be turned around to generate new ones? I.e. now that it “knows” what a positive review sounds like, could it be used to generate new and novel positive reviews? (ignore possible nefarious uses for such a setup 🙂 )

    There are several interesting examples of LSTMs being trained to learn sequences to generate new ones… however, they have no concept of classification, or understanding what a “good” vs “bad” sequence is, like yours does. So, I’m essentially interested in merging the two approaches — train an LSTM with a number of “good” and “bad” sequences, and then have it generate new “good” ones.

    Any thoughts or pointers would be very welcome!

    • Avatar
      Jason Brownlee January 12, 2017 at 9:37 am #

      I have not explored this myself. I don’t have any offhand quips, it requires careful thought I think.

      This post might help with the other side of the coin, the generation of text:
      https://machinelearningmastery.mystagingwebsite.com/text-generation-lstm-recurrent-neural-networks-python-keras/

      I would love to hear how you get on.

      • Avatar
        Stan January 13, 2017 at 1:31 am #

        Thanks, if you do come up with any crazy ideas, please let me know :).

        One pedestrian approach I’m thinking off is having the classifier used to simply “weed out” the undesired inputs, and then feed only desired ones into a new LSTM which can then be used to generate more sequences like those, using the approach like the one in your other post.

        That doesn’t seem ideal, as it feels like I’m throwing away some of the knowledge about what makes an undesired sequence undesired… But, on the other hand, I have more freedom in selecting the classifier algorithm.

  35. Avatar
    Albert January 27, 2017 at 9:03 am #

    Thank you for this tutorial.

    Regarding the variable length problem, though other people have asked about it, I have a further question.

    If I have a dataset with high deviation of length, say, some text has 10 words, some has 100000 words. Therefore, if I just choose 1000 as my maxlen, I lost a lot of information.

    If I choose 100000 as the maxlen, I consume too much computational power.

    Is there a another way of dealing with that? (Without padding or truncating)

    Also, can you write a tutorial about how to use word2vec pretrained embedding with RNN?

    Not word2vec itself, but how to use the result of word2vec.

    The counting based word representation lost too much semantic information.

    • Avatar
      Jason Brownlee January 27, 2017 at 12:28 pm #

      Great questions Albert.

      I don’t have a good off-the-cuff answer for you re long sequences. It requires further research.

      Keen to tackle the suggested tutorial using word2vc representations.

  36. Avatar
    Charles January 29, 2017 at 4:33 am #

    I only have biology background, but I can reproduced the results. Great.

  37. Avatar
    Jax February 1, 2017 at 6:27 am #

    Hi Jason, i noted you mentioned updated examples for Tensorflow 0.10.0. I can only see Keras codes, am i missing something?

    Thanks.

    • Avatar
      Jason Brownlee February 1, 2017 at 10:54 am #

      Hi Jax,

      Keras runs on top of Theano and TensorFlow. One or the other are required to use Keras.

      I was leaving a note that the example was tested on an updated version of Keras using an updated version of the TensorFlow backend.

  38. Avatar
    Kakaio February 13, 2017 at 8:30 am #

    I am not sure I understand how recurrence and sequence work here.
    I would expect you’d feed a sequence of one-hot vectors for each review, where each one-hot vector represents one word. This way, you would not need a maximum length for the review (nor padding), and I could see how you’d use recurrence one word at a time.
    But I understand you’re feeding the whole review in one go, so it looks like a feedforward network.
    Can you explain that?

    • Avatar
      Jason Brownlee February 13, 2017 at 9:16 am #

      Hi Kakaio,

      Yes, indeed we are feeding one review at a time. It is the input structure we’d use for an MLP.

      Internally, consider the LSTM network as building up state on the sequence of words in the review and from that sequence learning the appropriate sentiment.

      • Avatar
        Kakaop February 13, 2017 at 9:42 am #

        how is the LSTM building up state on the sequence of words leveraging recurrence? you’re feeding the LSTM all the sequence at the same time, there’re no time steps.

        • Avatar
          Jason Brownlee February 13, 2017 at 9:53 am #

          Hi Kakaop, quite right. The example does not leverage recurrence.

  39. Avatar
    Sweta March 1, 2017 at 8:15 pm #

    From this tutorial how can I predict the test values and how to write to a file? Are these predicted values generate in the encoded format?

  40. Avatar
    Bruce Ho March 2, 2017 at 9:29 am #

    Guys, this is a very clear and useful article, and thanks for the Keras code. But I can’t seem to find any sample code for running the trained model to make a prediction. It is not in imdb.py, that just does the evaluation. Does any one have some sample code for prediction to show?

    • Avatar
      Jason Brownlee March 3, 2017 at 7:39 am #

      Hi Bruce,

      You can fit the model on all of the training data, then forecast for new inputs using:
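      A minimal sketch of that call (here, encoded_reviews is a hypothetical array of new reviews that has been integer-encoded and padded exactly like the training data):

      yhat = model.predict(encoded_reviews)   # probabilities in [0, 1]
      labels = (yhat > 0.5).astype("int32")   # rounded 0/1 sentiment labels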

      Does that help?

  41. Avatar
    Bruce Ho March 3, 2017 at 4:47 pm #

    That’s not the hard part. However, I may have figured out what I need to know. That is take the result returned by model.predict and take the last item in the array as the classifications. Any one disagrees?

  42. Avatar
    JUNETAE KIM March 17, 2017 at 10:40 pm #

    Hi, it’s the awesome tutorial.

    I have a question regarding your model.

    I am new to RNN, so the question would be stupid.

    Inputting word embedding layer is crucial in your setting – sequence classification rather than prediction of the next word??

    • Avatar
      Jason Brownlee March 18, 2017 at 7:48 am #

      Generally, a word embedding (or similar projection) is a good representation for NLP problems.

  43. Avatar
    DanielHa March 24, 2017 at 2:41 am #

    Hi Jason,
    great tutorial. Really helped me alot.

    I’ve noticed that in the first part you called fit() on the model with “validation_data=(X_test, y_test)”. This isn’t in the final code summary. So I wondered if that’s just a mistake or if you forgot it later on.

    But then again it seems wrong to me to use the test data set for validation. What are your thoughts on this?

    • Avatar
      Jason Brownlee March 24, 2017 at 8:00 am #

      The model does not use the test data at this point, it is just evaluated on it. It helps to get an idea of how well the model is doing.

  44. Avatar
    Liam March 24, 2017 at 6:25 pm #

    What happen if the code uses LSTM with 100 units and sentence length is 200. Does that mean only the first 100 words in the sentence act as inputs, and the last 100 words will be ignored?

    • Avatar
      Jason Brownlee March 25, 2017 at 7:34 am #

      No, the number of units in the hidden layer and the length of sequences are different configuration parameters.

      You can have 1 unit with 2K sequence length if you like, the model just won’t learn it.

      I hope that helps.

  45. Avatar
    Danielha March 28, 2017 at 7:29 pm #

    Hi Jason,
    in the last part the LSTM layer returns a sequence, right? And after that the dense layer only takes one parameter. How does the dense layer know that it should take the last parameter? Or does it even take the last parameter?

    • Avatar
      Jason Brownlee March 29, 2017 at 9:06 am #

      No, in this case each LSTM unit is not returning a sequence, just a single value.

  46. Avatar
    Prashanth R March 28, 2017 at 9:25 pm #

    Hi Jason,
    Very interesting and useful article. Thank you for writing such useful articles. I have had the privilege of going through your other articles which are very useful.

    Just wanted to ask, how do we encode a new test data to make same format as required for the program. There is no dictionary involved i guess for the conversion. So how can we go about for this conversion? For instance, consider a sample sentence “Very interesting article on sequence classification”. What will be encoded numeric representation?
    Thanks in advance

    • Avatar
      Jason Brownlee March 29, 2017 at 9:07 am #

      Great question.

      You can encode the chars as integers (integer encode), then encode the integers as boolean vectors (one hot encode).
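
      A minimal sketch of that two-step encoding at the character level (the alphabet and sample text here are made up for illustration):

      from tensorflow.keras.utils import to_categorical
      alphabet = 'abcdefghijklmnopqrstuvwxyz '                       # assumed vocabulary
      char_to_int = {c: i for i, c in enumerate(alphabet)}
      text = 'very interesting article'
      integers = [char_to_int[c] for c in text]                      # integer encode
      one_hot = to_categorical(integers, num_classes=len(alphabet))  # one hot encode, one row per character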

      • Avatar
        Manish Sihag June 1, 2017 at 11:13 pm #

        Great article, Jason. I wanted to continue the question Prashanth asked: how to pre-process the user input. If we use CountVectorizer(), sure, it will convert it into the required form, but then the words will not map to the same indices as before. Even a single new word will create an extra element. Can you please explain how to pre-process the user input so that it is consistent with the trained model? Thanks in advance.

        • Avatar
          Jason Brownlee June 2, 2017 at 1:00 pm #

          You can allocate an alphabet of 1M words, all integers from 1 to 1M, then use that encoding for any words you see.

          The idea is to have a buffer in your encoding scheme.

          Also, if you drop all low-frequency words, this will give you more buffer. Often 25K words is more than enough.

          • Avatar
            Manish Sihag June 4, 2017 at 4:56 pm #

            Your answer honestly cleared many doubts. Thanks a lot for the quick reply. I have an idea now about, what to do.

          • Avatar
            Jason Brownlee June 5, 2017 at 7:39 am #

            I’m glad to hear that Manish.

  47. Avatar
    trangtruong March 29, 2017 at 7:19 pm #

    I have a dataset where each sample is just a feature vector like [1, 0,5,1,1,2,1] and y is just binary (0, 1) or a category like 0, 1, 2, 3. I want to use an LSTM for binary or categorical classification. How can I do it? I just add an LSTM with a Dense layer, but the LSTM needs 3-dimensional input while Dense only needs 2 dimensions. I know I need a time sequence; I have tried to find out more but got nothing. Can you explain and tell me how? Thank you so much.

    • Avatar
      Jason Brownlee March 30, 2017 at 8:51 am #

      You may want to consider a seq2seq structure with an encoder for the input sequence and a decoder for the output sequence.

      Something like:
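
      A rough encoder-decoder sketch in Keras (the layer sizes and the n_* shape variables below are placeholders, not a fixed recipe):

      from tensorflow.keras.models import Sequential
      from tensorflow.keras.layers import LSTM, RepeatVector, TimeDistributed, Dense

      n_timesteps_in, n_features, n_timesteps_out, n_classes = 7, 1, 1, 4  # placeholder sizes
      model = Sequential()
      model.add(LSTM(100, input_shape=(n_timesteps_in, n_features)))       # encoder
      model.add(RepeatVector(n_timesteps_out))                             # bridge to decoder
      model.add(LSTM(100, return_sequences=True))                          # decoder
      model.add(TimeDistributed(Dense(n_classes, activation='softmax')))   # one class per output step
      model.compile(loss='categorical_crossentropy', optimizer='adam')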

      I have a tutorial on this scheduled.

      I hope that helps.

      • Avatar
        trangtruong March 30, 2017 at 5:51 pm #

        Thank you, I will try to find out, then respond to you.

      • Avatar
        trangtruong March 30, 2017 at 6:17 pm #

        I have one more question, about another of your posts: when I use model.evaluate(x_test, y_test) to get the accuracy score of a model after training on the training dataset, it returns a result > 1 in some cases. I don't know why, and it makes me distrust this function. Can you explain why?

        • Avatar
          Jason Brownlee March 31, 2017 at 5:52 am #

          Sorry I don’t understand your question, perhaps you can rephrase it?

          • Avatar
            trangtruong March 31, 2017 at 12:34 pm #

            I don't know why the result returned by the evaluate function is > 1, but I think it should just be between 0 and 1 (model.evaluate(x_test, y_test), with a model I had trained before on the training dataset).

      • Avatar
        trangtruong March 30, 2017 at 9:07 pm #

        Hi Jason, can you explain your code step by step? I have followed this tutorial: https://blog.keras.io/building-autoencoders-in-keras.html but I am a bit confused about some parts. :(

        • Avatar
          Jason Brownlee March 31, 2017 at 5:54 am #

          If you have questions about that post, I would recommend contacting the author.

  48. Avatar
    Mazhar Ali April 6, 2017 at 7:36 pm #

    Hi Dear Jason,
    I am new to deep learning and intend to work on Keras or TensorFlow for corpus analysis. Could you help me or point me to some basic tutorials?
    Regards,
    Mazhar Ali

  49. Avatar
    Ady April 7, 2017 at 12:52 am #

    Thank you for your friendly explanation.
    I got a lot of help from your books.

    Are you willing to add examples of fit_generator and batch normalization to the IMDB LSTM example?

    I was told to use the fit_generator function to process large amounts of data.
    If there is an example, it will be very helpful to book buyers.

    • Avatar
      Jason Brownlee April 9, 2017 at 2:44 pm #

      I would like to add this kind of example in the future. Thanks for the suggestion.

  50. Avatar
    Fernando López April 8, 2017 at 4:31 am #

    Hi Jason

    I would like to know where I can read more about dropout and recurrent_dropout. Do you know some paper or something to explore it?

    Thanks!

  51. Avatar
    Donato Tiano April 14, 2017 at 1:14 am #

    Hi Jason,
    I’ve a problem with the shape of my dataset

    x_train = numpy.random.random((100, 3))
    y_train = uti.to_categorical(numpy.random.randint(10, size=(100, 1)), num_classes=10)
    model = Sequential()
    model.add(Conv1D(2, 2, activation='relu', input_shape=x_train.shape))
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    model.fit(x_train, y_train, epochs=150)

    I have tried to create a random dataset and pass it to a 1D CNN, but I don't know why Conv1D accepts my shape (I think it automatically adds a None value) while fit() does not (I think because Conv1D expects 3 dimensions). I get this error:

    ValueError: Error when checking model input: expected conv1d_1_input to have 3 dimensions, but got array with shape (100, 3)

    • Avatar
      Jason Brownlee April 14, 2017 at 8:47 am #

      Your input data must be 3d, even if one or two of those dimensions have a width of 1.
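
      A minimal sketch of one way to do that, treating the 3 values of each sample as 3 time steps of 1 feature (the layer sizes are only illustrative):

      import numpy
      from tensorflow.keras.models import Sequential
      from tensorflow.keras.layers import Conv1D, Flatten, Dense
      from tensorflow.keras.utils import to_categorical

      x_train = numpy.random.random((100, 3)).reshape((100, 3, 1))  # (samples, steps, features)
      y_train = to_categorical(numpy.random.randint(10, size=(100, 1)), num_classes=10)

      model = Sequential()
      model.add(Conv1D(2, 2, activation='relu', input_shape=(3, 1)))  # shape per sample, no sample dimension
      model.add(Flatten())
      model.add(Dense(10, activation='softmax'))
      model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
      model.fit(x_train, y_train, epochs=5, verbose=0)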

  52. Avatar
    Len April 16, 2017 at 1:24 pm #

    Hi Jason,

    Thanks for an awesome article!

    I wanted to ask for some suggestions on training my data set. The data I have are 1d measurements taken at a time with a binary label for each instance.

    Thanks to your blogs, I have successfully built an LSTM, and it does a great job at classifying the dominant class. The main issue is that the proportion of 0s to 1s is very high: there are only about 0.03 times as many 1s as 0s. For the most part, the 1s occur when there are high values of these measurements. So, I figured an LSTM model could make better predictions if it could see the last "p" measurements. Intuitively, it would recognize an abnormal increase in the measurements and associate that behavior with an output of 1.

    Knowing some of this basic background, could you suggest a structure that may
    1.) help exploit the structure of abnormally high measurements with outputs of 1
    2.) help with the low exposure to 1 instances

    Thanks for any help or references!

    cheers!

  53. Avatar
    m91 April 25, 2017 at 1:29 am #

    Hi, that’s a great tutorial!
    Just wondering: as you are padding with zeros, why aren't you setting the Embedding layer flag mask_zero to True?
    Without doing that, won't the padded symbols influence the computation of the cost function?

    • Avatar
      Jason Brownlee April 25, 2017 at 7:50 am #

      That is a good suggestion. Perhaps that flag did not exist when I wrote the example.

      If you see a benefit, let me know.

  54. Avatar
    Saurabh Nair April 27, 2017 at 11:44 am #

    Hi Jason,
    Great tutorial! Helped a lot.
    I've got a theoretical question though. Is sequence classification just based on the last state of the LSTM, or do you have to apply the dense layer to all the hidden units (100 LSTM units in this case)? Is sequence classification possible based just on the last state? In most of the implementations I see, there is a dense layer and a softmax to classify the sequence.

    • Avatar
      Jason Brownlee April 28, 2017 at 7:27 am #

      We do need the dense layer to interpret what the LSTMs have learned.

      The LSTMs are modeling the problem as a function of the input time steps and of the internal state.

  55. Avatar
    nguyennguyen April 27, 2017 at 12:22 pm #

    Hi Jason,
    Can you tell me about time_step in LSTMs, with an example or something easy to understand? If my data has 2 dimensions, e.g. input [[1,2]…[1,3]] and output [1,…0], then since the Keras LSTM layer needs 3 dimensions, can I just reshape the input data to 3 dimensions with time_step=1 and train it like that? Would time_step > 1 be better? I want to understand the meaning of time_step in an LSTM. Thank you so much for reading my question.

    • Avatar
      Jason Brownlee April 28, 2017 at 7:29 am #

      You can, but it is better to provide the sequence information in the time step.

      The LSTM is developing a function of observations over prior time steps.

  56. Avatar
    Carlos de Sá May 5, 2017 at 2:42 pm #

    Hi Jason,
    First of all, thank you for your great explanation.
    I am considering setting up an AWS g2.2xlarge instance according to your explanation in another post. Would you have some benchmark (e.g., the time for one epoch of one of the above examples) so that I can compare with my current hardware?

    • Avatar
      Jason Brownlee May 6, 2017 at 7:34 am #

      Sorry, I don’t have any execution time benchmarks.

      I generally see great benefit from large AWS instances in terms of getting access to a lot more memory (larger datasets) when using LSTMs.

      I see a lot more benefit running CNNs on GPUs than LSTMs on GPUs.

  57. Avatar
    Iris L May 5, 2017 at 4:41 pm #

    Hi Jason,

    I am also curious about the problem of padding. I think pad_sequences is the way to obtain fixed-length sequences. However, instead of padding with zeros, can we actually scale the data?

    The problems are then 1) whether scaling sequences will distort the meaning of the sentences, given that sentences are represented as sequences, and 2) how to choose a good scale factor.

    Thank you.

    • Avatar
      Jason Brownlee May 6, 2017 at 7:38 am #

      Great question.

      Generally, a good way to reduce the length of sequences of words is to first remove the low-frequency words, then truncate the sequence to a desired length or pad it out to that length.
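
      A minimal sketch of the truncate-or-pad step with Keras (the maxlen value and the example sequences are just for illustration):

      from tensorflow.keras.preprocessing import sequence
      # sequences: integer-encoded reviews of varying length
      sequences = [[11, 42, 7], [3, 9, 27, 5, 18, 2]]
      padded = sequence.pad_sequences(sequences, maxlen=5)  # truncates long reviews, zero-pads short ones
      print(padded.shape)  # (2, 5)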

  58. Avatar
    Chao May 13, 2017 at 2:42 am #

    When using an LSTM, why do we still need to force the input sequences to a fixed size? Why not build a model like seq2seq, just multi-input to one-output?

    • Avatar
      Jason Brownlee May 13, 2017 at 6:16 am #

      Even with seq2seq, you must vectorize your input data.

  59. Avatar
    Chao May 13, 2017 at 2:58 am #

    I saw that the data loaded from IMDB has already been encoded as numbers.
    Why do we need another Embedding layer to encode it again?

    (X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=top_words)
    print X_train[1]
    The output is
    [1, 194, 1153, 194, 2, 78, 228, 5, 6, 1463, 4369,…

    • Avatar
      Jason Brownlee May 13, 2017 at 6:17 am #

      The embedding is a more expressive representation which results in better performance.

  60. Avatar
    Amir May 18, 2017 at 1:45 am #

    Thanks Jason for your article and answering comments also. Can I use this approach to solve my issue described in this stack-overflow question? Please take a look at that.

    http://stackoverflow.com/questions/43987060/pattern-recognition-or-named-entity-recognition-for-information-extraction-in-nl/43991328#43991328

    • Avatar
      Jason Brownlee May 18, 2017 at 8:39 am #

      Perhaps, I would recommend finding some existing research to template a solution.

  61. Avatar
    vijay May 19, 2017 at 10:34 pm #

    Thanks, Jason, for your article. I have implemented a CNN followed by an LSTM in Keras for sentence classification. But after 1 or 2 epochs, my training accuracy and validation accuracy get stuck at some value and do not change, as if stuck in a local minimum or for some other reason. What should I do to resolve this problem? If I use only a CNN in my model, then both training and validation accuracy converge to good values. Can you help me with this? I couldn't identify the problem.

    Here is the training and validation accuracy.

    Epoch 1/20
    1472/1500 [============================>.] -8s – loss: 0.5327 – acc: 0.8516 – val_loss: 0.3925 – val_acc: 0.8460
    Epoch 2/20
    1500/1500 [==============================] – 10s – loss: 0.3733 – acc: 0.8531 – val_loss: 0.3755 – val_acc: 0.8460
    Epoch 3/20
    1500/1500 [==============================] – 8s – loss: 0.3695 – acc: 0.8529 – val_loss: 0.3764 – val_acc: 0.8460
    Epoch 4/20
    1500/1500 [==============================] – 8s – loss: 0.3700 – acc: 0.8531 – val_loss: 0.3752 – val_acc: 0.8460
    Epoch 5/20
    1500/1500 [==============================] – 8s – loss: 0.3706 – acc: 0.8528 – val_loss: 0.3763 – val_acc: 0.8460
    Epoch 6/20
    1500/1500 [==============================] – 8s – loss: 0.3703 – acc: 0.8528 – val_loss: 0.3760 – val_acc: 0.8460
    Epoch 7/20
    1500/1500 [==============================] – 8s – loss: 0.3700 – acc: 0.8528 – val_loss: 0.3764 – val_acc: 0.8460
    Epoch 8/20
    1500/1500 [==============================] – 8s – loss: 0.3697 – acc: 0.8531 – val_loss: 0.3752 – val_acc: 0.8460
    Epoch 9/20
    1500/1500 [==============================] – 8s – loss: 0.3708 – acc: 0.8530 – val_loss: 0.3758 – val_acc: 0.8460
    Epoch 10/20
    1500/1500 [==============================] – 8s – loss: 0.3703 – acc: 0.8527 – val_loss: 0.3760 – val_acc: 0.8460
    Epoch 11/20
    1500/1500 [==============================] – 8s – loss: 0.3698 – acc: 0.8531 – val_loss: 0.3753 – val_acc: 0.8460
    Epoch 12/20
    1500/1500 [==============================] – 8s – loss: 0.3699 – acc: 0.8531 – val_loss: 0.3758 – val_acc: 0.8460
    Epoch 13/20
    1500/1500 [==============================] – 8s – loss: 0.3698 – acc: 0.8531 – val_loss: 0.3753 – val_acc: 0.8460
    Epoch 14/20
    1500/1500 [==============================] – 10s – loss: 0.3700 – acc: 0.8533 – val_loss: 0.3769 – val_acc: 0.8460
    Epoch 15/20
    1500/1500 [==============================] – 9s – loss: 0.3704 – acc: 0.8532 – val_loss: 0.3768 – val_acc: 0.8460
    Epoch 16/20
    1500/1500 [==============================] – 8s – loss: 0.3699 – acc: 0.8531 – val_loss: 0.3756 – val_acc: 0.8460
    Epoch 17/20
    1500/1500 [==============================] – 8s – loss: 0.3699 – acc: 0.8531 – val_loss: 0.3753 – val_acc: 0.8460
    Epoch 18/20
    1500/1500 [==============================] – 8s – loss: 0.3696 – acc: 0.8531 – val_loss: 0.3753 – val_acc: 0.8460
    Epoch 19/20
    1500/1500 [==============================] – 8s – loss: 0.3696 – acc: 0.8531 – val_loss: 0.3757 – val_acc: 0.8460
    Epoch 20/20
    1500/1500 [==============================] – 8s – loss: 0.3701 – acc: 0.8531 – val_loss: 0.3754 – val_acc: 0.8460

  62. Avatar
    Libardo May 22, 2017 at 2:38 am #

    Jason, thanks for your great post.
    I am a beginner with DL.
    If I need to include some behavioral features in this analysis, let's say age, gender, zipcode, time (DD:HH), season (spring/summer/autumn/winter)… could you give me some hints on implementing that?

    TIA

    • Avatar
      Jason Brownlee May 22, 2017 at 7:54 am #

      Each would be a different feature on the input data.

      Remember, input data must be structured [samples, timesteps, features].

      • Avatar
        Usama Kaleem December 5, 2017 at 8:55 pm #

        My data is of shape (8000, 30), and I need to use 30 timesteps.
        I do
        model.add(LSTM(200, input_shape=(timesteps, train.shape[1])))

        but when I run the code, it gives me an error:
        ValueError: Error when checking input: expected lstm_20_input to have 3 dimensions, but got array with shape (8000, 30)
        How do I change the shape of the training data into the format you mentioned,
        "input data must be structured [samples, timesteps, features]", e.g. (8000, 30, 30)?
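
        A minimal sketch of one way to get the 3D shape, assuming each of the 30 columns is treated as one time step with a single feature (giving (8000, 30, 1) rather than (8000, 30, 30)):

        import numpy as np
        from tensorflow.keras.models import Sequential
        from tensorflow.keras.layers import LSTM, Dense

        train = np.random.random((8000, 30))            # stand-in for the real data
        train = train.reshape((train.shape[0], 30, 1))   # [samples, timesteps, features]

        model = Sequential()
        model.add(LSTM(200, input_shape=(30, 1)))
        model.add(Dense(1, activation='sigmoid'))
        model.compile(loss='binary_crossentropy', optimizer='adam')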

  63. Avatar
    Kadir Habib May 29, 2017 at 2:32 pm #

    Hi,
    How can I use my own data, instead of IMDB for training?

    Thanks
    Kadir

    • Avatar
      Jason Brownlee June 2, 2017 at 12:20 pm #

      You will need to encode the text data as integers.

  64. Avatar
    Kunal chakraborty May 29, 2017 at 10:44 pm #

    Hello Dr.Jason,

    I am very thankful for your blog-posts. They are undoubtedly one of the best on the internet.
    I have one doubt though. Why did you use the validation dataset as x_test and y_test in the very first example that you described. I just find it a little bit confusing.

    Thanks in advance

    • Avatar
      Jason Brownlee June 2, 2017 at 12:27 pm #

      Thanks.

      I did it to give an idea of skill of the model as it was being fit. You do not need to do this.

  65. Avatar
    ray May 31, 2017 at 2:33 pm #

    I added dropout to the CNN+RNN like you said, and it gives me 87.65% accuracy. I am still not clear on the purpose of combining both, as I thought CNNs were for 2D+ input like images or video. But anyway, your tutorial gives me a great starting point to dive into RNNs. Many thanks!

  66. Avatar
    Fred June 3, 2017 at 8:21 pm #

    Thanks for the post.

    If I am understanding right, after the embedding layer EACH SAMPLE (each review) in the training data is transformed into a 32 by 500 matrix. When taking an analogy from audio spectrogram, it is a 32-dim spectrum with 500 time frames long.

    With the equivalence or analogy above, I can perform audio waveform classification with audio raw spectrogram as the input and class labels (whatever it is, might be audio quality good or bad) in exact the same code in this post (except the embedding layer). Is it correct?

    Furthermore, I am wondering about why should the length of the input be the same, i.e. 500 in the post. If I am doing in the context of online training, in which a single sample is fed into the model at a time (batch size is 1), there should be no concern about varying length of samples right? That is, each sample (of varying length without padding) and its target are used to train the model one after another, and there is no worry about the varying length. Is it just the issue of implementation in Keras, or in theory the input length of each sample should be the same?

    • Avatar
      Jason Brownlee June 4, 2017 at 7:51 am #

      Hi Fred,

      Yes, try it.

      The vectorized input requires all inputs to have the same length (for efficiencies in the backend libraries). You use zero-padding (and even masking) to meet this requirement.

      The size parameters are fixed in the definition of the network I believe. You could do tricks from batch to batch re-defining+compiling your network as you go, but that would not be efficient.

      • Avatar
        Fred June 4, 2017 at 11:22 am #

        Thanks for your reply, I will try it.

        I was just wondering whether the RNN or LSTM in theory requires every input to be the same length.
        As far as I know, one of the superiorities of RNNs over DNNs is that they accept variable-length input.

        It doesn't bother me if the requirement is an efficiency issue in Keras and the zeros (if zero-padding is used) are regarded as carrying no information. In the audio spectrogram case, would you recommend zero-padding the raw waveform (1D) or the spectrogram (2D)? By analogy with your post, the choice would be the former, though.

        • Avatar
          Jason Brownlee June 5, 2017 at 7:37 am #

          Hi Fred,

          Padding is not required by LSTMs in theory, it is only a limitation of efficient implementations that require vectorized inputs.

          A fair tradeoff for most applications perhaps.

  67. Avatar
    adit agrawal June 20, 2017 at 3:29 am #

    Hi Jason,

    Is there a way in an RNN (Keras implementation) to control the attention of the LSTM?
    I have a dataset where 100 time series inputs are fed as a sequence. I want the LSTM to give more importance to the last 10 inputs.
    Can it be done?

    Thanks in advance.

    • Avatar
      Jason Brownlee June 20, 2017 at 6:41 am #

      Yes, but you must code a custom layer to do the attention.

      I hope to cover attention models for LSTMs soon.

  68. Avatar
    vivz June 22, 2017 at 4:17 pm #

    Hi Jason,
    After building and saving the model, I want to use it for prediction on new texts, but I don't know how to preprocess the plain text in order to use it for predictions. I have searched and found this approach:
    text = np.array(['this is a random sentence'])
    tk = keras.preprocessing.text.Tokenizer(nb_words=2000, lower=True, split=' ')
    predictions = loaded_model.predict(np.array(tk.fit_on_texts(text)))

    but this is not working for me and showing this error:
    ValueError: Error when checking : expected embedding_1_input to have 2 dimensions, but got array with shape ()

    Can You please tell me the proper way to preprocess the text. Any help is greatly appreciated.
    Thanks

    • Avatar
      Jason Brownlee June 23, 2017 at 6:40 am #

      Generally, you need to integer encode the words.

      • Avatar
        vivz June 23, 2017 at 7:11 pm #

        Thanks for the reply
        I converted my string like this:
        text = 'It is a good movie to watch'
        import keras.preprocessing.text
        from keras.preprocessing import sequence
        text = keras.preprocessing.text.one_hot(text, 5000, lower=True, split=' ')
        text = [text]
        text = sequence.pad_sequences(text, 500)
        predictions = loaded_model.predict(text)

        But got the output as: [[ 0.10996077]]
        Shouldn’t it be close to 1?
        Many Thanks

        • Avatar
          Jason Brownlee June 24, 2017 at 8:05 am #

          Sorry, I don't follow. Why do you expect the output to be 1? What are you predicting?

          • Avatar
            Vivek Vishnoi June 26, 2017 at 9:00 pm #

            What I mean is that 1 is the label for positive sentiment, and since I am using a positive statement for the prediction, I am expecting the output to be 1.
            I made a mistake in the last comment by using model.predict() to get class labels; the correct way to get the label is model.predict_classes(), but it is still not giving proper class labels.
            So my question is whether I made a mistake in converting the text into a one-hot vector, or whether that is the right way to do it.
            Many Thanks

          • Avatar
            Jason Brownlee June 27, 2017 at 8:29 am #

            As long as you are consistent in data preparation and in interpretation at the other end, then you should be fine.

  69. Avatar
    sk June 29, 2017 at 4:30 pm #

    Can you do a tutorial for preprocessing text dataset and then passing them as input using word embeddings? Thanks!

  70. Avatar
    tanmay June 30, 2017 at 9:02 pm #

    Can we use sequence labelling on a continuous variable? I have datasets of customers paying their debt within the due date, within a buffer period, and beyond the buffer period. Based on this, I want to score the customers from good to bad. Is it possible using sequence labelling?

    • Avatar
      Jason Brownlee July 1, 2017 at 6:34 am #

      Perhaps, I’m not sure I understand your dataset. Can you give a one-case example?

  71. Avatar
    Karthik Suresh July 4, 2017 at 7:53 am #

    Hi Jason, great tutorial!

    I have data as follows

    Text Alpha-Numeric Label
    “foo” A1034 A
    “bar” A1234 B

    I have already mapped an LSTM model from the Text column to the Label column. However, I need to add the Alpha-Numeric column alongside the Text as an additional feature to my LSTM model. How can I do that in Keras?

  72. Avatar
    Sajad July 5, 2017 at 12:32 am #

    Hi, it was really great and I am happy that this tutorial was my first practical project in LSTM. I need to have f-measures, False Positives and AUC instead of “accuracy” in your code. Do you have any idea how to get them?

    Thank you in advance.
    Sajad,

  73. Avatar
    Reihaneh July 10, 2017 at 10:24 am #

    I have a question about the built-in embedding layer in Keras.
    I have done word embedding with a word2vec model, which works based on the semantic similarity of words: those in the same context are more similar. I am wondering whether the Keras embedding layer also follows the w2v model or has its own algorithm for mapping the words into vectors.
    Based on what semantics does it map the words to vectors?

  74. Avatar
    William Wong July 11, 2017 at 12:03 pm #

    Hi Jason,
    Excellent article. I am trying to use CNN to model time series data and feed into LSTM for supervised learning. I have a 2d matrix with columns representing previous n-time steps and rows representing the different price levels each time steps visited:
    Price Bar0 Bar1 Bar2 Bar3 Bar4 Bar5 …
    0 0 0 1 1 0 0
    1 1 0 1 1 0 1
    2 1 1 1 1 1 1
    3 1 1 0 1 1 0
    4 0 0 0 0 1 0

    this matrix will represent, price data of:
    High Low
    Bar0 3 1
    Bar1 3 2
    Bar2 2 0
    Bar3 3 0
    Bar4 4 2
    Bar5 2 1

    Could you tell me how to adapt your 1-d CNN to 2-d CNN?

  75. Avatar
    truongtrang July 11, 2017 at 12:56 pm #

    hi Jason,
    Great post for me.
    But I want to ask you about the vector length in the Embedding layer. You said "the first layer is the Embedded layer that uses 32 length vectors to represent each word"; why did you choose 32 instead of another number like 64 or 128? Can you give me some best practice, or the reason for your choice?
    Thank you so much.

    • Avatar
      Jason Brownlee July 12, 2017 at 9:39 am #

      Trial and error. You could experiment with other representations and see what works best for your problem.

  76. Avatar
    Tursun July 19, 2017 at 3:08 am #

    @Jason,
    “Sequence classification is a predictive modeling problem where you have some sequence of inputs over space or time and the task is to predict a category for the sequence.”
    This is inspiring. I am thinking about applying sequence classification to the iris dataset.
    Do you think it would work?

    • Avatar
      Jason Brownlee July 19, 2017 at 8:29 am #

      The iris flowers dataset is not a sequence classification problem. It is just a classification problem.

  77. Avatar
    Tursun July 19, 2017 at 6:44 pm #

    @Jason,
    Do you mean that:
    I cannot use LSTMs for iris classification? I am working on an iris-like dataset, so I am exploring all possible classifiers. You have one here on your website. Besides,
    I have tried the RBM in scikit-learn; it did not work, as my inputs are not binary like the MNIST dataset (even after scikit-learn's preprocessing.Binarizer()). I think it is wrong to say that the RBM in scikit-learn works for data in the range [0, 1]; it only works for 0 and 1.
    (By the way, I sent you my code for reference.)

    I have also tried a probabilistic neural net (PNN), which yields only 78% accuracy, which is low, and there is no way to increase the number of layers of a PNN as it is a single-layer net (from NeuPy).
    Now I came to RNNs, but you said that.

    • Avatar
      Jason Brownlee July 20, 2017 at 6:19 am #

      No, the iris dataset is not a sequence classification problem and the LSTM would be a bad fit.

  78. Avatar
    Tursun July 20, 2017 at 2:34 pm #

    @Jason,
    What would you suggest? I need your expert advice.
    I tried the RBM in sklearn; it did not work.
    You said an RNN would not work for it.
    I think a CNN clearly does not work for it.
    Are DBNs and VAEs all that is left?

    I wish to classify iris in 3 different ways. I have done only one.

    • Avatar
      Jason Brownlee July 21, 2017 at 9:29 am #

      Consider SVM, CART, and kNN.

      • Avatar
        Tursun July 24, 2017 at 3:58 pm #

        @Jason,
        Thank you. I've already tried kNN and SVM. They were good and gave good results.
        I have a feeling that deep learning methods would yield even better results on my dataset. Do you have other suggestions in deep learning? This is my dataset:
        https://www.dropbox.com/s/4xsshq7nnlhd31h/P7_all_Data.csv?dl=0

        • Avatar
          Jason Brownlee July 25, 2017 at 9:33 am #

          You could try a multilayer perceptron neural network.

          • Avatar
            tursun July 25, 2017 at 11:38 pm #

            Jason,
            I did try a multi-layer perceptron. The result was good.
            I want to use a deep neural net with more than 3 layers.
            What do you think about a convolutional neural network?
            I originally thought it was impossible, but now I am thinking about it again.

          • Avatar
            Jason Brownlee July 26, 2017 at 7:57 am #

            You can do what you wish. CNNs are designed for spatial input and the iris flower dataset does not have a spatial input.

  79. Avatar
    Zachary July 28, 2017 at 6:21 pm #

    Hi Jason, I want to ask what the use of dropout is. It makes the accuracy lower, so does this mean dropout is bad for machine learning? Thank you!

  80. Avatar
    Daniel August 1, 2017 at 10:52 pm #

    Hey Jason! Great Post 🙂 Really helped me in my internship this summer. I just wanted to get your thoughts on a couple things.

    1. I've trained with about 400k documents in total, and I'm getting an accuracy of ~98%. I always get wary when my model does 'too' well. Is that a fair cause-and-effect given the enormous dataset?

    2. When I think of applying a CNN + max pooling to word vectors (GloVe), I think of the operation basically meshing the word vectors for 3 words (possibly forming something like a phrase representation). Am I right in my thought process?

    3. I'm still a little unclear on what the LSTM learns. I understand it's not a typical seq2seq problem, so what do those 100 LSTM units hold?

    Thanks so much again for the great tutorial! 🙂

    • Avatar
      Jason Brownlee August 2, 2017 at 7:53 am #

      I’m glad to hear that Daniel.

      Maybe you want to test the model on a hold out set to see if the model skill is real or overfit.

      Something like that, pooling does good nonlinear things that may not relate back to word vectors/words cleanly.

      They hold a function of input and prior items in the input sequence. It’s complex for sure.

  81. Avatar
    Amr August 7, 2017 at 7:52 pm #

    Hello Jason,

    I wonder how 100 neurons in the LSTM layer would be able to accept the 500 vectors/words? I thought that the size of the LSTM layer should be equivalent to the length of the input sequence!

    • Avatar
      Jason Brownlee August 8, 2017 at 7:47 am #

      Good question, no the layers do not need to have the same number of units.

      For example, If I had a vector of length 5 as input to a single neuron, then the neuron would have 5 weights, one for each element. We do not need 5 neurons for the 5 input elements (although we could), these concerns are separate and decoupled.

      • Avatar
        Amr August 8, 2017 at 9:55 am #

        Thanks for your reply.
        But here each input is already a vector, not a scalar! Would that mean in this case that each neuron receives 5 vectors, each of them 32-dimensional, so each neuron will have 5*32=160 weights? And if so, what is the advantage of that over having every neuron process only one word/vector?

        • Avatar
          Jason Brownlee August 8, 2017 at 5:09 pm #

          For an MLP, word vectors are concatenated as you say and each neuron gets a lot of inputs.

          LSTMs, on the other hand, treat each word as one input in a sequence and process them one at a time.

          The idea is called “distributed representation” where all neurons get all inputs and they selectively learn different parts to focus on.

          This is key to neural networks.

  82. Avatar
    Sajad August 12, 2017 at 6:31 pm #

    Hi Jason,
    Consider that we have 500 sequences with 100 elements in each sequence.
    If we do the embedding into 32-dimensional vectors, we will have a 100*32 matrix for each sequence.
    Now assume we are using only a single LSTM(20) layer in our project. I am a bit confused in practice:

    I know that we have a hidden layer with 20 LSTM units in parallel. I want to know how Keras feeds a sequence to the model. Does it give the same 32-dimensional vector to all LSTM units at each time step, in order, so that an iteration finishes at time [t+100]? (That way, I think all units would give the same (copied) value after training, which would be equivalent to having only one unit.) Or does it give the 32-dim vectors to the model 20 at a time, in order, with an iteration ending at time [t+5]?

    Thank you in advance,
    Sajad

    • Avatar
      Jason Brownlee August 13, 2017 at 9:49 am #

      Good question.

      So, the 100 time steps are passed as input to the model with 500 samples and 1 feature, something like [500, 100, 1].

      The Embedding will transform each time step into a 32 dimensional vector.

      The LSTM will process the sequence one time step at a time, so one 32-dimensional embedding at a time.

      Each memory cell will get the whole input. They all have a go at modeling the problem. An error propagated from deeper layers will encourage the hidden LSTM layer to learn the input sequence in a specific way, e.g. classify the sequence. Each cell will learn something slightly different.

      Does that help?

      • Avatar
        Sajad August 15, 2017 at 12:07 am #

        Thank you for your clear answer.

        1) I am working on malware detection using LSTMs, so I have malware activities in a sequence. As another question, I want to know more about the Embedding layer in Keras. In my project, I have to convert elements into integers to feed the Keras Embedding layer. I guess the Embedding is a frozen neural network layer that converts the elements of a sequence into vectors in such a way that the relations between different elements are meaningful, right? I would like to know whether there is any logical issue with using an Embedding in my project.

        2) Do you know of any references (book, paper, website, etc.) for the Embedding in Keras (academic or non-academic)? I need to draw a figure describing the embedding training network.

        Thank you for your patience,

        Sajad

        • Avatar
          Jason Brownlee August 15, 2017 at 6:39 am #

          The Embedding has weights that are learned when you fit the model.

          You can use pre-trained weights from a word2vec or glove run if you like. Learning custom weights for your task is often better.

          I have a few posts scheduled on how the learned embedding layer works, that should be out next month. For now, this might be a good place for you to start:
          https://en.wikipedia.org/wiki/Word_embedding

          The Keras Embedding layer is just weights – vectors learned for each word in the input vocab. Very simple to describe.

          • Avatar
            Sajad August 15, 2017 at 11:16 pm #

            Thank you Jason.
            That’s great, I am waiting for your posts on embedding.

          • Avatar
            Jason Brownlee August 16, 2017 at 6:35 am #

            Thanks Sajad.

  83. Avatar
    Saho August 16, 2017 at 12:21 am #

    Hey Jason, this post was great for me.
    As a question, I would like to know how to set the number of LSTM units in the hidden layer.

    Is there any relationship between the number of samples (sequences) and the number of hidden units?

    I have 400 sequences with 5000 elements in each. How many LSTM units should I use? I know that I should test the model with different numbers of hidden units, but I am looking for an upper bound and a lower bound on the number of hidden units.

    saho,

    • Avatar
      Jason Brownlee August 16, 2017 at 6:37 am #

      There is no analytical way to configure a neural network. I recommend trial and error, grid search, random search or copy configurations from other models.

  84. Avatar
    Maddy August 18, 2017 at 5:17 pm #

    Great work! What if I want to apply this code to simple sentence sequence classification? How can we do that? How would we manipulate the data?

    thank you

    • Avatar
      Jason Brownlee August 19, 2017 at 5:51 am #

      Sure.

      I would recommend spending time cleaning the data, then integer encode it ready for the model. I recommend an embedding layer on the front of the model.

      • Avatar
        Maddy August 22, 2017 at 8:57 pm #

        Thank you… How can I replace the IMDB data with my own data, which is composed of simple sentences? And how do I change the program accordingly?

  85. Avatar
    Irati August 24, 2017 at 5:54 pm #

    Hi Jason! First thanks for your amazing web!

    And now comes the question: In my case I am trying to solve a task classification problem. Each task is described by 57 time series with 74 time steps each. For the training phase I do have 100 task examples of 10 different classes.

    This way, I have created a [100,74,57] input and a [100,1] output with the label for each task.

    That is, I have a multivariate time series to multilabel classification problem.

    What type of learning structure would you suggest? I am aware that I may need to collect/generate more data, but I am new to both Python and deep learning, and I am having some trouble creating a small running example for multivariate time series -> multilabel classification.

    Thanks!

    • Avatar
      Jason Brownlee August 25, 2017 at 6:41 am #

      For multi-class classification, you will need a one hot encoding of your output variable so the dimensions will be [100,10] and then use a softmax activation function in the output layer to predict the outcome probability across all 10 classes.

      For the specific model, try MLPs with sliding window, then maybe some RNNs like LSTMs to see if they can do better.
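
      A minimal sketch of that one hot encoding and output layer, using the shapes from the question (the LSTM size and random data are only illustrative):

      import numpy as np
      from tensorflow.keras.utils import to_categorical
      from tensorflow.keras.models import Sequential
      from tensorflow.keras.layers import LSTM, Dense

      X = np.random.random((100, 74, 57))          # 100 tasks, 74 time steps, 57 series
      y = np.random.randint(10, size=(100, 1))     # stand-in labels, one of 10 classes per task
      y = to_categorical(y, num_classes=10)        # output shape becomes (100, 10)

      model = Sequential()
      model.add(LSTM(50, input_shape=(74, 57)))
      model.add(Dense(10, activation='softmax'))   # probability across all 10 classes
      model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
      model.fit(X, y, epochs=2, verbose=0)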

  86. Avatar
    Cloud August 25, 2017 at 4:49 pm #

    Thanks for your tutorial. My problem is classifying a packet (captured over time with many features) as normal or abnormal. I would like to adapt an LSTM to my own problem. My data are matrices: X_train(4000,41), Y_train(4000,1), X_test(1000,41), Y_test(1000,1) – Y is the label. One of the 41 features is time; the others are input variables. I think I have to extract the time feature from the 41 features – is that correct? Is this process done in Keras?
    First, I am confused about how to reshape my data in a meaningful way so that it meets the requirements of the inputs of the LSTM layer. I expect my data to look like this:
    x_train.shape = (4000, 1, 41) # simple, I set time step=1; later it will be changed to > 1 to classify from many packets per time step
    y_train.shape = (4000, 1, 1)
    How do I transform my data into the structure above?
    Second, I think the Embedding layer is not suitable for my problem – is that right? My model is built as:
    model = Sequential()
    model.add(LSTM(64, input_dim=41, input_length=41)) # e.g., 64 LSTM units
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    model.fit(X_train, Y_train, epochs=20, batch_size=100)
    I'm new to LSTMs. Can you give any advice for my problem? Thank you very much

    • Avatar
      Jason Brownlee August 26, 2017 at 6:41 am #

      It sounds like you have 40K time steps, these would then need to be split into sub-sequences of 100 samples of 400 time steps.

      You would then have input like: [100, 400, 41].
      The input shape would be (400, 41).

      Does that help?

      • Avatar
        Cloud August 27, 2017 at 12:02 pm #

        Thanks Jason. That means batch_size=100, right? Can I have my first layer like this:
        model.add(LSTM(64, input_dim=41, input_length=400)) # hidden 1: 64
        Or:
        model.add(LSTM(64, batch_input_shape=(100, 1, 41), stateful=True))
        Which one is correct? How do I set the time_step in the first line of code?
        Can you help me fix that? Many thanks

        • Avatar
          Jason Brownlee August 28, 2017 at 6:47 am #

          You can set the shape of your data in terms of time steps (x) and features (y) like this:

          input_shape=(x, y)

          • Avatar
            Cloud September 1, 2017 at 1:44 pm #

            Thanks for your enthusiasm,

            I tried to build the model with my data following your comments, but I get errors:
            timesteps=2
            train_x=np.array([train_x[i:i+timesteps] for i in range(len(train_x)-timesteps)]) #train_x.shape=(119998, 2, 41)
            train_y=np.array([train_y[i:i+timesteps] for i in range(len(train_y)-timesteps)]) #train_y.shape=(119998, 2, 1)

            input_dim=41 #features
            #1.define the network
            model=Sequential()
            model.add(LSTM(100,input_shape=(timesteps,input_dim)))
            model.add(Dense(1, activation='sigmoid'))
            #2. compile the network
            model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
            #3. fit the model
            model.fit(train_x,train_y, epochs=100, batch_size=10,)

            Error:
            File “test_data.py”, line 53, in
            model.fit(train_x,train_y, nb_epoch=100, batch_size=10,)
            File “/home/keras/models.py”, line 870, in fit
            initial_epoch=initial_epoch)
            File “/home/keras/engine/training.py”, line 1435, in fit
            batch_size=batch_size)
            File “/home/keras/engine/training.py”, line 1315, in _standardize_user_data
            exception_prefix=’target’)
            File “/home/engine/training.py”, line 127, in _standardize_input_data
            str(array.shape))
            ValueError: Error when checking target: expected dense_1 to have 2 dimensions, but got array with shape (119998, 2, 1)
            Maybe I have a problem with the output shape? How can I fix it?
            Thank you

          • Avatar
            Jason Brownlee September 1, 2017 at 3:28 pm #

            The output of your network expects 1 feature. Reshape y to be (119998, 1).

          • Avatar
            Cloud September 1, 2017 at 3:05 pm #

            Hi Jason,
            I changed my output shape to:
            train_y = np.array(train_y[:119998]) # train_y.shape=(119998, 1)

            Finally, it works!

            I have one more question: does Keras support running on a GPU?

            Thanks

          • Avatar
            Jason Brownlee September 1, 2017 at 3:29 pm #

            Glad to hear that.

            Keras runs on top of Theano and TensorFlow. These underlying math libraries provide support for GPUs.

          • Avatar
            cloudy September 13, 2017 at 4:47 pm #

            Hi Jason.

            I think that maybe I was wrong when preparing the input data for the LSTM.
            I have input and labels like this: train_x(4000,41) and train_y(4000,1)
            Before, I used:
            timesteps=2
            train_x=np.array([train_x[i:i+timesteps] for i in range(len(train_x)-timesteps)]) #train_x.shape=(119998, 2, 41)
            train_y=np.array(train_y[:119998]) #train_y.shape=(119998, 1)

            ===> This is wrong because the rows overlap and the wrong train_y values may be taken.

            Now I correct it like this:
            train_x = train_x.reshape(int(train_x.shape[0]/timesteps), timesteps, train_x.shape[1])

            In my data, each instance has multiple features, so I want to keep the features as they are, i.e., multiple features at the same time step.
            Help me correct my misunderstanding about the input data:
            train_y = train_y.reshape(int(train_y.shape[0]/timesteps), train_y.shape[1]) # error: IndexError: tuple index out of range ???
            And I am not sure whether the time feature should or should not be included in the input data (because I read this post: https://machinelearningmastery.mystagingwebsite.com/time-series-prediction-lstm-recurrent-neural-networks-python-keras/).
            I have read many of your articles on machinelearningmastery.com, so I may be confused.

            Many thanks

          • Avatar
            Jason Brownlee September 15, 2017 at 11:58 am #

            Sorry, I’m not sure I follow your sequence prediction problem.

            Can you give me a small example, e.g. one sample?

      • Avatar
        cloudy September 15, 2017 at 2:48 pm #

        My data has n packets, each packet has many features f (one of them is time), example:
        f1 f2 f3 … label
        pkt1 2 3 3 0
        pkt2 1 3 5 1
        pkt3 2 3 2 1
        pkt4 5 3 1 0
        pkt5 5 3 2 1
        ….
        e.g., with timesteps=2, each subsequence has 2 rows. After reshaping, it looks like this:
        [[[2 3 3 0]
        [3 5 1 1]]
        [[3 5 1 1]
        [2 3 2 1]]
        …. ]
        or: separate:
        [[[2 3 3 0]
        [3 5 1 1]]
        [[2 3 2 1]
        [5 3 1 0]]
        … ]
        When I split the labels from that input data: if timesteps=1, each label matches one row and is easy to get. But if timesteps > 1, which label should be taken for each subsequence (the one on the 1st row or the 2nd row)?
        Can you help me clear up that confusion? (Two questions: overlapping or separate, and which label to take.)
        Many thanks

  87. Avatar
    Irfan August 27, 2017 at 5:47 pm #

    Hi Jason, nice article. I have one question though: what changes do I have to make to do multi-class classification instead of binary classification?

    • Avatar
      Jason Brownlee August 28, 2017 at 6:48 am #

      Good question.

      Change the output layer to have one neuron per class, change the activation function to be softmax on the output layer and change the loss function to be categorical_crossentropy.
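
      A minimal sketch of those three changes applied to a model like the one in the post (n_classes and the other sizes are placeholders):

      from tensorflow.keras.models import Sequential
      from tensorflow.keras.layers import Embedding, LSTM, Dense

      n_classes = 5  # placeholder: number of categories in your problem
      model = Sequential()
      model.add(Embedding(5000, 32, input_length=500))
      model.add(LSTM(100))
      model.add(Dense(n_classes, activation='softmax'))   # one neuron per class, softmax activation
      model.compile(loss='categorical_crossentropy',      # instead of binary_crossentropy
                    optimizer='adam', metrics=['accuracy'])
      # labels must be one hot encoded, e.g. with to_categorical()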

      • Avatar
        Irfan August 31, 2017 at 1:32 am #

        Thanks for the nice reply. One last question: can I use negative values with an LSTM and CNN? I have some data where one of the columns has both positive and negative values. How do I handle this? Thanks in advance.

        • Avatar
          Jason Brownlee August 31, 2017 at 6:19 am #

          Yes.

          Generally, I would encourage you to rescale data to the range 0-1 prior to passing it to an LSTM layer.

  88. Avatar
    Marco Cheung August 31, 2017 at 2:10 am #

    Hi Jason,

    It seems that I encounter a problem with the line “model.add(LSTM(100))” (OS: MAC)

    Here is the TypeError: Expected int32, got of type ‘Variable’ instead.

    Thank you very much !!!!!!!

    • Avatar
      Jason Brownlee August 31, 2017 at 6:20 am #

      That is a strange error, are you sure it is on that line? It does not make sense.

      Perhaps ensure you have copied all of the lines and that you have the correct spacing/indenting?

  89. Avatar
    Sarada Okazaki August 31, 2017 at 4:42 pm #

    Hi Jason, thanks for your post. it’s really helpful.

    I have some questions; I hope you can help out.
    1. I'm trying to classify intents for a dataset containing comments from users. There are several intents corresponding to the comments, but the language in my case is not English. So I understand that I have to build a dataset similar to the IMDB one, but how can I do it? Do you have any instructions/guidelines for building a dataset like that?

    2. Aside from the dataset, I think that I also have to build embedding vectors for my own language. How can I do that?

    Thank you in advance. Hope to hear from you soon.

    • Avatar
      Jason Brownlee September 1, 2017 at 6:43 am #

      I should have some posts on this soon.

      Generally, you need to clean the data (punctuation, case, vocab), then integer encode it for use with a word embedding. See Keras’ Tokenizer class as a good start.

      The Embedding layer will learn the weights for your data. You can try to train a word2vec model and use the pre-trained weights to get better performance, but I’d recommend starting with a learned embedding layer as a first step.
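
      A minimal sketch of that cleaning-plus-integer-encoding step with the Keras Tokenizer (the sample sentences are made up; any language works):

      from tensorflow.keras.preprocessing.text import Tokenizer
      from tensorflow.keras.preprocessing.sequence import pad_sequences

      docs = ['una buena pelicula', 'una pelicula muy mala']
      tokenizer = Tokenizer(num_words=5000, lower=True)
      tokenizer.fit_on_texts(docs)                  # builds the word -> integer vocabulary
      encoded = tokenizer.texts_to_sequences(docs)  # integer encode each document
      padded = pad_sequences(encoded, maxlen=500)   # ready for an Embedding layer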

  90. Avatar
    Alex September 5, 2017 at 11:25 pm #

    Hello, Jason,
    Thank you for the great post.

    Google has it’s NLP API: https://cloud.google.com/natural-language/docs/basics

    As you may notice, they give us the polarity of sentiment in the range (-1, 1). They call it "score".

    Maybe you have a quick idea of how to produce the same kind of output using Keras for sentiment analysis?
    As I understand it, this is not a classification problem anymore. Any thoughts?

    • Avatar
      Jason Brownlee September 7, 2017 at 12:46 pm #

      Sure, I have a few posts scheduled on this topic for later in the month/next month.

  91. Avatar
    Don September 10, 2017 at 2:31 am #

    Oops, I sent my reply to the wrong post. Sorry. I fixed it.

  92. Avatar
    Sajad September 11, 2017 at 12:01 am #

    Hi Jason,

    thank you for your nice work in this website.

    My question: in what cases does a plain RNN work better than an LSTM? I know that the LSTM originated from the RNN and attempts to eliminate the vanishing gradient problem of RNNs. But in my case, I am using malware behavioral sequences, and I got this chart for TPR and FPR: https://imgur.com/fnYxGwK – the figures show TPR and FPR for different numbers of units in the hidden layer.

    Do you know why the RNN works better in my project?

  93. Avatar
    Rishi September 11, 2017 at 11:40 pm #

    Hi Jason,

    First off, great tutorial. Love the overall content that you provide.

    I am working through a categorical classification task that involves evaluating a feature that can be as long as 27,500 words. My problem is that there are other features that I need to feed into my RNN-LSTM as well. I had thought about combining the long text feature and the other features into one file – features separated by columns, of course – but I don't think that will work. Instead, I was thinking of separating the long text feature into its own file, running it independently through the RNN, and then bringing in the other features. Can you give me some pointers on how I can go about designing the layers for this challenge I'm facing?

  94. Avatar
    Lin Li September 17, 2017 at 1:22 pm #

    Hi, Dr. Jason Brownlee. Thanks for your amazing website. I'm a beginner at deep learning. I copied your code and ran it, and I encountered a problem when loading the IMDB dataset. The messages are as follows:

    Traceback (most recent call last):
    File “F:\Study\0-MyProject\Test\SimpleLSTM.py”, line 13, in
    (X_train, y_train),(X_test, y_test) = imdb.load_data(num_words = top_words)
    File “C:\Users\llfor\AppData\Local\Programs\Python\Python35\lib\site-packages\keras\datasets\imdb.py”, line 51, in load_data
    path = get_file(path, origin=’https://s3.amazonaws.com/text-datasets/imdb.npz’)
    File “C:\Users\llfor\AppData\Local\Programs\Python\Python35\lib\site-packages\keras\utils\data_utils.py”, line 220, in get_file
    urlretrieve(origin, fpath, dl_progress)
    File “C:\Users\llfor\AppData\Local\Programs\Python\Python35\lib\urllib\request.py”, line 217, in urlretrieve
    block = fp.read(bs)
    File “C:\Users\llfor\AppData\Local\Programs\Python\Python35\lib\http\client.py”, line 448, in read
    n = self.readinto(b)
    File “C:\Users\llfor\AppData\Local\Programs\Python\Python35\lib\http\client.py”, line 488, in readinto
    n = self.fp.readinto(b)
    File “C:\Users\llfor\AppData\Local\Programs\Python\Python35\lib\socket.py”, line 575, in readinto
    return self._sock.recv_into(b)
    File “C:\Users\llfor\AppData\Local\Programs\Python\Python35\lib\ssl.py”, line 929, in recv_into
    return self.read(nbytes, buffer)
    File “C:\Users\llfor\AppData\Local\Programs\Python\Python35\lib\ssl.py”, line 791, in read
    return self._sslobj.read(len, buffer)
    File “C:\Users\llfor\AppData\Local\Programs\Python\Python35\lib\ssl.py”, line 575, in read
    v = self._sslobj.read(len, buffer)
    TimeoutError: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or the established connection failed because the connected host has failed to respond.

    Besides, sometimes it just says "fetch failure on https://s3.amazonaws.com/text-datasets/imdb.npz".
    Is it because the IMDB data source is unavailable or because my network is unstable?
    Actually, I have manually downloaded the data from https://s3.amazonaws.com/text-datasets/imdb.npz.
    So if I cannot load the data online, how can I use the data I've downloaded manually?
    I've tried another way to load the data: (X_train, y_train), (X_test, y_test) = imdb.load_data(path="imdb_full.pkl"), and that doesn't work either.
    I’m looking forward to your reply. Thanks again!
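
    One approach that may help with a manually downloaded copy (assuming a recent Keras, which caches datasets under ~/.keras/datasets/):

    # place the downloaded imdb.npz at ~/.keras/datasets/imdb.npz first;
    # load_data() then uses the cached file instead of downloading it again
    from tensorflow.keras.datasets import imdb
    (X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=5000, path='imdb.npz')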

  95. Avatar
    Sara September 21, 2017 at 6:19 pm #

    Hey Jason,

    This is an amazing post. I’m very new to nnets and now I have a question.
    I do not understand why you picked an LSTM/RNN for this sentiment analysis. To be clear, I don't understand where the sequential part is that allows us to use an RNN or LSTM.
    I'm wondering if you could explain this.
    I also want to know whether we can use an LSTM for entity extraction (NLP) and where a good dataset for training such a model can be found.

  96. Avatar
    max September 24, 2017 at 9:25 am #

    Hi Jason,

    Would feature scaling help in this case as well? As the reviews are tokenized, the values can go from low to high depending on the max number of words used.

  97. Avatar
    Ziqi September 26, 2017 at 6:05 am #

    Thanks for sharing both the model and the code, and also for your enthusiasm in answering all the questions. I built my model for sentence classification based on your CNN+LSTM one, and it is working well. I am relatively new to neural nets, and hence I am trying to learn how the different layers interact, specifically what the data shape is like. So, given the example above, suppose our dataset has 1000 movie reviews; using a batch size of 64, for each batch, please correct me:

    embedding layer: OUTPUT – 64 (sample size) x 500 (words) x 32 (features per word)
    conv1d: INPUT – as above; OUTPUT – for *each word*, 32 feature maps x (32/3) features, where 3 is kernel size.
    maxpooling1d: INPUT – as above; OUTPUT – for *each word*, and for *each feature map*, a 32/3/2 feature vector
    lstm: INPUT – this is where I struggle to understand… 64 is the sample size, 500 is the steps, so should be 64 x 500 x FEATURES, but is FEATURES=32/3/2, or 32 x (32/3/2) where the first 32 is the feature maps from conv1d?
    OUTPUT – for *each sample*, a 100-dim feature vector

    • Avatar
      Jason Brownlee September 26, 2017 at 2:58 pm #

      Sounds good.

      I would encourage you to try a suite of models on your problem to see what works best.

  98. Avatar
    Oshin Patwa September 28, 2017 at 7:36 pm #

    Hello, I read your blog and found it really helpful. However, could you please guide me to a code sample showing how exactly to encode my text for training? I have 20,000 reviews to train on.
    Or can I just use a hashing technique where every word is assigned an integer?
    So something like:
    I find the store good.
    I find good.

    is represented as:
    1 2 3 4 5
    1 2 5

    Representing every character with an integer would be exhaustive, I think!
    And then I can probably run the further steps for padding etc.?
    In this case, how will I predict new sentences having some new words?
    (Which makes me rethink whether I should assign every character to an integer.) If so, could you please show me a sample?

    • Avatar
      Jason Brownlee September 29, 2017 at 5:04 am #

      I recommend using an integer encoding for text.

      Further, you can count the occurrence of each word, and reduce the size of the vocabulary to only the most frequent words.

      I will have posts on how to do this on the blog soon.

  99. Avatar
    Trialcritic September 30, 2017 at 2:48 am #

    I tried to create a model for text summarization with seq2seq in Keras. It did not work well: the predictions show the top words by frequency. I tried blacklisting the top words in English ('a', 'an', 'the', etc.). The results were still not good. Some said back in 2016 that Keras was not good for text summarization. I wonder what is missing.

    • Avatar
      Jason Brownlee September 30, 2017 at 7:46 am #

      It is a hard problem that requires at least 1M examples and a large model.

      I have a tutorial on text summarization scheduled for around Christmas.

  100. Avatar
    ASAD October 2, 2017 at 3:32 pm #

    Hello sir, I am Asad. I want to know how to load a dataset that is in a .txt file and contains movie review text, and then how I can use it in a recurrent neural network.
    Please tell me the complete procedure. Remember, the data I have is stored locally on my computer.

  101. Avatar
    Gili October 5, 2017 at 2:43 am #

    Hi Jason,

    Thanks for the post. I just applied this approach to our use case, which is quite similar to movie review sentiment classification. The accuracy of the model is very good, ~94%.

    BUT

    I replaced all the word-frequency integers with random numbers, and to my surprise the accuracy is still very good (~94%). The labels are the same as well.

    Do you have any idea why?
    Thanks,

    • Avatar
      Jason Brownlee October 5, 2017 at 5:26 am #

      What do you mean exactly, I don’t follow what you changed?

  102. Avatar
    Argie October 6, 2017 at 3:51 am #

    Hey Jason,

    Amazing work, and so up to date.

    I would like to ask: do you think this sequence classification model could be used to predict a category for a really large sequence of numbers, instead of words?

  103. Avatar
    Emily October 6, 2017 at 2:10 pm #

    Hi Jason,

    I’m really puzzled. I seem to be the only one who can’t run the code you provided.
    I’m using python 2.7, Keras-2.0.8, Tensorflow-0.12. I got an error at the line
    model.add(LSTM(100)).

    TypeError: expected int32, got list containing Tensors of type’_Message’ instead.

    Can you please let me know which python, keras, tensorflow versions you’re using?

    Thank you!

    • Avatar
      Jason Brownlee October 7, 2017 at 5:47 am #

      It looks like you need to upgrade your version of TensorFlow to at least 1.3.

  104. Avatar
    nas October 9, 2017 at 10:33 pm #

    Hi jason,

    I would like to let you know that I have written my first ML code following your step-by-step ML project. I am using a nonlinear dataset (NSL-KDD). My dataset is in CSV format. I want to model and train my dataset using an LSTM.
    For the MNIST dataset I have this code:

    import tensorflow as tf
    from tensorflow.examples.tutorials.mnist import input_data
    from tensorflow.python.ops import rnn, rnn_cell
    mnist = input_data.read_data_sets("/tmp/data/", one_hot = True)

    hm_epochs = 3
    n_classes = 10
    batch_size = 128
    chunk_size = 28
    n_chunks = 28
    rnn_size = 128

    My question is: given my dataset, how can I define the chunk size, number of chunks, and RNN size as new variables for my dataset?
    As I am very new, I am really confused about how to model and train my dataset to find accuracy using an LSTM. I want to use the LSTM as a classifier. I don't know whether my questions to you are correct or not.
    I really appreciate your help.

    • Avatar
      Jason Brownlee October 10, 2017 at 7:45 am #

      Sorry, I don't have examples of working with TensorFlow directly. I cannot give you good advice.

  105. Avatar
    Nandini October 10, 2017 at 9:41 pm #

    Is it possible to write the same code with simple neural networks for text processing?
    Is Keras the best way to do text processing, or are there other libraries available for implementing neural networks for text processing?

  106. Avatar
    Vaibhav October 13, 2017 at 8:10 am #

    Hi Jason,

    This post and the comments have helped me immensely. Thanks! I have a question regarding this sentence:
    “The IMDB review data does have a one-dimensional spatial structure in the sequence of words in reviews and the CNN may be able to pick out invariant features for good and bad sentiment. This learned spatial features may then be learned as sequences by an LSTM layer.”

    I am not able to visualize how the CNN will process words. Also, could you please throw some light on the spatial structure of words?

    • Avatar
      Jason Brownlee October 13, 2017 at 2:54 pm #

      Words are ordered in a sentence or paragraph; this is the spatial structure.

  107. Avatar
    Nandini October 13, 2017 at 4:21 pm #

    For sequence-to-sequence mining, which neural network is better for good performance?

  108. Avatar
    nandini October 16, 2017 at 10:29 pm #

    I have read about sequence-to-sequence learning in neural networks. We need two LSTM layers for it: the first is for the input sequence and the second is for the output sequence. Here we have to send our input sequence vector in reverse order to the LSTM layer.

    My doubt is: will the LSTM layer take the input in reverse order, or do we have to give the input in reverse order?

  109. Avatar
    nandini October 17, 2017 at 4:15 pm #

    For a sequence-to-sequence regression model, should I give one output node, or the maximum variable length of the output vectors?
    Finally, we will get output vectors. How do we convert these output vectors to text? Is there any method available in Keras, like the embedding layer's string-to-vector conversion, for converting vectors back to integers?

    • Avatar
      Jason Brownlee October 18, 2017 at 5:29 am #

      To output text, you use a softmax to output the probability of each character or word, then take the argmax to get an integer and map the integer back to a value in your vocabulary.

      I will have examples of how to do this on the blog soon.
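      A minimal sketch of that last step, assuming a model whose output layer is a softmax over the vocabulary and a word-to-index mapping built during preprocessing (the variable names here are hypothetical):

      import numpy as np

      probs = model.predict(encoded_input)[0]            # one probability per word in the vocabulary
      predicted_index = int(np.argmax(probs))            # most likely word index
      index_to_word = {i: w for w, i in word_to_index.items()}
      predicted_word = index_to_word[predicted_index]    # map the integer back to a word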

  110. Avatar
    ambika October 17, 2017 at 7:05 pm #

    Problem statement: my model should generate a script file according to given instructions, using sequence-to-sequence modelling with Keras.

    Example input: take two integers from the console, add the two integers, print the sum of the two integers to the console.
    Output: a Python script file for the above input instructions.

    Please give me any point of contact for this problem, and how I can go further to solve it.

  111. Avatar
    nandini October 23, 2017 at 8:05 pm #

    Is it possible to use machine learning to translate natural language into a programming language, say, C, PHP, or Python? Please suggest any libraries available for this task.

  112. Avatar
    Tamir Bennatan November 8, 2017 at 9:51 am #

    Dr. Brownlee, I can’t tell you how much I value the content on your site! So accessible, to the point, and enriching. You’re changing the world. Thank you.

  113. Avatar
    glorsh66 November 15, 2017 at 2:38 am #

    Great tutorial!
    But how can I use this network to classify several different classes? For instance, 14 classes.

    Am I correct that I just need to change model.add(Dense(1, activation='sigmoid'))
    to model.add(Dense(13, activation='sigmoid'))?

    Or do I need to use Conv2D?

    And how can I transform my text data into a word embedding (such as IMDB uses)?

    • Avatar
      Jason Brownlee November 15, 2017 at 9:54 am #

      To change the example to work for a multi-class classification problem, change the output layer to have one neuron per class, and use the categorical_crossentropy loss function.
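      For example, with 14 classes the final layer and loss might look like this (a minimal sketch; the integer labels would also need to be one hot encoded, e.g. with to_categorical):

      from tensorflow.keras.utils import to_categorical

      y_train_cat = to_categorical(y_train, num_classes=14)  # integer labels -> one hot vectors

      model.add(Dense(14, activation='softmax'))              # one neuron per class
      model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])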

      • Avatar
        glorsh66 November 19, 2017 at 7:43 pm #

        Thanks for your great example!

        I got into some trouble with overfitting my model.
        For training I am using text data in Russian (the language essentially doesn't matter, because the text contains a lot of specialized professional terms, so sadly using an existing word2vec model won't be an option).

        I have these training-data parameters: maximum length of an article – 969 words; size of vocabulary – 53,886; number of labels – 12 (sadly they are distributed quite unevenly; for instance, the first label has around 5000 examples while the second has only 1500).

        Size of the training set – only 9,876 entries. That is the biggest problem, because sadly I can't increase the size of the training set by any means (the only way out is to wait another year, but even that would only double the amount of training data, and even double is not enough).

        Here is my code:

        x, x_test, y, y_test = train_test_split(x_, y_, test_size=0.1)
        x_train, x_dev, y_train, y_dev = train_test_split(x, y, test_size=0.1)

        embedding_vecor_length = 100

        model = Sequential()
        model.add(Embedding(top_words, embedding_vecor_length, input_length=max_review_length))
        model.add(Conv1D(filters=32, kernel_size=3, padding='same', activation='relu'))
        model.add(MaxPooling1D(pool_size=2))
        model.add(keras.layers.Dropout(0.3))
        model.add(Conv1D(filters=32, kernel_size=4, padding='same', activation='relu'))
        model.add(MaxPooling1D(pool_size=2))
        model.add(keras.layers.Dropout(0.3))
        model.add(Conv1D(filters=32, kernel_size=5, padding='same', activation='relu'))
        model.add(MaxPooling1D(pool_size=2))
        model.add(keras.layers.Dropout(0.3))
        model.add(Conv1D(filters=32, kernel_size=7, padding='same', activation='relu'))
        model.add(MaxPooling1D(pool_size=2))
        model.add(keras.layers.Dropout(0.3))
        model.add(Conv1D(filters=32, kernel_size=9, padding='same', activation='relu'))
        model.add(MaxPooling1D(pool_size=2))
        model.add(keras.layers.Dropout(0.3))
        model.add(Conv1D(filters=32, kernel_size=12, padding='same', activation='relu'))
        model.add(MaxPooling1D(pool_size=2))
        model.add(keras.layers.Dropout(0.3))
        model.add(Conv1D(filters=32, kernel_size=15, padding='same', activation='relu'))
        model.add(MaxPooling1D(pool_size=2))
        model.add(keras.layers.Dropout(0.3))
        model.add(LSTM(200, dropout=0.3, recurrent_dropout=0.3))
        model.add(Dense(labels_count, activation='softmax'))
        model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

        print(model.summary())

        model.fit(x_train, y_train, epochs=25, batch_size=30)
        scores = model.evaluate(x_, y_)
        I tried different parameters, and it gets really high accuracy in training (up to 98%), but it performs badly on the test set. The maximum I managed to achieve was around 74%; the usual result is around 64%. The best result was achieved with a small embedding_vecor_length and a small batch_size.

        I know that my test set is only 10 percent of the training set, and the overall dataset size is the biggest problem, but I want to find a way around it.

        So my questions are: 1) Is this a correctly built model for text classification purposes? (It works.) Do I need to use simultaneous convolutions and merge the results instead? I just don't get how the text information doesn't get lost in the process of convolution with different filter sizes (as in my example). Can you explain how convolution works with text data? There are mainly articles about image recognition.

        2) I obviously have a problem with overfitting my model. How can I make the performance better? I have already added Dropout layers. What can I do next?

        3) Maybe I need something different? I mean a pure RNN without convolution?

  114. Avatar
    Alex November 21, 2017 at 6:10 am #

    How would you do sequence classification if there were no words involved? For example, I want to classify a sequence that looks like [0, 0, 0.4, 0.5, 0.9, 0, 0.4] either to be a 0 or a 1, but I don’t know what format to get my data in to feed into an LSTM.

  115. Avatar
    Thabet November 21, 2017 at 9:27 am #

    Hi,

    What if we need to classify a sequence of numbers? Is this example applicable, and do I need the embedding layer? Can you refer me to an example on the blog or elsewhere so I can understand more? Thanks
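    For a purely numeric sequence, the embedding layer is typically dropped and the input is reshaped to [samples, time steps, features]; a minimal sketch under that assumption, with made-up data:

    import numpy as np
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import LSTM, Dense

    X = np.random.rand(100, 7, 1)               # 100 sequences, 7 time steps, 1 feature each
    y = np.random.randint(0, 2, size=(100,))    # a 0/1 label per sequence

    model = Sequential()
    model.add(LSTM(32, input_shape=(7, 1)))     # no Embedding layer for real-valued input
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    model.fit(X, y, epochs=3, batch_size=16, verbose=0)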

  116. Avatar
    Kasun Karunarathna November 21, 2017 at 4:57 pm #

    Hi.

    Nice tutorial, buddy. Can you please show how to use this LSTM network on a binary classification problem (like your neural networks tutorial on the Pima Indians diabetes dataset)?

    Please can you help me?

  117. Avatar
    pirate_shady November 25, 2017 at 7:53 am #

    Hi,

    I tried sequence classification, but I am not able to add an LSTM layer on top of the embedding layer.
    Did you face a similar issue?
    Here is the problem that I am facing : https://stackoverflow.com/questions/47464256/unable-to-add-lstm-layer-on-top-of-embedded-layer-on-gpu-keras-with-tensorflow

  118. Avatar
    Michael December 4, 2017 at 7:05 pm #

    Hi Jason,

    Thanks for the tutorial. Can you clarify, however, when you say:
    “We can see that we achieve similar results to the first example although with less weights and faster training time.”

    When you say fewer weights, what are you referring to exactly? Because when you run model.summary(), the model with the convolution layer has 216k parameters vs. 213k parameters in the original model, so technically there are more parameters to train.

    Do you mean that with the convolution + pooling layers the input into the LSTM layer comes from 250 positions vs. 500 in the original model? I'm guessing the LSTM layer is harder to train, which leads to the reduced fitting time?

    Thanks

  119. Avatar
    saleh December 15, 2017 at 5:14 am #

    Hi
    I tried text classification. I have datasets of tweets, and I have to train a model to determine whether the writer was happy or sad. I used your "Simple LSTM for Sequence Classification" code, but the thing is, I want to know what I should replace the words with before using your code.
    Previously I used sequences = tokenizer.texts_to_sequences(tweets_dict["train"]) to convert the text to vectors, and after that I used your code. Is that correct?

  121. Avatar
    Zuratex Complex December 19, 2017 at 7:28 pm #

    Do you mind if I quote a few of your posts as long as I provide
    credit and sources back to your website? My blog site is in the exact same area of interest as yours and my users would really benefit
    from a lot of the information you provide here.
    Please let me know if this is okay with you. Many thanks!

    • Avatar
      Jason Brownlee December 20, 2017 at 5:42 am #

      Sure, as long as you do not copy posts verbatim (e.g. just small quotes) and you credit the source clearly.

  122. Avatar
    Aayush Sinha December 21, 2017 at 9:41 pm #

    Very nice article. Can you tell me how to make a single prediction? Like, for a given text, we have to make a prediction.

    e.g. "Very nice movie" as a single input should give a "positive" output.

  123. Avatar
    Eduardo Andrade December 23, 2017 at 5:21 am #

    Hi Jason,

    In my problem I have made a one-hot encoding with a vector size of 256 for each sample (10,000 samples). Is the embedding layer necessary? What I have done as the first layer:

    model.add(LSTM(256, input_shape=(10000, 256), activation='relu'))

    You did model.add(LSTM(100)) too. Does it have any relation to the embedding_vecor_length? Does it have to be greater than embedding_vecor_length = 32? I am using 256 but without any real rationale. Thank you.

    • Avatar
      Jason Brownlee December 24, 2017 at 4:49 am #

      Perhaps try your model with and without the embedding to see how it impacts model skill.

  124. Avatar
    chanchal suman December 26, 2017 at 6:07 pm #

    Thank you, sir, for providing this very nice tutorial. I am working on sequence classification. My dataset contains 41 features, each of them float, and Y has 5 classes.
    Q.1 Do I need an embedding?
    Q.2 I have normalized the data, so do I need top_words?
    Q.3 What could the embedding vector length be?
    Q.4 What could the maximum review length be?
    Q.5 All examples contain 41 features, so do I need padding?
    I am not very clear about the embedding layer. Your suggestions would be great for me.

  125. Avatar
    Aparup Khatua December 28, 2017 at 1:40 am #

    I have one small doubt. You are using the IMDB dataset. If I want to use a different dataset, how do I pre-process it to prepare the word-integer matrix and execute the following:

    # load the dataset but only keep the top n words, zero the rest
    top_words = 5000
    (X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=top_words)
    # truncate and pad input sequences
    max_review_length = 500
    X_train = sequence.pad_sequences(X_train, maxlen=max_review_length)
    X_test = sequence.pad_sequences(X_test, maxlen=max_review_length)

    My data (two columns in .csv format: tweet and CLASS/manual annotation) looks like this:

    president obama says the us needs to do more to help stop the ebola outbreak from becoming a global crisis actdont talk RISK
    i was upset and angry that thomasericduncan lied about his exposure to ebola and put us all at risk for catching this deadly disease RISK
    ebola is transmitted through blood and saliva so i better stop punching my haters so hard and smooching all these gorgeous b TRANSMISSION
    he got the best treatment availablebetter than liberia and i am still not convinced he didnt know he had ebolarace card again TREATMENT
    obama and cdc said they will fight ebola in africa news today ebola deaths rise sharply when exactly will they fight it tcot TREATMENT
    fuck this is really tough dont know if i have the mind and guts to deal with death and ebola every day of work RISK
    something more serious needs to be done about this ebola shit the airport and the town he was in needs to be quarantined im sick of being PREVENTION
    if you have ebola symptoms or know someone who does please hug and kiss mr obama show him respect he appreciates tcot SYMPTOM
    u can only get it if u have frequent contact with bodily fluids of someone who has ebola and is showing symptoms TRANSMISSION
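    A minimal sketch of how such a CSV could be prepared for the snippet above, assuming pandas and two columns named 'tweet' and 'label' (the file path and column names are hypothetical):

    import pandas as pd
    from tensorflow.keras.preprocessing.text import Tokenizer
    from tensorflow.keras.preprocessing.sequence import pad_sequences

    df = pd.read_csv('tweets.csv')                        # hypothetical file with 'tweet' and 'label' columns

    tokenizer = Tokenizer(num_words=5000)                 # keep the 5000 most frequent words
    tokenizer.fit_on_texts(df['tweet'])
    X = pad_sequences(tokenizer.texts_to_sequences(df['tweet']), maxlen=500)

    classes = sorted(df['label'].unique())                # e.g. PREVENTION, RISK, SYMPTOM, ...
    y = df['label'].map({name: i for i, name in enumerate(classes)}).values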

  126. Avatar
    anupam January 13, 2018 at 4:15 pm #

    Hi Jason, I would like to know, after building a model using ML or DL, how to use that model to automatically classify an untagged corpus? Is there any example?

    Regards

  127. Avatar
    AbuBakr January 18, 2018 at 11:51 pm #

    Hi Jason,
    Thank you for your great effort.

    I am trying to use a Keras LSTM, but I don't know the data format.

    I have an FAQ list; the questions are considered samples and the answers are considered classes. So how can I use the LSTM classifier for this dataset?

    thanks in advance

  128. Avatar
    Ray January 19, 2018 at 1:21 am #

    Hi Jason,

    I have a classification problem that has two types of input.

    The first input is the sequence of online activities, which I can use the above-mentioned models to deal with.
    The second input is a vector of the time differences (minutes) between each activity and the last activity. In this case, I want my model to consider the time impact of the decision as well.

    My question is: what is the best way to merge the second input into the above models?
    What I have done is use an LSTM layer on the second input as well and merge the output with the above one. But it seems not right, because the second input is a continuous value rather than a discrete index.

    So what kind of layer should I use on these real-valued vectors?

    • Avatar
      Jason Brownlee January 19, 2018 at 6:34 am #

      Perhaps try a suite of models and see what works best.

      Perhaps a multi-headed model might be a good approach.

  129. Avatar
    Ray January 19, 2018 at 2:54 am #

    Hi Jason,

    How do I take two types of inputs in this model?
    One is a sequence of online activities; the second input is the time difference between each activity and the last activity.
    Should I use a multimodal layer to merge them?
    Should I process the second input with an LSTM layer as well? (It seems not right, as the elements of this vector are continuous values.)

    Cheers,

    R

      • Avatar
        Ray Li January 24, 2018 at 8:47 am #

        Thanks for your response. I understand how to merge two layers, but my question is, in which layer should I merge the online activities with their recency scores?

        For example, I can apply an LSTM layer to the online activities and then concatenate the output of the LSTM layer (the last hidden state output) with the sequence of their recency scores. But it doesn't make sense.

        Or I can multiply the embedding output by the sequence of their recency scores, then put the output into the LSTM layer. But I don't know whether this is right or not.

        Would you please give me some suggestions?

        Thanks,

        Ray

        • Avatar
          Jason Brownlee January 24, 2018 at 10:00 am #

          My intuitions might lead you down a false path. Perhaps try a few designs and see what works best for your specific problem.

          There is more art than science in this at the moment.

          • Avatar
            Ray January 27, 2018 at 1:48 am #

            Fair enough. But thanks a lot. I will use this as the excuse when I have to talk with my professor about progress 😀

  130. Avatar
    Ismael January 28, 2018 at 6:01 am #

    Hi,

    Can I implement an LSTM to generate labels from videos? For example, using youtube2text?

    thanks

  131. Avatar
    Shayan February 1, 2018 at 6:04 pm #

    Can I use this for lip reading? I'm thinking of classifying a sequence of frames as a particular word. Like, the entire video will be classified as hello, how, etc.

    Can you tell me how to go about it?

    • Avatar
      Jason Brownlee February 2, 2018 at 8:07 am #

      Sounds great. Sorry, I don’t have any examples of lip reading models.

  132. Avatar
    auro tripathy February 5, 2018 at 5:03 am #

    Hi Jason: Your teaching skills far exceed many ‘big’ teaching names.

    As an experiment, I added one line to the model in your “simple” LSTM example.

    model.layers[0].trainable = True # to train (back-prop) thru the embedding layer

    While the trainable parameter count went up significantly (from 53,301 to 1,660,501), the accuracy did not change.

    Would like your thoughts on the experiment.

    • Avatar
      Jason Brownlee February 5, 2018 at 7:48 am #

      The layer is trainable by default. The assignment should have had no effect. I’m surprised.

  133. Avatar
    Clock ZHONG February 9, 2018 at 2:21 am #

    Jason,
    Thanks for your excellent explanation.
    I've made some modifications to your code in order to get higher accuracy on the test data; finally, I could get 88.60% accuracy on the test dataset.
    My question is, besides what I've done changing those hyperparameters (just like a blind man touching an elephant), what else could we do to improve the prediction accuracy on the test data? Or how do we conquer the overfitting to get higher prediction accuracy on the test data? I found it's very easy to get higher prediction accuracy on the training data, but it's astonishingly hard to make the same happen on the test dataset (or validation dataset). The code I modified is as follows, if anyone else needs it as a reference:

    Thanks!

    Clock ZHONG

      • Avatar
        Clock ZHONG February 10, 2018 at 4:05 am #

        Thanks, Jason. I already carefully read that article you wrote half a year ago. It's also excellent, but I still feel we have no clear guide on how to improve the prediction accuracy on the test dataset.
        We always say:
        1. More training and testing data could give better performance, but not always.
        2. More layers in the neural network could give better performance, but still not always.
        3. Fine-tuning hyperparameters could give better performance. Yes, it can, but leaving aside the time consumption, this kind of work usually improves performance only very little (in my experience).
        4. Try other neural network architectures. Yes, sometimes this works, but soon we'll hit the upper limit again and face the same problem at once: how do we improve it then?
        Conquering overfitting is a really interesting but difficult problem in neural networks. I feel we could find better working ways to fix it in the future.
        I still appreciate your articles and replies. Have a happy weekend.

        Thanks

        Clock ZHONG

        • Avatar
          Jason Brownlee February 10, 2018 at 9:00 am #

          Yes, it is hard and empirical. That is the nature of the job.

          There are no clear answers and no one can tell you how to get the best result for a given dataset. You must discover it.

  134. Avatar
    Shabnam February 9, 2018 at 5:41 pm #

    Thanks a lot, Jason, for your great post. I have difficulty understanding how an LSTM can remember long-term dependencies. Or maybe I misunderstood the meaning of "remembering dependencies". Does it remember different parts within a specific training sample, or across different training samples?

    For example, if we have 100 training samples, does it learn from the 81st sample by remembering the previous training samples?

    Thanks a lot for your time and help in advance,

  135. Avatar
    Morty February 25, 2018 at 7:25 pm #

    Jason:
    Great article! It helps me a lot.
    However, I don't understand why dropout is considered to play a positive role when it reduces the accuracy rate.

    • Avatar
      Jason Brownlee February 26, 2018 at 6:04 am #

      It can help in general; in this post we are demonstrating how to implement it.

  136. Avatar
    vishnu February 27, 2018 at 3:03 am #

    hello,
    Thanks for the article. Could you provide an idea of how to apply an LSTM to handwritten image recognition? I have a dataset of handwritten alphabets as images of size 50*50.
    It would also be helpful to know how an LSTM helps with handwritten text recognition.
    Thank you,

  137. Avatar
    Soumaya February 28, 2018 at 4:34 pm #

    Thank you for this great work! Can we apply it to the French language?

  138. Avatar
    Johannes March 15, 2018 at 10:28 am #

    Hi,
    great article. I have a rather fundamental question. As I understand it, each sample here is a sequence of length "max_review_length". However, if I have a one-dimensional sequence, each sample is part of the sequence. My question is basically how to tell the algorithm along which dimension the sequence runs.

    Here, we feed in samples which are not part of the sequence themselves, but they contain the sequence. But in other use cases it seems like we feed in samples in a sequence, and the samples themselves form the sequence. And we can even feed in a sequence of multiple dimensions, like multiple parallel time series, which is only a sequence in the first dimension.

    I am a bit confused about this; in my mind the algorithm should only recognize the sequence along one dimension. It would be great if you could clarify.

    Thanks

    • Avatar
      Jason Brownlee March 15, 2018 at 2:48 pm #

      Not sure I follow.

      Perhaps this post will make inputs to the LSTM more clear:
      https://machinelearningmastery.mystagingwebsite.com/reshape-input-data-long-short-term-memory-networks-keras/

      • Avatar
        Johannes March 15, 2018 at 7:32 pm #

        Ok, I will try to clarify. Say we have a sequence of 5 values. We can pass the sequence in one by one, shape (5,1,1), or all 5 points in one go, (1,5,1), as a vector of length 5. However, are both of these considered a sequence?

        In my mind, the first one is a sequence of 5, while the second is 5 parallel sequences of length 1. This is relevant because in the example of sentiment, we have N samples of length "max_length", i.e. shape (N, max_length, 1). Or maybe (N, max_length, embedding_dim) if we use embeddings.

        If the sequence is in the first dimension, i.e. that of N, then an LSTM doesn't make sense because there should be no sequential relationship between different reviews.

        Thanks

        • Avatar
          Jason Brownlee March 16, 2018 at 6:13 am #

          No, the first is 5 sequences; the second is 1 sequence. Regardless, LSTMs process only one time step of data as input at a time.

          One batch is comprised of 1 or more sequences (samples, first dimension).

          Weight updates occur at the end of each batch, at which time internal state is cleared. This means there is knowledge across sequences, or there can be if that is desired.
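          A small NumPy illustration of the two interpretations, with made-up values:

          import numpy as np

          data = np.array([0.1, 0.2, 0.3, 0.4, 0.5])

          five_sequences = data.reshape(5, 1, 1)   # 5 samples, each a sequence of 1 time step
          one_sequence = data.reshape(1, 5, 1)     # 1 sample with 5 time steps and 1 feature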

          • Avatar
            Johannes March 16, 2018 at 7:24 am #

            Ok, I get it. Thanks for clarifying. Keep up the good work.

          • Avatar
            Jason Brownlee March 16, 2018 at 2:22 pm #

            No problem.

  139. Avatar
    Case March 15, 2018 at 2:43 pm #

    Hi Jason,

    I just start learning ML and trying some sample projects on keras. This post is a really good example to follow.

    I have a question about the classification problem. Right now, I am trying a two-class sequence classification problem. I followed this tutorial to build a model with binary cross entropy as the loss function. Then I changed the output layer to have 2 units, changed the loss function to categorical cross entropy, and changed y_train to a one-hot encoding. I expected these two methods to give me the same accuracy, but the categorical one actually seems to be more accurate. Do you have any idea why this happens? From my understanding, binary cross entropy is the same as 2-class categorical cross entropy, so these two methods should give me the same result.

    Another problem. I read another post on your website and changed the input layer to an LSTM. Then I truncated the training data. I use the full training data for validation. The truncated training data gives me a higher accuracy when validating than the model using the full training data. I use the binary cross entropy method here. This is not what I expected. I am also wondering how to decide on the type of the input layer.

    I really appreciate it if you could spend any time answering my question.

    • Avatar
      Jason Brownlee March 15, 2018 at 2:53 pm #

      It might allow the model to be more expressive (e.g. more weights in the calculation of the output).

      Not sure I understand the second question, perhaps you can give a very short example?

  140. Avatar
    Sardarkhan March 19, 2018 at 9:25 pm #

    Would this model be good for predicting whether a user has performed a given activity or not? I want to develop a model that predicts whether the user has performed an activity. I want to train the model on a user activity like jumping and test whether the user is jumping or not. Can this model help me, or do you have any code for this? Thanks, seeking your help. Regards, Sardar Khan.

    • Avatar
      Jason Brownlee March 20, 2018 at 6:18 am #

      Perhaps try it and see.

      • Avatar
        Sardar March 20, 2018 at 3:36 pm #

        Can you give me an example of this.

        • Avatar
          Jason Brownlee March 21, 2018 at 6:30 am #

          Sorry, I do not have a worked example of your problem.

  141. Avatar
    Ashwin March 23, 2018 at 2:23 pm #

    I am not able to clearly understand how exactly binary classification is happening here. The following are the questions that I am trying to figure out:

    For classification, is the final output from the final word in the LSTM being given to the single-neuron dense layer? If so, in another one of your posts related to "Text generation using LSTM" you seem to create an output dense layer with the number of neurons equal to the number of words in the vocabulary. But in the case of text generation you need the output such that a given memory unit predicts the next appropriate word. So how exactly is a dense layer connected to the LSTM layer, and how exactly is it working (since the LSTM layer seems to give only the final output of the final word)? Please help me with both these questions.

    • Avatar
      Ankita March 23, 2018 at 4:41 pm #

      Yes, Jason, this is a question that even I am troubled by. Can you please explain how the dense layer is connected with the LSTM layer in these two different situations ("sequence classification" and "text generation")?

      Thank you in advance

      Ankita

    • Avatar
      Jason Brownlee March 24, 2018 at 6:20 am #

      This example is classifying sequences of words as a sentiment good/bad.

      It is different from generating text (outputting a sequence of words).

      Does that help?

      • Avatar
        Ashwin March 25, 2018 at 12:03 am #

        Thank you Jason for your reply.

        But can you explain how exactly the connection between the LSTM layer and the dense layer differs in the two situations?

  142. Avatar
    ahmed April 6, 2018 at 4:29 am #

    Hi ..
    Nice work, but how could we enter a single review and get its prediction?

    • Avatar
      Jason Brownlee April 6, 2018 at 6:36 am #

      You must prepare the single input as you would any training data.

      Here’s some pseudocode that will help:
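      A minimal sketch of the idea, assuming the trained model from this post and the Keras IMDB word index (punctuation handling is left out):

      from tensorflow.keras.datasets import imdb
      from tensorflow.keras.preprocessing import sequence

      top_words = 5000
      max_review_length = 500
      word_index = imdb.get_word_index()   # word -> integer; indices are offset by 3 in the encoded data

      def encode_review(text):
          # 0 = padding, 1 = start-of-review, 2 = out-of-vocabulary in the Keras IMDB encoding
          encoded = [1]
          for word in text.lower().split():
              idx = word_index.get(word)
              encoded.append(idx + 3 if idx is not None and idx + 3 < top_words else 2)
          return encoded

      x = sequence.pad_sequences([encode_review("this is a very nice movie")], maxlen=max_review_length)
      print(model.predict(x))   # probability of a positive review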

  143. Avatar
    Adrian April 9, 2018 at 7:33 pm #

    Hi Jason,
    Thanks for the great post. I'm trying to implement a classifier like yours, but training on different data (logfiles) with another input shape. I have several lines of data, each with 9 features, each padded to MAX_FEATURE_LEN. This works fine for LSTM layers, but as soon as I add the Embedding or the Dense layer, I get an error like: Error when checking target: expected dense_1 to have 2 dimensions, but got array with shape (2000, 9, 256)

    My current model:

    features = 9
    MAX_FEATURE_LEN = 256
    model = Sequential()
    model.add(Embedding(file_len(TRAIN_PATH), features, input_length=MAX_FEATURE_LEN))
    model.add(Dropout(0.2))
    model.add(LSTM(100, return_sequences=True))
    model.add(Dropout(0.2))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

    I've tried several things and it works for LSTMs, so I don't get what distinguishes them from Dense layers input_shape-wise.

    Thank you in advance
    Adrian

  144. Avatar
    SchwarzLin April 19, 2018 at 12:05 pm #

    Great post and a very readable guide on LSTM-CNN using Keras.
    Recently I've been working on a binary classification task that takes real-number data from multiple sensors. I was inspired by your post and wonder whether I could arrange these data into an image-like matrix, in which each row is a vector from one sensor and the rows together hold data from the different sensors, and then use a model like LSTM, CNN, or LSTM+CNN from your post to classify the data.
    Do you think it is feasible for the model to learn? Thanks for your post again!

    • Avatar
      Jason Brownlee April 19, 2018 at 2:50 pm #

      Perhaps multiple 1-d CNNs would make more sense?

      I would recommend trying it rather than thinking too much about whether it is feasible, e.g. Keras is so easy that you could prototype it in a few minutes.

  145. Avatar
    Sachin April 23, 2018 at 2:46 am #

    Nice tutorial, Jason. It got me started with using LSTMs in Keras!
    Are there any rules of thumb for how many LSTM units to use for a classification problem? Does the length of the input sequence have any bearing on this number?

    • Avatar
      Jason Brownlee April 23, 2018 at 6:19 am #

      Good question.

      No good heuristics for configuring the number of units or layers. No relationship between input length and number of units in the hidden layer.

      I recommend careful and systematic experimentation to see what works best for your specific dataset.

  146. Avatar
    jeremy rutman April 29, 2018 at 7:21 pm #

    nb_words has been replaced by num_words

  147. Avatar
    jeremy rutman April 29, 2018 at 7:42 pm #

    also nb_epoch was replaced by epochs

  148. Avatar
    amul May 8, 2018 at 4:01 pm #

    Rookie query: can this model predict certain patterns of a sequence like x, x^2, x^3, sin(x), etc., and all combinations of these sequences?

    • Avatar
      Jason Brownlee May 9, 2018 at 6:10 am #

      A model could perhaps be trained to learn those sequences.

  149. Avatar
    Anam May 17, 2018 at 12:57 am #

    Dear Jason,
    Kindly can you help me with how to load my own dataset in Keras? I want to work on my own dataset. Thanks for your time.

  150. Avatar
    Anam May 18, 2018 at 1:14 am #

    Dear Jason,
    Keras contains predefined datasets like "imdb", "cifar", etc. I want to know whether I can include my own dataset in the Keras datasets.

    • Avatar
      Jason Brownlee May 18, 2018 at 6:26 am #

      You can load your data into numpy arrays and start using it with Keras.

      I have many examples of this on the blog for CSV data and text data.

  151. Avatar
    Yuheng May 19, 2018 at 6:50 am #

    I am a bit confused about how the LSTM is trained.
    What is the input to the LSTM at each time step: is it the whole review (a 500 x 32 matrix) or a word (a 32-dimensional vector)?
    What does an LSTM do in each epoch?
    And how are the 100 neurons in the LSTM used? Can we use only 1 neuron for the job since it is recurrent?

    Many thanks!

  152. Avatar
    chhavvi June 8, 2018 at 1:06 am #

    I have a dataset of 25,000 entries and I chose the top 2,500 and consider them as x_train, but I am confused with the embedding layer: what should the vocab-size argument be? If I choose 2,500, the remaining vocabulary is not included, and it gives the error

    InvalidArgumentError: indices[23,2433] = 80188 is not in [0, 80000)
    [[Node: embedding_59/embedding_lookup = GatherV2[Taxis=DT_INT32, Tindices=DT_INT32, Tparams=DT_FLOAT, _class=[“loc:@training_42/Adam/Assign_2″], _device=”/job:localhost/replica:0/task:0/device:CPU:0″](embedding_59/embeddings/read, embedding_59/Cast, training_42/Adam/gradients/embedding_59/embedding_lookup_grad/concat/axis)]]”

    and
    I cannot download the data with this code line:
    (X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=top_words)
    The error says name or service is not known.
    Please help ASAP.

    • Avatar
      Jason Brownlee June 8, 2018 at 6:16 am #

      Perhaps try posting your code and error to stackoverflow?

  153. Avatar
    Namrata June 8, 2018 at 3:54 pm #

    Hi Jason,

    I have 32 sentence blocks of 500 words each to pass to an LSTM after using a pretrained word2vec model to get embeddings of 400 words each. How can I achieve this so that the 32 features are learnt simultaneously?

    Thanks!
    Namrata

  154. Avatar
    Ehsan June 21, 2018 at 4:07 pm #

    Hi,

    In pad_sequences, dtype of output is int32 by default. Shouldn’t we change it to float32 if we are feeding in word vectors?

    Thanks

    • Avatar
      Jason Brownlee June 21, 2018 at 4:59 pm #

      No, feeding the integer mapping of words to the embedding is what we want, unless I misunderstand your question.

  155. Avatar
    mohamed tarek June 27, 2018 at 1:36 pm #

    After finishing the model testing, it gave 84% accuracy.
    However, when I tried to predict sentences using this code:
    text = 'It is a bad movie to watch'
    text = preprocessing.text.one_hot(text, 5000, lower=True, split=' ')
    text = [text]
    text = preprocessing.sequence.pad_sequences(text, 500)
    predictions = model.predict(text)
    print(predictions)

    the result was 0.90528411,
    and when I changed the sentence to 'It is really a good movie to watch'
    the prediction was 0.88954359.

    So is there a problem with the prediction code, or did I mess up the training?

  156. Avatar
    Matt July 4, 2018 at 3:15 pm #

    Hi Jason,
    Great work and splendid efforts! Really appreciate.

    I am interested in sequence classification to analyse malwares using rnn-lstm and Tensorflow. While there are a couple of sources, I always find your blogs very readable and easily comprehensible. Hence, request you to come up with a blog on ‘Sequence Classification using RNN-LSTM in Tensorflow.’

  157. Avatar
    Anam Habib July 10, 2018 at 3:45 pm #

    Dear Jason,
    I want to know, in deep learning (RNN/LSTM) models, what the difference between training and testing accuracy should be in order to have a good-fit model.

    And kindly tell me whether my model is a good fit or not.

    In [10]: model.fit(X_train, Y_train, epochs = 7, batch_size=batch_size, verbose = 2)
    In [11]: score,acc = model.evaluate(X_test, Y_test, verbose = 2, batch_size = batch_size)
    print("score: %.2f" % (score))
    print("acc: %.2f" % (acc))
    Epoch 1/7
    1109s – loss: 0.6918 – acc: 0.5056
    Epoch 2/7
    971s – loss: 0.6269 – acc: 0.7041
    Epoch 3/7
    693s – loss: 0.3696 – acc: 0.8639
    Epoch 4/7
    594s – loss: 0.1743 – acc: 0.9388
    Epoch 5/7
    534s – loss: 0.0699 – acc: 0.9800
    Epoch 6/7
    473s – loss: 0.0276 – acc: 0.9950
    Epoch 7/7
    472s – loss: 0.0148 – acc: 0.9963
    Out[10]:
    score: 0.62
    acc: 0.82

    Thanx for your help.

  158. Avatar
    Anam July 12, 2018 at 9:31 am #

    Dear Jason,
    I have a query: is this accuracy

    # Final evaluation of the model
    scores = model.evaluate(X_test, y_test, verbose=0)
    print("Accuracy: %.2f%%" % (scores[1]*100))

    the predicted accuracy of the model?

  159. Avatar
    Tejaswini July 26, 2018 at 10:13 am #

    HI Jason,

    Thanks for the tutorial it was really helpful.

    I have a question. For example, I am dealing with 500 messages in total. These messages are grouped into certain patterns: sometimes 6 messages make one pattern A, and sometimes the next 3 messages make one pattern B. I need to classify the patterns in those 500 messages.

    I trained a model with an LSTM given the input shape of the pattern containing the highest number of messages and padded the other patterns. I used a sliding-window approach and multi-label classification.

    While testing, when I give it a file with 150 messages, sometimes none of the patterns occur in a given window, but the LSTM model still classifies it as some known pattern. How do I overcome this issue?

    Thanks in advance.

    • Avatar
      Jason Brownlee July 26, 2018 at 2:23 pm #

      Perhaps you can have a “no pattern” output for those cases and train the model on them?

      • Avatar
        Tejaswini July 26, 2018 at 3:31 pm #

        Appreciate your reply, Jason. There are far more unknown patterns than known patterns if I have to train with an unknown class too. So the model faces a class-imbalance problem and always gives the unknown class as output.

        • Avatar
          Jason Brownlee July 27, 2018 at 5:46 am #

          We only train the model on data where we know the output.

  160. Avatar
    jorge July 28, 2018 at 2:05 pm #

    Dear Jason

    Thanks for the tutorial. Do you have another tutorial example that uses a convolutional LSTM on a time series dataset?

    Thanks

  161. Avatar
    Raja August 7, 2018 at 6:56 pm #

    Nice explanation!
    How do I construct a vocabulary in the same format as the IMDB dataset?
    Can you give some form of pseudocode?

    Thanks

  162. Avatar
    Anam August 19, 2018 at 8:19 pm #

    Dear Sir,
    I want to know what the parameters or factors of the CNN model are that allow the CNN+LSTM architecture to produce an accuracy of 86.36%. In other words, what are the factors affecting the accuracy of the model when using the CNN? Thanks.

  163. Avatar
    Rashid August 25, 2018 at 2:22 am #

    Dear Jason
    First, thanks a lot for your effort. I have just started learning different algorithms, and your post helps me a lot.
    I followed your LSTM post and tried y_pred = model.predict(X_test).
    But it gives me continuous values rather than 0 or 1. What do I need to change for binary output? Thanks

    I wish you a happy time.
    Best
    Rashid

  164. Avatar
    hugh August 25, 2018 at 6:11 am #

    Sorry, I'm new to neural networks, but can I use this to identify whether a sentence is lewd or not (my gut says yes)? I just need confirmation.

    • Avatar
      Jason Brownlee August 26, 2018 at 6:17 am #

      Start by collecting a dataset with sentences where you know their label.

  165. Avatar
    Pickler August 28, 2018 at 6:52 pm #

    Awesome content, thanks for sharing!

    Should this be used for, let’s say, classifying weather patterns of historical data (not for prediction; e.g. classified as ‘rain’ based on a labeled training set etc.) due to the sequential nature of such data, or would you think simpler support vector classification methods can still model sequential data to an extent?

    • Avatar
      Jason Brownlee August 29, 2018 at 8:07 am #

      I recommend testing a suite of methods in order to discover what works best for your specific problem.

  166. Avatar
    Nicola September 2, 2018 at 1:54 am #

    Hi Jason, thanks for the great article! I am not too sure I understand why we need the embedding layer. What if we simply feed the network with the original (padded) matrix:

    [0 0 0 … 12 33 421]
    [0 0 0 … 1 654 211]

    Why does the embedding help?

    • Avatar
      Jason Brownlee September 2, 2018 at 5:32 am #

      You can learn more about the benefit of embedding layers here:
      https://machinelearningmastery.mystagingwebsite.com/what-are-word-embeddings/

      • Avatar
        Nicola September 5, 2018 at 1:32 am #

        Thanks! Actually, it would make no sense to feed the original matrix, where, from what I understand, the order of the words matters. If we use another approach, such as CountVectorizer (from scikit-learn), can we avoid the embedding layer and start directly with the LSTM layer?

        • Avatar
          Jason Brownlee September 5, 2018 at 6:42 am #

          Sure, you can feed sequences of integers (tokenized words) directly to the LSTM.

  167. Avatar
    Ishay September 3, 2018 at 8:41 am #

    Hi Jason,

    I have learned a lot from the post.
    Regarding the LSTM layer, I am having a hard time understanding the dimensionality of the input vs. the output. I read a lot about the unit layers and how they work, and I understand the math, but at a higher level I am getting confused.

    The input to the LSTM is 500 by 32 after embedding. What exactly is the output of each LSTM unit, if we receive as output a vector of size n units (100)?
    I had the wrong impression earlier that each unit produces a vector of 32 in this case, and that you then end up with a matrix of 32 by 100.

    Can you please explain the LSTM dynamics that generate this output?

    • Avatar
      Jason Brownlee September 3, 2018 at 1:36 pm #

      An LSTM takes a sequence as input and produces a single value as output.

      If you have a layer of 100 nodes, each will receive the entire sequence as input and output one value, therefore a vector of length 100.

      Does that help?

      • Avatar
        Ishay September 3, 2018 at 3:45 pm #

        Hi,

        Thanks for the quick reply 🙂

        In many places I see that the nodes output a vector (usually called h(t)). This is what I don’t understand.

        • Avatar
          Jason Brownlee September 4, 2018 at 6:03 am #

          Yes, LSTMs output a vector with one value for each node at the end of the sequence. They refer to this as h, or the hidden state.
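          A quick way to see this is to compare the layer's output shape with and without return_sequences (a minimal sketch):

          from tensorflow.keras.models import Sequential
          from tensorflow.keras.layers import LSTM

          m1 = Sequential([LSTM(100, input_shape=(500, 32))])                         # h at the last time step only
          m2 = Sequential([LSTM(100, input_shape=(500, 32), return_sequences=True)])  # h at every time step
          print(m1.output_shape)  # (None, 100)
          print(m2.output_shape)  # (None, 500, 100)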

  168. Avatar
    Alberto September 17, 2018 at 8:28 pm #

    Hello,
    thanks again for your blog. I am wondering why you are using binary cross entropy. Isn't this dataset labelled with star reviews from 1 to 10?
    Do you have any post on a text classifier using categorical cross entropy?
    Thanks a lot.
    Kind regards

  169. Avatar
    Dan September 19, 2018 at 12:50 am #

    Hi Jason! Can you explain why you have not used your series-to-supervised function here? I thought that for all sequential problems you need to convert to that format, or is that only for time series, i.e., weather prediction?

    • Avatar
      Jason Brownlee September 19, 2018 at 6:21 am #

      This is a text classification problem where the data was already prepared.

  170. Avatar
    Guru October 10, 2018 at 5:15 pm #

    I was working on the same kind of dataset, where I converted my text data to vectors using bag-of-words. Can I use the same model?

  171. Avatar
    pablo October 31, 2018 at 5:04 am #

    Nice tutorial! Does the embedding preserve the order of the words?
    So the sentence "don't I like bikes" will not be the same as "I don't like bikes"?

    • Avatar
      Jason Brownlee October 31, 2018 at 6:31 am #

      The nature of the embedding can capture the similarity between “bike” and “bikes”, if your training data contains usage of both.

  172. Avatar
    LSTM_newbie November 7, 2018 at 7:01 am #

    nice post! I’m still a little confused about using metrics=[“accuracy”] though and wondering if you could help. Suppose we have an LSTM with prediction problem being single-label multi-class, several time steps, and each LSTM layer has return_sequences=True. Then the “predictions” are one class for each time step, i.e. each prediction is a list where len(list) = len(time_steps). In this case, what does “accuracy” mean? Is it the binary accuracy of getting *each* time step prediction *entirely* correct? For example, if the true label is [1, 3, 2, 1] and the predicted label is [1, 3, 2, 2] would the error be equal to 1 since the prediction is not exactly equal to the true label?

    • Avatar
      Jason Brownlee November 7, 2018 at 2:44 pm #

      It would be accuracy for each output timestep which might not be appropriate. You might want to manually evaluate the performance of the predictions.

  173. Avatar
    Jean-Baptiste November 7, 2018 at 9:41 pm #

    Hello Jason,
    Thank you for this tutorial.
    I am trying to use the trained network to predict the sentiment of one IMDB review.
    so I tried
    prediction = model.predict(x_test[0])
    I was expecting to get len(prediction) = 1
    but I get len(prediction) = 80
    80 is the maxlen used to pad the input sequence.

    So I am confused.
    I would greatly appreciate some insight on this.
    Thank you very much Jason

    • Avatar
      Jason Brownlee November 8, 2018 at 6:07 am #

      I think the shape of the one sample was not what the model expected. Perhaps reshape it?
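      For instance, keeping the sample dimension when selecting one review (a minimal sketch, assuming the variables from the comment above):

      import numpy as np

      one_review = x_test[0:1]                     # shape (1, 80): one sample of 80 time steps
      # or equivalently: one_review = np.expand_dims(x_test[0], axis=0)
      prediction = model.predict(one_review)
      print(len(prediction))                       # 1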

  174. Avatar
    Igor November 9, 2018 at 7:28 am #

    Hi
    I'm trying to obtain a pure CNN model, but it seems my lack of expertise is beating me. Using your blog I've constructed this model:

    top_words = 5000
    max_review_length = 500
    embedding_vecor_length = 32
    model = Sequential()
    model.add(Embedding(top_words, embedding_vecor_length, input_length=max_review_length))
    model.add(Conv1D(filters=32, kernel_size=3, padding='same', activation='relu'))
    model.add(MaxPooling1D(pool_size=2))
    model.add(Flatten())
    #model.add(Dense(32, activation='relu'))
    model.add(Dense(1, activation='softmax'))
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    model.fit(X_train, y_train, epochs=3, batch_size=64)

    But I’m getting 50% accuracy:
    25000/25000 [==============================] – 5s 190us/step – loss: 7.9712 – acc: 0.5000
    Accuracy: 50.00%

    Please direct me, and show my errors.
    With respect,
    Igor

    • Avatar
      Jason Brownlee November 9, 2018 at 2:00 pm #

      Perhaps the model requires tuning to the problem?

  175. Avatar
    Bahar November 15, 2018 at 7:11 am #

    Thanks. It was very helpful.

    Just a question:

    As far as I know, the validation set should differ from the test set.
    But in: model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=3, batch_size=64)
    it seems you used the test set as the validation set!

    Would you please explain?

    • Avatar
      Jason Brownlee November 15, 2018 at 11:27 am #

      Yes, I reused the test set to keep the example simple.
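      If a separate validation set is preferred, one option is to let Keras hold part of the training data back (a minimal sketch):

      # hold back 20% of the training data for validation instead of reusing the test set
      model.fit(X_train, y_train, validation_split=0.2, epochs=3, batch_size=64)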

  176. Avatar
    Sridhar Srinivasan November 19, 2018 at 2:42 am #

    Hi Jason,

    I have a dataset which has time (Unix timestamp) and a few device-level features to predict a specific status of the device. Can I use these features directly to make a prediction using an LSTM, or is there an alternative way to weight time?

  177. Avatar
    Shreyas November 28, 2018 at 9:28 pm #

    Hi Jason, can you please post a picture of the network ?