Develop Your First Neural Network in Python With Keras Step-By-Step

Keras is a powerful, easy-to-use Python library for developing and evaluating deep learning models.

It wraps the efficient numerical computation libraries Theano and TensorFlow and allows you to define and train neural network models in a few short lines of code.

In this post, you will discover how to create your first neural network model in Python using Keras.

Let’s get started.

  • Update Feb/2017: Updated prediction example so rounding works in Python 2 and Python 3.

Develop Your First Neural Network in Python With Keras Step-By-Step
Photo by Phil Whitehouse, some rights reserved.

Tutorial Overview

There is not a lot of code required, but we are going to step over it slowly so that you will know how to create your own models in the future.

The steps you are going to cover in this tutorial are as follows:

  1. Load Data.
  2. Define Model.
  3. Compile Model.
  4. Fit Model.
  5. Evaluate Model.
  6. Tie It All Together.

This tutorial has a few requirements:

  1. You have Python 2 or 3 installed and configured.
  2. You have SciPy (including NumPy) installed and configured (e.g. via Anaconda).
    1. If you need help, see the post Python Ecosystem for Machine Learning.
  3. You have Keras and a backend (Theano or TensorFlow) installed and configured.
    1.  If you need help, see the post Introduction to Python Deep Learning with Keras.

Create a new file called keras_first_network.py and type or copy-and-paste the code into the file as you go.


1. Load Data

Whenever we work with machine learning algorithms that use a stochastic process (e.g. random numbers), it is a good idea to set the random number seed.

This is so that you can run the same code again and again and get the same result. This is useful if you need to demonstrate a result, compare algorithms using the same source of randomness, or debug a part of your code.

You can initialize the random number generator with any seed you like, for example:
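For example, seeding NumPy's generator (the value 7 here is an arbitrary choice):

```python
import numpy

# fix the random seed for reproducibility; any constant value will do
seed = 7
numpy.random.seed(seed)
```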

Now we can load our data.

In this tutorial, we are going to use the Pima Indians onset of diabetes dataset. This is a standard machine learning dataset from the UCI Machine Learning repository. It describes patient medical record data for Pima Indians and whether they had an onset of diabetes within five years.

As such, it is a binary classification problem (onset of diabetes as 1 or not as 0). All of the input variables that describe each patient are numerical. This makes it easy to use directly with neural networks that expect numerical input and output values, and ideal for our first neural network in Keras.

Download the Pima Indians dataset from the UCI Machine Learning repository and place it in your local working directory, alongside your Python file. Save it with the file name:

You can now load the file directly using the NumPy function loadtxt(). There are eight input variables and one output variable (the last column). Once loaded we can split the dataset into input variables (X) and the output class variable (Y).
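The sketch below is self-contained: it first writes three illustrative rows in the dataset's nine-column format to a local file, then loads and splits them. With the real downloaded file in place, only the loadtxt() and slicing lines are needed (the file name used here is the conventional one and is an assumption):

```python
import numpy

# three illustrative rows in the dataset's format:
# 8 numerical input columns, then the 0/1 onset-of-diabetes class last
rows = (
    "6,148,72,35,0,33.6,0.627,50,1\n"
    "1,85,66,29,0,26.6,0.351,31,0\n"
    "8,183,64,0,0,23.3,0.672,32,1\n"
)
with open("pima-indians-diabetes.csv", "w") as f:
    f.write(rows)

# load the dataset, then split into input (X) and output (Y) variables
dataset = numpy.loadtxt("pima-indians-diabetes.csv", delimiter=",")
X = dataset[:, 0:8]
Y = dataset[:, 8]
```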

We have initialized our random number generator to ensure our results are reproducible and loaded our data. We are now ready to define our neural network model.

2. Define Model

Models in Keras are defined as a sequence of layers.

We create a Sequential model and add layers one at a time until we are happy with our network topology.

The first thing to get right is to ensure the input layer has the right number of inputs. This can be specified with the input_dim argument when creating the first layer, setting it to 8 for the 8 input variables.

How do we know the number of layers and their types?

This is a very hard question. There are heuristics we can use, and often the best network structure is found through a process of trial-and-error experimentation. Generally, you need a network large enough to capture the structure of the problem.

In this example, we will use a fully-connected network structure with three layers.

Fully connected layers are defined using the Dense class. We can specify the number of neurons in the layer as the first argument, the weight initialization method with the init argument, and the activation function with the activation argument.

In this case, we initialize the network weights to small random numbers generated from a uniform distribution ('uniform'), here between 0 and 0.05, because that is the default uniform weight initialization in Keras. Another traditional alternative is 'normal', for small random numbers generated from a Gaussian distribution.

We will use the rectifier ('relu') activation function on the first two layers and the sigmoid function in the output layer. It used to be the case that sigmoid and tanh activation functions were preferred for all layers; these days, better performance is achieved using the rectifier. We use a sigmoid on the output layer to ensure our network output is between 0 and 1, making it easy to map to a probability of class 1 or to snap to a hard classification of either class with a default threshold of 0.5.

We can piece it all together by adding each layer. The first layer has 12 neurons and expects 8 input variables. The second hidden layer has 8 neurons and finally, the output layer has 1 neuron to predict the class (onset of diabetes or not).

3. Compile Model

Now that the model is defined, we can compile it.

Compiling the model uses the efficient numerical libraries under the covers (the so-called backend) such as Theano or TensorFlow. The backend automatically chooses the best way to represent the network for training and making predictions to run on your hardware, such as CPU or GPU or even distributed.

When compiling, we must specify some additional properties required to train the network. Remember, training a network means finding the best set of weights to make predictions for this problem.

We must specify the loss function to use to evaluate a set of weights, the optimizer used to search through different weights for the network and any optional metrics we would like to collect and report during training.

In this case, we will use logarithmic loss, which for a binary classification problem is defined in Keras as “binary_crossentropy“. We will also use the efficient gradient descent algorithm “adam” for no other reason than that it is an efficient default. Learn more about the Adam optimization algorithm in the paper “Adam: A Method for Stochastic Optimization“.

Finally, because it is a classification problem, we will collect and report the classification accuracy as the metric.

4. Fit Model

We have defined our model and compiled it ready for efficient computation.

Now it is time to execute the model on some data.

We can train or fit our model on our loaded data by calling the fit() function on the model.

The training process will run for a fixed number of iterations through the dataset called epochs, which we must specify using the nb_epoch argument. We can also set the number of instances that are evaluated before a weight update is performed, called the batch size and set using the batch_size argument.

For this problem, we will run for a small number of iterations (150) and use a relatively small batch size of 10. Again, these can be chosen experimentally by trial and error.

This is where the work happens on your CPU or GPU.

5. Evaluate Model

We have trained our neural network on the entire dataset and we can evaluate the performance of the network on the same dataset.

This will only give us an idea of how well we have modeled the dataset (e.g. train accuracy), but no idea of how well the algorithm might perform on new data. We have done this for simplicity, but ideally you would separate your data into train and test datasets for training and evaluating your model.

You can evaluate your model on your training dataset using the evaluate() function on your model and pass it the same input and output used to train the model.

This will generate a prediction for each input and output pair and collect scores, including the average loss and any metrics you have configured, such as accuracy.

6. Tie It All Together

You have just seen how you can easily create your first neural network model in Keras.

Let’s tie it all together into a complete code example.

Running this example, you should see a message for each of the 150 epochs printing the loss and accuracy for each, followed by the final evaluation of the trained model on the training dataset.

It takes about 10 seconds to execute on my workstation running on the CPU with a Theano backend.

Note: If you try running this example in an IPython or Jupyter notebook you may get an error. The reason is the output progress bars during training. You can easily turn these off by setting verbose=0 in the call to model.fit().

7. Bonus: Make Predictions

The number one question I get asked is:

After I train my model, how can I use it to make predictions on new data?

Great question.

We can adapt the above example and use it to generate predictions on the training dataset, pretending it is a new dataset we have not seen before.

Making predictions is as easy as calling model.predict(). We are using a sigmoid activation function on the output layer, so the predictions will be in the range between 0 and 1. We can easily convert them into a crisp binary prediction for this classification task by rounding them.

The complete example that makes predictions for each record in the training data is listed below.

Running this modified example now prints the predictions for each input pattern. We could use these predictions directly in our application if needed.

Summary

In this post, you discovered how to create your first neural network model using the powerful Keras Python library for deep learning.

Specifically, you learned the five key steps in using Keras to create a neural network or deep learning model, step-by-step including:

  1. How to load data.
  2. How to define a neural network in Keras.
  3. How to compile a Keras model using the efficient numerical backend.
  4. How to train a model on data.
  5. How to evaluate a model on data.

Do you have any questions about Keras or about this tutorial?
Ask your question in the comments and I will do my best to answer.

Related Tutorials

Are you looking for some more Deep Learning tutorials with Python and Keras?

Take a look at some of these:


154 Responses to Develop Your First Neural Network in Python With Keras Step-By-Step

  1. Saurav May 27, 2016 at 11:08 pm #

    The input layer doesn’t have any activation function, but still activation=”relu” is mentioned in the first layer of the model. Why?

    • Jason Brownlee May 28, 2016 at 6:32 am #

      Hi Saurav,

      The first layer in the network here is technically a hidden layer, hence it has an activation function.

      • sam Johnson December 21, 2016 at 2:44 am #

        Why have you made it a hidden layer though? the input layer is not usually represented as a hidden layer?

        • Jason Brownlee December 21, 2016 at 8:41 am #

          Hi sam,

          Note this line:

          model.add(Dense(12, input_dim=8, init='uniform', activation='relu'))

          It does a few things.

          • It defines the input layer as having 8 inputs.
          • It defines a hidden layer with 12 neurons, connected to the input layer that use relu activation function.
          • It initializes all weights using a sample of uniform random numbers.

          Does that help?

  2. Geoff May 29, 2016 at 6:18 am #

    Can you explain how to implement weight regularization into the layers?

  3. KWC June 14, 2016 at 12:08 pm #

    Import statements if others need them:

    from keras.models import Sequential
    from keras.layers import Dense, Activation

    • Jason Brownlee June 15, 2016 at 5:49 am #

      Thanks.

      I had them in Part 6, but I have also added them to Part 1.

  4. Aakash Nain June 29, 2016 at 6:00 pm #

    If there are 8 inputs for the first layer then why we have taken them as ’12’ in the following line :

    model.add(Dense(12, input_dim=8, init='uniform', activation='relu'))

    • Jason Brownlee June 30, 2016 at 6:47 am #

      Hi Aakash.

      The input layer is defined by the input_dim parameter, here set to 8.

      The first hidden layer has 12 neurons.

  5. Joshua July 2, 2016 at 12:04 am #

    I ran your program and i have an error:
    ValueError: could not convert string to float:
    what could be the reason for this, and how may I solve it.
    thanks.
    great post by the way.

    • Jason Brownlee July 2, 2016 at 6:20 am #

      It might be a copy-paste error. Perhaps try to copy and run the whole example listed in section 6?

  6. cheikh brahim July 5, 2016 at 7:40 pm #

    thank you for your simple and useful example.

  7. Nikhil Thakur July 6, 2016 at 6:39 pm #

    Hello Sir, I am trying to use Keras for NLP , specifically sentence classification. I have given the model building part below. It’s taking quite a lot time to execute. I am using Pycharm IDE.

    batch_size = 32
    nb_filter = 250
    filter_length = 3
    nb_epoch = 2
    pool_length = 2
    output_dim = 5
    hidden_dims = 250

    # Build the model

    model1 = Sequential()

    model1.add(Convolution1D(nb_filter, filter_length, activation='relu', border_mode='valid',
        input_shape=(len(embb_weights), dim), weights=[embb_weights]))

    model1.add(Dense(hidden_dims))
    model1.add(Dropout(0.2))
    model1.add(Activation('relu'))

    model1.add(MaxPooling1D(pool_length=pool_length))

    model1.add(Dense(output_dim, activation='sigmoid'))

    sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)

    model1.compile(loss='mean_squared_error',
        optimizer=sgd,
        metrics=['accuracy'])

  8. Andre Norman July 15, 2016 at 10:40 am #

    Hi Jason, thanks for the awesome example. Given that the accuracy of this model is 79.56%. From here on, what steps would you take to improve the accuracy?

    Given my nascent understanding of Machine Learning, my initial approach would have been:

    Implement forward propagation, then compute the cost function, then implement back propagation, use gradient checking to evaluate my network (disable after use), then use gradient descent.

    However, this approach seems arduous compared to using Keras. Thanks for your response.

    • Jason Brownlee July 15, 2016 at 10:52 am #

      Hi Andre, indeed Keras makes working with neural nets so much easier. Fun even!

      We may be maxing out on this problem, but here is some general advice for lifting performance.
      – data prep – try lots of different views of the problem and see which is best at exposing the structure of the problem to the learning algorithm (data transforms, feature engineering, etc.)
      – algorithm selection – try lots of algorithms and see which one or few are best on the problem (try on all views)
      – algorithm tuning – tune well performing algorithms to get the most out of them (grid search or random search hyperparameter tuning)
      – ensembles – combine predictions from multiple algorithms (stacking, boosting, bagging, etc.)

      For neural nets, there are a lot of things to tune, I think there are big gains in trying different network topologies (layers and number of neurons per layer) in concert with training epochs and learning rate (bigger nets need more training).

      I hope that helps as a start.

      • Andre Norman July 18, 2016 at 7:19 am #

        Awesome! Thanks Jason =)

  9. Romilly Cocking July 21, 2016 at 12:31 am #

    Hi Jason, it’s a great example but if anyone runs it in an IPython/Jupyter notebook they are likely to encounter an I/O error when running the fit step. This is due to a known bug in IPython.

    The solution is to set verbose=0 like this

    # Fit the model
    model.fit(X, Y, nb_epoch=40, batch_size=10, verbose=0)

  10. Anirban July 23, 2016 at 10:20 pm #

    Great example. Have a query though. How do I now give a input and get the output (0 or 1). Can you pls give the cmd for that.
    Thanks

    • Jason Brownlee July 24, 2016 at 6:53 am #

      You can call model.predict() to get predictions and round on each value to snap to a binary value.

      For example, below is a complete example showing you how to round the predictions and print them to console.

  11. Anirban July 23, 2016 at 10:52 pm #

    I am not able to get to the last epoch. Getting error before that:
    Epoch 11/150
    390/768 [==============>……………]Traceback (most recent call last):.6921

    ValueError: I/O operation on closed file

    I could resolve this by varying the epoch and batch size.

    Now to predict a unknown value, i loaded a new dataset and used predict cmd as below :
    dataset_test = numpy.loadtxt("pima-indians-diabetes_test.csv", delimiter=",") (has only one row)

    X = dataset_test[:,0:8]
    model.predict(X)

    But I am getting error :
    X = dataset_test[:,0:8]

    IndexError: too many indices for array

    Can you help pls.

    Thanks

    • Jason Brownlee July 24, 2016 at 6:55 am #

      I see problems like this when you run from a notebook or from an IDE.

      Consider running examples from the console to ensure they work.

      Consider tuning off verbose output (verbose=0 in the call to fit()) to disable the progress bar.

  12. David Kluszczynski July 28, 2016 at 12:42 am #

    Hi Jason!
    Loved the tutorial! I have a question however.
    Is there a way to save the weights to a file after the model is trained for uses, such as kaggle?
    Thanks,
    David

  13. Alex Hopper July 29, 2016 at 5:45 am #

    Hey, Jason! Thank you for the awesome tutorial! I’ve use your tutorial to learn about CNN. I have one question for you… Supposing I want to use Keras to classicate images and I have 3 or more classes to classify, How could my algorithm know about this classes? You know, I have to code what is a cat, a dog and a horse. Is there any way to code this? I’ve tried it:

    target_names = ['class 0(Cats)', 'class 1(Dogs)', 'class 2(Horse)']
    print(classification_report(np.argmax(Y_test, axis=1), y_pred, target_names=target_names))

    But my results are not classifying correctly.

                    precision    recall  f1-score   support
    class 0(Cat)         0.00      0.00      0.00        17
    class 1(Dog)         0.00      0.00      0.00        14
    class 2(Horse)       0.99      1.00      0.99      2526

    avg / total          0.98      0.99      0.98      2557

  14. Anonymouse August 2, 2016 at 11:28 pm #

    This was really useful, thank you

    I’m using keras (with CNNs) for sentiment classification of documents and I’d like to improve the performance, but I’m completely at a loss when it comes to tuning the parameters in a non-arbitrary way. Could you maybe point me somewhere that will help me go about this in a more systematic fashion? There must be some heuristics or rules-of-thumb that could guide me.

    • Jason Brownlee August 3, 2016 at 8:09 am #

      I have a tutorial coming out soon (next week) that provide lots of examples of tuning the hyperparameters of a neural network in Keras, but limited to MLPs.

      For CNNs, I would advise tuning the number of repeating layers (conv + max pool), the number of filters in repeating block, and the number and size of dense layers at the predicting part of your network. Also consider using some fixed layers from pre-trained models as the start of your network (e.g. VGG) and try just training some input and output layers around it for your problem.

      I hope that helps as a start.

  15. Shopon August 14, 2016 at 5:04 pm #

    Hello Jason , My Accuracy is : 0.0104 , but yours is 0.7879 and my loss is : -9.5414 . Is there any problem with the dataset ? I downloaded the dataset from a different site .

    • Jason Brownlee August 15, 2016 at 12:36 pm #

      I think there might be something wrong with your implementation or your dataset. Your numbers are way out.

  16. mohamed August 15, 2016 at 9:30 am #

    after training, how i can use the trained model on new sample

    • Jason Brownlee August 15, 2016 at 12:36 pm #

      You can call model.predict()

      See an above comment for a specific code example.

  17. Omachi Okolo August 16, 2016 at 10:21 pm #

    Hi Jason,
    i’m a student conducting a research on how to use artificial neural network to predict the business viability of potential software projects.
    I intend to use python as a programming language. The application of ANN fascinates me but i’m new to machine learning and python. Can you help suggest how to go about this.
    Many thanks

  18. Agni August 17, 2016 at 6:23 am #

    Dear Jeson, this is a great tutorial for beginners. It will satisfy the need of many students who are looking for the initial help. But I have a question. Could you please light on a few things: i) how to test the trained model using test dataset (i.e., loading of test dataset and applied the model and suppose the test file name is test.csv) ii) print the accuracy obtained on test dataset iii) the o/p has more than 2 class (suppose 4-class classification problem).
    Please show the whole program to overcome any confusion.
    Thanks a lot.

  19. Doron Vetlzer August 17, 2016 at 9:29 am #

    I am trying to build a Neural Network with some recursive connections but not a full recursive layer, how do I do this in Keras?

    • Doron Vetlzer August 17, 2016 at 9:31 am #

      I could print a diagram of the network but what I want Basically is that each neuron in the current time frame to know only its own previous output and not the output of all the neurons in the output layer.

    • Jason Brownlee August 17, 2016 at 10:04 am #

      I don’t know off hand Doron.

      • Doron Veltzer August 23, 2016 at 2:28 am #

        Thanks for replying though, have a good day.

  20. sairam August 30, 2016 at 8:49 am #

    Hello Jason,

    This is a great tutorial . Thanks for sharing.

    I am having a dataset of 100 finger prints and i want to extract minutiae of 100 finger prints using python ( Keras). Can you please advise where to start? I am really confused.

  21. CM September 1, 2016 at 4:23 pm #

    Hi Jason,

    Thanks for the great article. But I had 1 query.

    Are there any inbuilt functions in keras that can give me the feature importance for the ANN model?

    If not, can you suggest a technique I can use to extract variable importance from the loss function? I am considering an approach similar to that used in RF which involves permuting the values of the selected variable and calculating the relative increase in loss.

    Regards,
    CM

  22. Kamal September 7, 2016 at 2:09 am #

    Dear Jason, I am new to Deep learning. Being a novice, I am asking you a technical question which may seem silly. My question is that- can we use features (for example length of the sentence etc.) of a sentence while classifying a sentence ( suppose the o/p are +ve sentence and -ve sentence) using deep neural network?

    • Jason Brownlee September 7, 2016 at 10:27 am #

      Great question Kamal, yes you can. I would encourage you to include all such features and see which give you a bump in performance.

  23. Saurabh September 11, 2016 at 12:42 pm #

    Hi, How would I use this on a dataset that has multiple outputs? For example a dataset with output A and B where A could be 0 or 1 and B could be 3 or 4 ?

  24. Tom_P September 17, 2016 at 1:47 pm #

    Hi Jason,
    The tutorial looks really good but unfortunately I keep getting an error when importing Dense from keras.layers, I get the error : AttributeError: module ‘theano’ has no attribute ‘gof’
    I have tried reinstalling Theano but it has not fixed the issue.

    Best wishes
    Tom

    • Jason Brownlee September 18, 2016 at 7:57 am #

      Hi Tom, sorry to hear that. I have not seen this problem before.

      Have you searched google? I can see a few posts and it might be related to your version of scipy or similar.

      Let me know how you go.

  25. shudhan September 21, 2016 at 5:54 pm #

    Hey Jason,

    Can you please make a tutorial on how to add additional train data into the already trained model? This will be helpful for the bigger data sets. I read that warm start is used for random forest. But not sure how to implement as algorithm. A generalised version of how to implement would be good. Thank You!

    • Jason Brownlee September 22, 2016 at 8:08 am #

      Great question Shudhan!

      Yes, you could save your weights, load them later into a new network topology and start training on new data again.

      I’ll work out an example in coming weeks, time permitting.

  26. Joanna September 22, 2016 at 1:09 am #

    Hi Jason,
    first of all congratulations for this amazing work that you have done!
    Here is my question:
    What about if my .csv file includes also both nominal and numerical attributes?
    Should I change my nominal values to numerical?

    Thank you in advance

  27. ATM October 2, 2016 at 5:47 am #

    A small bug:-
    Line 25 : rounded = [round(x) for x in predictions]

    should have numpy.round instead, for the code to run!
    Great tutorial, regardless. The best i’ve seen for intro to ANN in python. Thanks!

    • Jason Brownlee October 2, 2016 at 8:20 am #

      Perhaps it’s your version of Python or environment?

      In Python 2.7 the round() function is built-in.

      • AC January 14, 2017 at 2:11 am #

        If there is comment for python3, should be better.
        # use numpy.round instead, if using Python 3

  28. Ash October 9, 2016 at 1:36 am #

    This is simple to grasp! Great post! How can we perform dropout in keras?

  29. Homagni Saha October 14, 2016 at 4:15 am #

    Hello Jason,
    You are using model.predict in the end to predict the results. Is it possible to save the model somewhere in the harddisk and transfer it to another machine(turtlebot running on ROS for my instance) and then use the model directly on turtlebot to predict the results?
    Please tell me how
    Thanking you
    Homagni Saha

  30. Rimi October 16, 2016 at 8:21 pm #

    Hi Jason,
    I implemented you code to begin with. But I am getting an accuracy of 45.18% with the same parameters and everything.
    Cant figure out why.
    Thanks

    • Jason Brownlee October 17, 2016 at 10:29 am #

      There does sound like a problem there Rimi.

      Confirm the code and data match exactly.

  31. Ankit October 26, 2016 at 8:12 pm #

    Hi Jason,
    I am little confused with first layer parameters. You said that first layer has 12 neurons and expects 8 input variables.

    Why there is a difference between number of neurons, input_dim for first layer.

    Regards,
    Ankit

    • Jason Brownlee October 27, 2016 at 7:45 am #

      Hi Ankit,

      The problem has 8 input variables and the first hidden layer has 12 neurons. Inputs are the columns of data, these are fixed. The Hidden layers in general are whatever we design based on whatever capacity we think we need to represent the complexity of the problem. In this case, we have chosen 12 neurons for the first hidden layer.

      I hope that is clearer.

  32. Tom October 27, 2016 at 3:04 am #

    Hi,
    I have a data , IRIS like data but with more colmuns.
    I want to use MLP and DBN/CNNClassifier (or any other Deep Learning classificaiton algorithm) on my data to see how correctly it does classified into 6 groups.

    Previously using DEEP LEARNING FOR J, today first time see KERAS.
    does KERAS has examples (code examples) of DL Classification algorithms?

    Kindly,
    Tom

    • Jason Brownlee October 27, 2016 at 7:48 am #

      Yes Tom, the example in this post is an example of a neural network (deep learning) applied to a classification problem.

  33. Rumesa October 30, 2016 at 1:57 am #

    I have installed theano but it gives me the error of tensorflow.is it mendatory to install both packages? because tensorflow is not supported on wndows.the only way to get it on windows is to install virtual machine

    • Jason Brownlee October 30, 2016 at 8:57 am #

      Keras will work just fine with Theano.

      Just install Theano, and configure Keras to use the Theano backend.

      More information about configuring the Keras backend here:
      http://machinelearningmastery.com/introduction-python-deep-learning-library-keras/

      • Rumesa October 31, 2016 at 4:36 am #

        hey jason I have run your code but got the following error.Although I have aready installed theano backend.help me out.I just stuck.

        Using TensorFlow backend.
        Traceback (most recent call last):
        File “C:\Users\pc\Desktop\first.py”, line 2, in
        from keras.models import Sequential
        File “C:\Users\pc\Anaconda3\lib\site-packages\keras\__init__.py”, line 2, in
        from . import backend
        File “C:\Users\pc\Anaconda3\lib\site-packages\keras\backend\__init__.py”, line 64, in
        from .tensorflow_backend import *
        File “C:\Users\pc\Anaconda3\lib\site-packages\keras\backend\tensorflow_backend.py”, line 1, in
        import tensorflow as tf
        ImportError: No module named ‘tensorflow’
        >>>

        • Jason Brownlee October 31, 2016 at 5:34 am #

          Change the backend used by Keras from TensorFlow to Theano.

          You can do this either by using the command line switch or changing the Keras config file.

          See the link I posted in the previous post for instructions.

    • Maria January 6, 2017 at 1:05 pm #

      Hello Rumesa!
      Have you solved your problem? I have the same one. Everywhere is the same answer with keras.json file or envirinment variable but it doesn’t work. Can you tell me what have worked for you?

      • Jason Brownlee January 7, 2017 at 8:20 am #

        Interesting.

        Maybe there is an issue with the latest version and a tight coupling to tensorflow? I have not seen this myself.

        Perhaps it might be worth testing prior versions of Keras, such as 1.1.0?

        Try this:

  34. Alexon November 1, 2016 at 6:54 am #

    Hi Jason,

    First off, thanks so much for creating these resources, I have been keeping an eye on your newsletter for a while now, and I finally have the free time to start learning more about it myself, so your work has been really appreciated.

    My question is: How can I set/get the weights of each hidden node?

    I am planning to create several arrays randomized weights, then use a genetic algorithm to see which weight array performs the best and improve over generations. How would be the best way to go about this, and if I use a “relu” activation function, am I right in thinking these randomly generated weights should be between 0 and 0.05?

    Many thanks for your help 🙂
    Alexon

    • Jason Brownlee November 1, 2016 at 8:05 am #

      Thanks Alexon,

      You can get and set the weights from a network.

      You can learn more about how to do this in the context of saving the weights to file here:
      http://machinelearningmastery.com/save-load-keras-deep-learning-models/

      I hope that helps as a start, I’d love to hear how you go.
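      To sketch the genetic-algorithm idea: `model.get_weights()` returns a plain list of NumPy arrays, and `model.set_weights()` accepts such a list back, so a mutation step only needs to preserve the array shapes. A minimal sketch using stand-in arrays (the layer shapes here are hypothetical; with a real model you would start from `weights = model.get_weights()`):

```python
import numpy as np

rng = np.random.RandomState(7)

# Stand-in for model.get_weights(): a kernel for one hidden layer
# (8 inputs -> 12 units) followed by its bias vector, as Keras returns them.
weights = [rng.uniform(0.0, 0.05, size=(8, 12)), np.zeros(12)]

def mutate(weight_list, scale=0.01):
    """Return a mutated copy of a weight list, preserving every shape."""
    return [w + rng.normal(0.0, scale, size=w.shape) for w in weight_list]

child = mutate(weights)

# With a real model: model.set_weights(child), then evaluate its fitness
# with model.evaluate() and keep the best children for the next generation.
assert all(c.shape == w.shape for c, w in zip(child, weights))
```

      Small uniform weights in [0, 0.05] are a reasonable starting point with “relu”, though note that Keras’s own default initialization scales the range by the layer size.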

      • Alexon November 6, 2016 at 6:36 am #

        That’s great, thanks for pointing me in the right direction.
        I’d be happy to let you know how it goes, but it might take a while as this is very much a “when I can find the time” project between jobs 🙂

        Cheers!

  35. Arnaldo Gunzi November 2, 2016 at 10:17 pm #

    Nice introduction, thanks!

  36. Abbey November 14, 2016 at 11:05 pm #

    Good day

    I have a question: how can I represent a character as a vector that could be an input for a neural network, trained using an LSTM to predict the word meaning?

    For instance, I have “bf” to predict “boyfriend” or “best friend”, and similarly “2mor” to predict “tomorrow”. I need to encode all the input as characters represented as vectors, so that it can be trained with an RNN/LSTM to predict the output.

    Thank you.

    Kind Regards

    • Jason Brownlee November 15, 2016 at 7:54 am #

      Hi Abbey, You can map characters to integers to get integer vectors.

      • Abbey November 15, 2016 at 6:17 pm #

        Thank you Jason. If I map characters to integer values to get vectors, using the English alphabet, numbers and special characters,

        the question is: how will the LSTM predict the characters? Please explain in more detail for me.

        Regards

        • Jason Brownlee November 16, 2016 at 9:27 am #

          Hi Abbey,

          If your output values are also characters, you can map them onto integers, and reverse the mapping to convert the predictions back to text.
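          A minimal sketch of that mapping and its reverse in plain Python (the alphabet here is a hypothetical example; in practice you would build it from the characters actually present in your data):

```python
# Forward and reverse character/integer maps built from a known alphabet.
alphabet = "abcdefghijklmnopqrstuvwxyz0123456789 "
char_to_int = {c: i for i, c in enumerate(alphabet)}
int_to_char = {i: c for c, i in char_to_int.items()}

def encode(text):
    """Text -> integer vector, suitable as LSTM input."""
    return [char_to_int[c] for c in text]

def decode(ints):
    """Integer predictions -> text, by reversing the mapping."""
    return "".join(int_to_char[i] for i in ints)

encoded = encode("2mor")          # e.g. the informal input token
assert decode(encoded) == "2mor"  # round-trips back to text
```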

          • Abbey November 16, 2016 at 8:39 pm #

            The output values of the character encoding will be text

      • Abbey November 15, 2016 at 6:22 pm #

        Thank you, Jason. If I map characters to integer values to get a vector representation of the informal text, using the English alphabet, numbers and special characters,

        the question is: how will the LSTM predict the characters or words that have a meaning close to the input value? Please explain in more detail for me. I understand how RNNs/LSTMs work based on your tutorial example, but the logic of designing the processing is what I am stressed about.

        Regards

  37. Ammar November 27, 2016 at 10:35 am #

    hi Jason,
    i am trying to implement a one-dimensional CNN on my data, so i built my network.
    the issue is:

    def train_model(model, X_train, y_train, X_test, y_test):
        X_train = X_train.reshape(-1, 1, 41)
        X_test = X_test.reshape(-1, 1, 41)

        numpy.random.seed(seed)
        model.fit(X_train, y_train, validation_data=(X_test, y_test), nb_epoch=100, batch_size=64)
        # Final evaluation of the model
        scores = model.evaluate(X_test, y_test, verbose=0)
        print("Accuracy: %.2f%%" % (scores[1] * 100))

    this method above does not work and does not give me any error message.
    could you help me with this please?

    • Jason Brownlee November 28, 2016 at 8:40 am #

      Hi Ammar, I’m surprised that there is no error message.

      Perhaps run from the command line and add some print() statements to see exactly where it stops.

  38. KK November 28, 2016 at 6:55 pm #

    Hi Jason
    Great work. I have another doubt: how can we apply this to text mining? I have a CSV file containing review documents and labels. I want to classify the documents based on the text available. Can you do me this favor?

    • Jason Brownlee November 29, 2016 at 8:48 am #

      I would recommend converting the chars to ints and then using an Embedding layer.

  39. Alex M November 30, 2016 at 10:52 pm #

    Mr Jason, this is a great tutorial but I am stuck with some errors.

    First, I can’t load the dataset correctly; I tried to correct the error but can’t fix it. ( FileNotFoundError: [Errno 2] No such file or directory: ‘pima-indians-diabetes.csv’ )

    Second: while trying to evaluate the model it says (X is not defined). Maybe this is because loading failed.

    Thanks!

    • Jason Brownlee December 1, 2016 at 7:29 am #

      You need to download the file and place it in your current working directory Alex.

      Does that help?

  40. Alex M December 1, 2016 at 6:45 pm #

    Sir, it is now successful….
    Thanks!

  41. Bappaditya December 2, 2016 at 7:35 pm #

    Hi Jason,

    First of all, a special thanks to you for providing such a great tutorial. I am very new to machine learning and, truly speaking, I had no background in data science. The concept of ML overwhelmed me, and now I have a desire to become an expert in this field. I need your advice on starting from scratch. Also, I am a PhD student in Computer Engineering (computer hardware) and I want to apply ML as a tool for fault detection and testing of ICs. Can you provide me some references on this field?

  42. Alex M December 3, 2016 at 8:00 pm #

    Well, as usual in our daily coding life, errors happen. Now I have this error; how can I correct it? Thanks!

    ” —————————————————————————
    NoBackendError Traceback (most recent call last)
    in ()
    16 import librosa.display
    17 audio_path = (‘/Users/MA/Python Notebook/OK.mp3’)
    —> 18 y, sr = librosa.load(audio_path)

    C:\Users\MA\Anaconda3\lib\site-packages\librosa\core\audio.py in load(path, sr, mono, offset, duration, dtype)
    107
    108 y = []
    –> 109 with audioread.audio_open(os.path.realpath(path)) as input_file:
    110 sr_native = input_file.samplerate
    111 n_channels = input_file.channels

    C:\Users\MA\Anaconda3\lib\site-packages\audioread\__init__.py in audio_open(path)
    112
    113 # All backends failed!
    –> 114 raise NoBackendError()

    NoBackendError:

    That is the error I am getting just when trying to load a song into librosa…
    Thanks!! @Jason Brownlee

    • Jason Brownlee December 4, 2016 at 5:30 am #

      Sorry, this looks like an issue with your librosa library, not a machine learning issue. I can’t give you expert advice, sorry.

  43. Alex M December 4, 2016 at 10:30 pm #

    Thanks I have managed to correct the error…

    Happy Sunday to you all……

  44. Lei December 4, 2016 at 10:52 pm #

    Hi, Jason, thank you for your amazing examples.
    I run the same code on my laptop. But I did not get the same results. What could be the possible reasons?
    I am using windows 8.1 64bit+eclipse+anaconda 4.2+theano 0.9.4+CUDA7.5
    I got results like follows.

    … …
    Epoch 145/150
    768/768 [==============================] – 0s – loss: 0.4614 – acc: 0.7799
    Epoch 146/150
    768/768 [==============================] – 0s – loss: 0.4636 – acc: 0.7734
    Epoch 147/150
    768/768 [==============================] – 0s – loss: 0.4561 – acc: 0.7812
    Epoch 148/150
    768/768 [==============================] – 0s – loss: 0.4734 – acc: 0.7669
    Epoch 149/150
    768/768 [==============================] – 0s – loss: 0.4625 – acc: 0.7826
    Epoch 150/150
    768/768 [==============================] – 0s – loss: 0.4638 – acc: 0.7773

    acc: 79.69%

  45. Nanya December 10, 2016 at 2:55 pm #

    Hello Jason Brownlee, thanks for sharing!
    I’m new to deep learning, and I am wondering: can what you discussed here (Keras) be used to build a CNN in TensorFlow and train on some CSV files for classification? Maybe this is a stupid question, but I’m waiting for your reply. I’m working on my graduation project on word sense disambiguation with a CNN, and just can’t move on. Hoping for your help. Best wishes!

    • Jason Brownlee December 11, 2016 at 5:22 am #

      Sorry Nanya, I’m not sure I understand your question. Are you able to rephrase it?

  46. Anon December 16, 2016 at 12:51 am #

    I’ve just installed Anaconda with Keras and am using python 3.5.
    It seems there’s an error with the rounding using Py3 as opposed to Py2. I think it’s because of this change: https://github.com/numpy/numpy/issues/5700

    I removed the rounding and just used print(predictions) and it seemed to work outputting floats instead.

    Does this look correct?


    Epoch 150/150
    0s – loss: 0.4593 – acc: 0.7839
    [[ 0.79361773]
    [ 0.10443526]
    [ 0.90862554]
    …,
    [ 0.33652252]
    [ 0.63745886]
    [ 0.11704451]]
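    That output looks right for a sigmoid output layer. For reference, one way to get integer 0/1 labels that behaves the same under Python 2 and Python 3 is to cast the rounded value explicitly, which sidesteps the difference in what the built-in `round()` returns between versions (the sample values below are taken from the output above):

```python
import numpy as np

# Floats as returned by model.predict() on a sigmoid output layer.
predictions = np.array([[0.79361773], [0.10443526], [0.90862554]])

# int(round(...)) gives plain integers on both Python 2 and Python 3.
rounded = [int(round(float(p[0]))) for p in predictions]
print(rounded)  # [1, 0, 1]
```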

  47. Florin Claudiu Mihalache December 19, 2016 at 2:37 am #

    Hi Jason Brownlee
    I tried to modify your example for my problem (Letter Recognition, http://archive.ics.uci.edu/ml/datasets/Letter+Recognition).
    My dataset looks like http://archive.ics.uci.edu/ml/machine-learning-databases/letter-recognition/letter-recognition.data (T,2,8,3,5,1,8,13,0,6,6,10,8,0,8,0,8). I try to split the data into input and output like this:

    X = dataset[:,1:17]
    Y = dataset[:,0]

    but I get an error (something about strings not being recognized).
    I tried to replace each letter with its ASCII code (A became 65 and so on). The string error disappeared.
    The program runs now but the output looks like this:

    17445/20000 [=========================>….] – ETA: 0s – loss: -1219.4768 – acc: 0.0000e+00
    … …
    20000/20000 [==============================] – 1s – loss: -1219.8594 – acc: 0.0000e+00
    18880/20000 [===========================>..] – ETA: 0sacc: 0.00%

    I do not understand why. Can you please help me?

    • Anon December 26, 2016 at 6:44 am #

      What version of Python are you running?

  48. karishma sharma December 22, 2016 at 10:03 am #

    Hi Jason,

    Since the number of epochs is set to 150 and the batch size is 10, does the training algorithm pick 10 training examples at random in each iteration, given that we had only 768 total in X? Or does it sample randomly after it has finished covering all of them?

    Thanks

    • Jason Brownlee December 23, 2016 at 5:27 am #

      Good question,

      It iterates over the dataset 150 times and within one epoch it works through 10 rows at a time before doing an update to the weights. The patterns are shuffled before each epoch.

      I hope that helps.
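      As a quick sanity check of that arithmetic (768 rows, batch size 10, 150 epochs):

```python
import math

n_rows, batch_size, n_epochs = 768, 10, 150

# Weight updates within one epoch: one per batch; the final batch holds
# the 8 leftover rows, hence the ceiling.
updates_per_epoch = math.ceil(n_rows / batch_size)
total_updates = updates_per_epoch * n_epochs

print(updates_per_epoch, total_updates)  # 77 11550
```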

  49. Kaustuv January 9, 2017 at 4:57 am #

    Hi Jason
    Thanks a lot for this blog. It really helped me start learning deep learning, which had been in a planning state for the last few months. Your simple, enriching blog posts are awesome. No questions from my side before completing all the tutorials.
    One question regarding the availability of your book: how can I buy your books from India?

  50. Stephen Wilson January 15, 2017 at 4:00 pm #

    Hi Jason, firstly your work here is a fantastic resource and I am very thankful for the effort you put in.
    I am a slightly-better-than-beginner at python and an absolute novice at ML, I wonder if you could help me classify my problem and find an angle to work at it from.

    My data is thus:
    Column Names: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, Result
    Values: 4, 4, 6, 6, 3, 2, 5, 5, 0, 0, 0, 0, 0, 0, 0, 4

    I want to find the percentage chance of each Column Names category being the Result based off the configuration of all the values present from 1-15. Then if need be compare the configuration of Values with another row of values to find the same, Resulting in the total needed calculation as:

    Column Names: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, Result
    Values: 4, 4, 6, 6, 3, 2, 5, 5, 0, 0, 0, 0, 0, 0, 0, 4
    Values2: 7, 3, 5, 1, 4, 8, 6, 2, 9, 9, 9, 9, 9, 9, 9

    I apologize if my explanation is not clear, and appreciate any help you can give me thank you.

  51. Rohit January 16, 2017 at 10:37 pm #

    Thanks Jason for such a nice and concise example.

    Just wanted to ask if it is possible to save this model in a file and port it to may be an Android or iOS device? If so, what are the libraries available for the same?

    Thanks

    Rohit

  52. Hsiang January 18, 2017 at 3:35 pm #

    Hi, Jason

    Thank you for your blog! It is wonderful!

    I used tensorflow as backend, and implemented the procedures using Jupyter.
    I did “source activate tensorflow” -> “ipython notebook”.
    I can successfully use Keras and import tensorflow.

    However, it seems that such environment doesn’t support pandas and sklearn.
    Do you have any way to incorporate pandas, sklearn and keras?
    (I wish to use sklearn to revisit the classification problem and compare the accuracy with the deep learning method. But I also wish to put the works together in the same interface.)

    Thanks!

    • Jason Brownlee January 19, 2017 at 7:24 am #

      Sorry, I do not use notebooks myself. I cannot offer you good advice.

      • Hsiang January 19, 2017 at 12:53 pm #

        Thanks, Jason!
        Actually the problem is not with notebooks. Even when I used terminal mode, i.e. doing “source activate tensorflow” only, it failed to import sklearn. Does that mean the tensorflow library is not compatible with sklearn? Thanks again!

        • Jason Brownlee January 20, 2017 at 10:17 am #

          Sorry Hsiang, I don’t have experience using sklearn and tensorflow with virtual environments.

          • Hsiang January 21, 2017 at 12:46 am #

            Thank you!

          • Jason Brownlee January 21, 2017 at 10:34 am #

            You’re welcome Hsiang.

  53. keshav bansal January 24, 2017 at 12:45 am #

    hello sir,
    A very informative post indeed. I know my question is a very trivial one, but can you please show me how to predict on an explicitly given data tuple, say v=[6,148,72,35,0,33.6,0.627,50]?
    thanks for the tutorial anyway

    • Jason Brownlee January 24, 2017 at 11:04 am #

      Hi keshav,

      You can make predictions by calling model.predict()
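      For a single hand-written row, the sketch is to wrap it in a 2D NumPy array (Keras predicts on batches, so for this dataset the shape must be (1, 8)) and pass that to `model.predict()`. The `model` referenced in the comments is the trained model from the tutorial:

```python
import numpy as np

# One input row: 8 features, in the same order as the training data.
v = [6, 148, 72, 35, 0, 33.6, 0.627, 50]
x = np.array([v])  # shape (1, 8): a batch containing a single sample

# With the trained model from the tutorial:
# prob = model.predict(x)                 # a (1, 1) array of probabilities
# label = int(round(float(prob[0][0])))   # 0 or 1

assert x.shape == (1, 8)
```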

  54. CATRINA WEBB January 25, 2017 at 9:06 am #

    When I rerun the file (without predictions) does it reset the model and weights?

  55. Ericson January 30, 2017 at 8:04 pm #

    excuse me sir, i want to ask you a question about this line: dataset = numpy.loadtxt("pima-indians-diabetes.csv", delimiter=','). i used a mac and downloaded the dataset, then converted the text file into a CSV file. Running the program,

    i got: {Python 2.7.13 (v2.7.13:a06454b1afa1, Dec 17 2016, 12:39:47)
    [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
    Type “copyright”, “credits” or “license()” for more information.
    >>>
    ============ RESTART: /Users/luowenbin/Documents/database_test.py ============
    Using TensorFlow backend.

    Traceback (most recent call last):
    File “/Users/luowenbin/Documents/database_test.py”, line 9, in
    dataset = numpy.loadtxt(“pima-indians-diabetes.csv”,delimiter=’,’)
    File “/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/lib/npyio.py”, line 985, in loadtxt
    items = [conv(val) for (conv, val) in zip(converters, vals)]
    File “/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/lib/npyio.py”, line 687, in floatconv
    return float(x)
    ValueError: could not convert string to float: book
    >>> }
    How can i solve this problem? give me a hand thank you!

    • Jason Brownlee February 1, 2017 at 10:22 am #

      Hi Ericson,

      Confirm that the contents of “pima-indians-diabetes.csv” meet your expectation of a list of CSV lines.
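      One way to run that check is to try parsing a few lines yourself; every row of this dataset should split on commas into nine numeric fields, and anything else (a header row, stray text such as the "book" value in the traceback above) will break numpy.loadtxt in exactly this way. A self-contained sketch (it writes a tiny two-row sample so it runs anywhere; point `path` at the real file instead):

```python
import os
import tempfile

# Two sample rows in the expected format: 8 features plus the class label.
sample = "6,148,72,35,0,33.6,0.627,50,1\n1,85,66,29,0,26.6,0.351,31,0\n"
path = os.path.join(tempfile.mkdtemp(), "pima-sample.csv")
with open(path, "w") as f:
    f.write(sample)

# Each line must split on ',' into 9 fields that all parse as floats.
with open(path) as f:
    for line in f:
        fields = line.strip().split(",")
        assert len(fields) == 9
        values = [float(v) for v in fields]

print("file parses cleanly")
```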

  56. Sukhpal February 7, 2017 at 9:00 pm #

    excuse me sir, when i run this code on my dataset, I encounter this problem… please help me find a solution:
    runfile(‘C:/Users/sukhpal/.spyder/temp.py’, wdir=’C:/Users/sukhpal/.spyder’)
    Using TensorFlow backend.
    Traceback (most recent call last):

    File “”, line 1, in
    runfile(‘C:/Users/sukhpal/.spyder/temp.py’, wdir=’C:/Users/sukhpal/.spyder’)

    File “C:\Users\sukhpal\Anaconda2\lib\site-packages\spyder\utils\site\sitecustomize.py”, line 866, in runfile
    execfile(filename, namespace)

    File “C:\Users\sukhpal\Anaconda2\lib\site-packages\spyder\utils\site\sitecustomize.py”, line 87, in execfile
    exec(compile(scripttext, filename, ‘exec’), glob, loc)

    File “C:/Users/sukhpal/.spyder/temp.py”, line 1, in
    from keras.models import Sequential

    File “C:\Users\sukhpal\Anaconda2\lib\site-packages\keras\__init__.py”, line 2, in
    from . import backend

    File “C:\Users\sukhpal\Anaconda2\lib\site-packages\keras\backend\__init__.py”, line 67, in
    from .tensorflow_backend import *

    File “C:\Users\sukhpal\Anaconda2\lib\site-packages\keras\backend\tensorflow_backend.py”, line 1, in
    import tensorflow as tf

    ImportError: No module named tensorflow

    • Jason Brownlee February 8, 2017 at 9:34 am #

      This is a change with the most recent version of tensorflow, I will investigate and change the example.

      For now, consider installing and using an older version of tensorflow.

  57. Will February 14, 2017 at 5:33 am #

    Great tutorial! Amazing amount of work you’ve put in, and great marketing skills (I also have an email list, ebooks and sequence, etc). I ran this in a Jupyter notebook… I noticed the 144th epoch (acc .7982) had higher accuracy than epoch 150. Why is that?

    P.S. I did this for the print: print(numpy.round(predictions))
    It seems to avoid a list of arrays which, when printed, includes the dtype (messy).

  58. Sukhpal February 14, 2017 at 3:50 pm #

    Please help me to find out this error
    runfile(‘C:/Users/sukhpal/.spyder/temp.py’, wdir=’C:/Users/sukhpal/.spyder’)ERROR: execution aborted

    • Jason Brownlee February 15, 2017 at 11:32 am #

      I’m not sure Sukhpal.

      Consider getting code working from the command line, I don’t use IDEs myself.

  59. Kamal February 14, 2017 at 5:15 pm #

    please help me to find this error:
    Epoch 194/195
    195/195 [==============================] – 0s – loss: 0.2692 – acc: 0.8667
    Epoch 195/195
    195/195 [==============================] – 0s – loss: 0.2586 – acc: 0.8667
    195/195 [==============================] – 0s
    Traceback (most recent call last):

  60. Kamal February 15, 2017 at 3:24 pm #

    sir, when i run the code on my dataset,
    it does not show the overall accuracy, although it shows the accuracy and loss for all the iterations

    • Jason Brownlee February 16, 2017 at 11:06 am #

      I’m not sure I understand your question Kamal, could you please restate it?

  61. Val February 15, 2017 at 9:00 pm #

    Hi Jason, I’m just starting deep learning in Python using Keras and Theano. I have followed the installation instructions without a hitch, and tested some examples, but when I run this one line by line I get a lot of exceptions and errors once I run “model.fit(X, Y, nb_epochs=150, batch_size=10)”

  62. CrisH February 17, 2017 at 8:12 pm #

    Hi, how do I know what number to use for random.seed()? I mean, you use 7; is there any reason for that? Also, is it enough to use it only once, at the beginning of the code?

  63. kk February 18, 2017 at 1:53 am #

    I am new to deep learning and found this great tutorial. Keep it up; I look forward to more!!

  64. Iqra Ameer February 21, 2017 at 5:20 am #

    Hi, I have a problem executing the above example as is. It seems that it’s not running properly and stops at “Using TensorFlow backend.”

    Epoch 147/150
    768/768 [==============================] – 0s – loss: 0.4709 – acc: 0.7878
    Epoch 148/150
    768/768 [==============================] – 0s – loss: 0.4690 – acc: 0.7812
    Epoch 149/150
    768/768 [==============================] – 0s – loss: 0.4711 – acc: 0.7721
    Epoch 150/150
    768/768 [==============================] – 0s – loss: 0.4731 – acc: 0.7747
    32/768 [>………………………..] – ETA: 0sacc: 76.43%

    I am new to this field; could you please guide me about this error?
    I also executed it on another dataset, and it stops with the same behavior.

    • Jason Brownlee February 21, 2017 at 9:39 am #

      What is the error exactly? The example hangs?

      Maybe try the Theano backend and see if that makes a difference. Also make sure all of your libraries are up to date.

  65. Iqra Ameer February 22, 2017 at 5:47 am #

    Dear Jason,
    Thank you so much for your valuable suggestions. I tried the Theano backend and also updated all my libraries, but again it hung at:

    768/768 [==============================] – 0s – loss: 0.4656 – acc: 0.7799
    Epoch 149/150
    768/768 [==============================] – 0s – loss: 0.4589 – acc: 0.7826
    Epoch 150/150
    768/768 [==============================] – 0s – loss: 0.4611 – acc: 0.7773
    32/768 [>………………………..] – ETA: 0sacc: 78.91%

    • Jason Brownlee February 22, 2017 at 10:05 am #

      I’m sorry to hear that, I have not seen this issue before.

      Perhaps a RAM issue or a CPU overheating issue? Are you able to try different hardware?

  66. Bhanu February 23, 2017 at 1:51 pm #

    Hello sir,
    i want to ask whether we can turn this code into deep learning by increasing the number of layers..

    • Jason Brownlee February 24, 2017 at 10:12 am #

      Sure you can increase the number of layers, try it and see.

Leave a Reply