Display Deep Learning Model Training History in Keras

You can learn a lot about neural networks and deep learning models by observing their performance over time during training.

Keras is a powerful library in Python that provides a clean interface for creating deep learning models and wraps the more technical TensorFlow and Theano backends.

In this post you will discover how you can review and visualize the performance of deep learning models over time during training in Python with Keras.

Let’s get started.

  • Update Mar/2017: Updated example for Keras 2.0.2, TensorFlow 1.0.1 and Theano 0.9.0.
  • Update March/2018: Added alternate link to download the dataset as the original appears to have been taken down.
Display Deep Learning Model Training History in Keras. Photo by Gordon Robertson, some rights reserved.

Access Model Training History in Keras

Keras provides the capability to register callbacks when training a deep learning model.

One of the default callbacks that is registered when training all deep learning models is the History callback. It records training metrics for each epoch. This includes the loss and the accuracy (for classification problems) as well as the loss and accuracy for the validation dataset, if one is set.

The history object is returned from calls to the fit() function used to train the model. Metrics are stored in a dictionary in the history member of the object returned.

For example, after a model is trained, you can list the metrics collected in the history object with a snippet like the following:
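print(history.history.keys())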

For example, for a model trained on a classification problem with a validation dataset, this might produce the following listing:
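['acc', 'loss', 'val_acc', 'val_loss']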

We can use the data collected in the history object to create plots.

The plots can provide an indication of useful things about the training of the model, such as:

  • Its speed of convergence over epochs (slope).
  • Whether the model may have already converged (plateau of the line).
  • Whether the model may be over-learning the training data (inflection for validation line).

And more.


Visualize Model Training History in Keras

We can create plots from the collected history data.

In the example below we create a small network to model the Pima Indians onset of diabetes binary classification problem. This is a small dataset available from the UCI Machine Learning Repository. You can download the dataset and save it as pima-indians-diabetes.csv in your current working directory (update: download from here).

The example collects the history returned from training the model and creates two charts:

  1. A plot of accuracy on the training and validation datasets over training epochs.
  2. A plot of loss on the training and validation datasets over training epochs.
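A minimal version of the complete example is sketched below. It assumes the dataset file is saved as pima-indians-diabetes.csv in the current working directory; the layer sizes, epoch count and batch size are illustrative, and the 'acc'/'val_acc' key names are those used by Keras 2 at the time of writing.

# Visualize training history for a small network on the Pima Indians dataset
import matplotlib.pyplot as plt
import numpy
from keras.models import Sequential
from keras.layers import Dense

# fix the random seed for reproducibility
numpy.random.seed(7)
# load the dataset (assumes pima-indians-diabetes.csv in the working directory)
dataset = numpy.loadtxt("pima-indians-diabetes.csv", delimiter=",")
X = dataset[:, 0:8]
Y = dataset[:, 8]
# define and compile a small model
model = Sequential()
model.add(Dense(12, input_dim=8, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# fit the model, holding back 33% of the data for validation
history = model.fit(X, Y, validation_split=0.33, epochs=150, batch_size=10, verbose=0)
# plot accuracy on the training and validation datasets
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# plot loss on the training and validation datasets
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()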

The plots are provided below. The history for the validation dataset is labeled test by convention as it is indeed a test dataset for the model.

From the plot of accuracy we can see that the model could probably be trained a little more as the trend for accuracy on both datasets is still rising for the last few epochs. We can also see that the model has not yet over-learned the training dataset, showing comparable skill on both datasets.

Plot of Model Accuracy on Train and Validation Datasets

From the plot of loss, we can see that the model has comparable performance on both train and validation datasets (labeled test). If these parallel plots start to depart consistently, it might be a sign to stop training at an earlier epoch.

Plot of Model Loss on Training and Validation Datasets

Summary

In this post you discovered the importance of collecting and reviewing metrics during the training of your deep learning models.

You learned about the History callback in Keras and how it is always returned from calls to the fit() function to train your models. You learned how to create plots from the history data collected during training.

Do you have any questions about model training history or about this post? Ask your question in the comments and I will do my best to answer.


95 Responses to Display Deep Learning Model Training History in Keras

  1. Marcel August 3, 2016 at 12:12 am #

    Thanks Jason!

    • Jason Brownlee August 3, 2016 at 8:09 am #

      You’re welcome Marcel.

      • Dong February 2, 2018 at 2:29 pm #

        Hi, do you know how to use Callback to plot a picture in keras?

  2. Randy September 12, 2016 at 4:27 am #

    Hi, great post. Is there also a possibility to plot accuracy and loss for every sample in each epoch?

    For instance: 1 epoch, 60,000 MNIST images => plot 60,000 accuracy/loss values in that epoch?

    • Jason Brownlee September 12, 2016 at 8:34 am #

      The plots show this in aggregate; you can calculate the loss and accuracy on each individual sample if you wish, but that would be a lot of data. I hope I have answered your question.

  3. Alvin September 30, 2016 at 3:38 pm #

    Hi Jason,

    Thanks for your great post!
    For the accuracy graph, what’s the indicator that the model starts to get over-learned? What would the graph look like when it happens?

    Thanks in advance

    • Jason Brownlee October 1, 2016 at 8:01 am #

      Hi Alvin, great question.

      If the model is overfitting the graph will show great performance on the training data and poor performance on the test data.

  4. Suny October 6, 2016 at 3:43 am #

    Jason,
    Great tutorial, and very articulate about how each of the networks in Keras works.
    I had a quick question:
    does Keras support this kind of dataset for implementing an autoencoder rather than an FFN?
    Thanks..

    regards
    Sunny

    • Jason Brownlee October 6, 2016 at 9:39 am #

      Hi Suny,

      Keras does support autoencoders, but I don’t use them generally as they have been surpassed by big MLPs and specialized methods like LSTMs and CNNs that can learn features while training.

  5. Yuanliang Meng November 4, 2016 at 2:26 pm #

    Hello Jason (and all).
    When dropout is applied, I wonder how the loss and acc values are computed. After each epoch, does the program still drop the neurons/weights to compute the loss and accuracy, or use the whole network?

    • Jason Brownlee November 5, 2016 at 7:28 am #

      Great question,

      Dropout is only applied during training (backward pass), not on making predictions (forward pass).

  6. Bo November 10, 2016 at 10:16 am #

    Hi Jason,

    Thanks for all of the great tutorials!
    I’d like to be able to plot the history of a stateful LSTM. I’ve tried something like the below, but in this case it’s failing b/c I’m asking python dicts to do something they don’t like (I’m new to python). I’ve tried some other approaches which have all failed for python-related reasons.

    Reprinting your .fit() code from your stateful tutorial (and adding a failed attempt at capturing history):

    my_history = {}
    for i in range(100):
        history = model.fit(trainX, trainY, nb_epoch=1, batch_size=batch_size, verbose=2, shuffle=False)
        my_history.update(history)
        model.reset_states()

    What am I doing wrong here? Thanks!

    • Jason Brownlee November 11, 2016 at 9:58 am #

      Very interesting idea Bo.

      Consider using a list and appending the history object to it. Also consider creating an all-new model each iteration to keep it an apples-to-apples comparison.

      Let me know how you go and what you discover!
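      A hypothetical sketch of that suggestion, reusing the variable names from Bo’s snippet (trainX, trainY and batch_size are assumed to be defined):

      losses = []
      for i in range(100):
          history = model.fit(trainX, trainY, nb_epoch=1, batch_size=batch_size, verbose=2, shuffle=False)
          losses.append(history.history['loss'][0])  # one epoch per fit() call, so one loss value
          model.reset_states()
      # losses now holds one training loss per manual epoch, ready to plot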

  7. nagendra somanath December 5, 2016 at 10:52 am #

    How can one display the neural net used in Keras? Is there a simple way to plot the network?

  8. Aviel December 6, 2016 at 6:13 pm #

    Hi Jason,

    I would like to visualize loss and accuracy graphs for each epoch during training.
    I was thinking of doing so by writing a callback but wasn’t sure exactly how and if this can be done.
    What do you suggest?
    Thanks

    • Jason Brownlee December 7, 2016 at 8:55 am #

      Hi Aviel, Keras is not designed to do this natively.

      Maybe use a callback to post to a file/db and use a separate process to plot?

      I would suggest getting something ghetto like that going and see how it looks.
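      One built-in option along these lines is the CSVLogger callback; a minimal sketch (argument names from the Keras API of the time):

      from keras.callbacks import CSVLogger

      # log each epoch's metrics to a file that a separate process can read and plot
      csv_logger = CSVLogger('training.log', append=True)
      model.fit(X, Y, epochs=150, batch_size=10, callbacks=[csv_logger], verbose=0)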

  9. Charlie Parker February 25, 2017 at 5:42 am #

    Hi,

    First, thanks so much for the tutorial!

    I have a quick question. I want to plot the graphs but my computing resources are **not** local. Is there a way to have a callback or something that stored each error value in a CSV file and later plot it? Or is there a way idk to save history object, maybe pickle it and then send to my local computer with some standard tool, like rsync or dropbox?

    What do you recommend for these remote plotting experiments? I just need to get the data somewhere I can plot the error/loss vs epochs.

    (also, can I plot vs iterations instead of epochs? just curious)

    • Jason Brownlee February 25, 2017 at 6:03 am #

      Hi Charlie,

      You can store the history in an array and then create and save the plot as a file, without a display.
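      A minimal sketch for a headless machine (the 'Agg' backend renders to files without a display):

      import matplotlib
      matplotlib.use('Agg')  # must be selected before importing pyplot
      import matplotlib.pyplot as plt

      plt.plot(history.history['loss'])
      plt.savefig('loss.png')  # copy the image back to a local machine with e.g. rsync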

  10. pattijane April 1, 2017 at 10:39 pm #

    Hello,

    I have a very simple question and I hope you don’t mind me asking. I want to save the loss figure with plt.savefig("figure"), but I get a "module is not callable" error, and if I comment out plt.savefig("figure") everything works just fine. Do you happen to have any idea why?

    Thanks a lot!

    • Jason Brownlee April 2, 2017 at 6:28 am #

      Ensure you have matplotlib installed and configured correctly.

      • pattijane May 12, 2017 at 5:57 am #

        Hello,

        I solved the error, thanks! I have another issue, however: I’m doing a grid search over parameters (epochs and batch size) and for each combination I plot the loss function. However, it just keeps drawing each result on top of the others in the same figure! Any idea why that might happen?

        • Jason Brownlee May 12, 2017 at 7:50 am #

          Sorry, I don’t have experience capturing history within a grid search.

          I would recommend writing your own for-loops/grid search so you have more flexibility.
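          As a hypothetical sketch of such a loop (create_model is an assumed helper that returns a fresh compiled model; starting a new figure per combination keeps the plots from drawing over each other):

          import matplotlib.pyplot as plt

          for epochs in [50, 100]:
              for batch_size in [10, 20]:
                  model = create_model()  # assumed helper building a fresh compiled model
                  history = model.fit(X, Y, validation_split=0.33, epochs=epochs, batch_size=batch_size, verbose=0)
                  plt.figure()  # new figure for each combination
                  plt.plot(history.history['loss'])
                  plt.title('epochs=%d, batch_size=%d' % (epochs, batch_size))
                  plt.savefig('loss_%d_%d.png' % (epochs, batch_size))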

  11. Dave April 23, 2017 at 8:39 am #

    Hi,

    Great work.

    Quick question. I am using TensorFlow without Keras at the moment, and am plotting the loss and accuracy of a CNN. I am using cross entropy with the Adam optimizer, and using the cross entropy value as the loss. Is this right?
    Also, if the loss is in the 200-300 range, should I be plotting the log of this value? In all the graphs I see, the loss is between 0-1.

    Thanks
    Dave

  12. Caleb Everett April 27, 2017 at 1:50 pm #

    Hello, thank you for all the great information. Can you provide any suggestions on how to access the training history if the Keras model is part of a pipeline?

    Thank you,

    Caleb

    • Jason Brownlee April 28, 2017 at 7:33 am #

      Sorry, I have not done this myself. You may need to extend the sklearn wrapper and capture this information manually.

  13. Anastasios Selalmazidis May 2, 2017 at 1:01 am #

    Hi Jason,

    I am running this example from your book, but I am using cross-validation, in particular StratifiedKFold. So when I fit the model I do not pass a validation_split or validation_data, hence my model history has only the keys ['acc', 'loss']. I am using model.evaluate(). How can I visualize the test?

    • Jason Brownlee May 2, 2017 at 6:00 am #

      You could hold back a validation test or you could collect history of each model evaluated on each cross validation fold.
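      A hypothetical sketch of the second option (create_model is an assumed helper that returns a fresh compiled model):

      from sklearn.model_selection import StratifiedKFold

      histories = []
      kfold = StratifiedKFold(n_splits=10, shuffle=True, random_state=7)
      for train_idx, test_idx in kfold.split(X, Y):
          model = create_model()  # fresh model per fold
          history = model.fit(X[train_idx], Y[train_idx], validation_data=(X[test_idx], Y[test_idx]), epochs=150, batch_size=10, verbose=0)
          histories.append(history.history)  # one metrics dict per fold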

  14. Nir June 4, 2017 at 8:39 pm #

    Hi Jason!
    i have two problems:
    1) when setting verbose to 2, I expect printing during each epoch including a progress bar, but I see only the train and validation loss (without the accuracy or the progress bar)

    2) when the run reaches the plotting step, I receive an error:
    plt.plot(history.history['acc'])
    KeyError: 'acc'
    Exception ignored in: <bound method BaseSession.__del__ of >
    Traceback (most recent call last):
    File “C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\client\session.py”, line 582, in __del__
    UnboundLocalError: local variable ‘status’ referenced before assignment

    thanks !

    • Jason Brownlee June 5, 2017 at 7:41 am #

      That is correct, if you want a progress bar set verbose=1.

      You must add the accuracy metric when you compile the model. The error suggests this was not done. Learn more about metrics here:
      https://keras.io/metrics/
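      For example, a one-line sketch of requesting the metric:

      model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])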

      • Nir June 6, 2017 at 2:37 am #

        Hi Jason, thanks a lot, I still have a few more questions:
        a. How can I plot the ROC curve using the history object?
        b. How can I save the best model after each epoch? (overwrite my model with a new one only if the accuracy over the validation set has improved)

        Thanks,
        Nir

        • Jason Brownlee June 6, 2017 at 10:07 am #

          I do not have an example of plotting the ROC curve with Keras results.

          This post will help you save models during training:
          http://machinelearningmastery.com/check-point-deep-learning-models-keras/

        • Troy March 23, 2018 at 1:28 pm #

          I found a solution to generate ROC/AUC here:

          • Troy March 23, 2018 at 1:34 pm #

            Forgot to say many thanks to you Jason, you never cease to amaze, always on the cutting edge but remaining pragmatic.

          • Jason Brownlee March 24, 2018 at 6:18 am #

            I’m glad to hear it.

  15. Mirza Mohtashim Alam June 8, 2017 at 7:40 am #

    Can you please tell how I can keep the history from the classifier.fit_generator() function?

  16. Arbish June 13, 2017 at 6:37 am #

    Hi jason!

    I want to access the model training history in tflearn to plot graphs.
    How can we do this in tflearn?

  17. Kunal Sarkar July 4, 2017 at 8:14 pm #

    Hi Jason, I am using a dataset of more than 100 GB to build a model, loading the data from an HDF5 database. For this configuration I iterate the training process manually. Because of the manual iteration, the history is not appended with the model information; instead a new history is created after every epoch. How can I update the history so that it appends as in the normal process?
    Can I manually append the model information after every epoch? The history information is needed for model optimization.

    • Jason Brownlee July 6, 2017 at 10:15 am #

      I would recommend saving performance as you go to a file. Just append each epoch’s scores.
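      A hypothetical sketch of that manual loop (num_epochs and the training data are assumed):

      with open('history.csv', 'a') as f:
          for epoch in range(num_epochs):
              history = model.fit(X, Y, epochs=1, batch_size=10, verbose=0)  # one manual pass
              f.write('%d,%f\n' % (epoch, history.history['loss'][0]))  # append this epoch's loss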

  18. Linlin July 7, 2017 at 8:08 am #

    Hi Jason, I wrote an LSTM model to train on my brain MRI slices. In my dataset, each patient has 50 slices, and the n patients are divided into training and validation sets. My LSTM model is designed as below:
    model = Sequential()
    model.add(LSTM(128, input_shape=(max_timesteps, num_clusters), activation='tanh', recurrent_activation='elu', return_sequences=False, stateful=False, name='lstm_layer'))
    model.add(Dropout(0.5, name='dropout_layer'))
    model.add(Dense(out_category, activation='softmax', name='dense_layer'))
    optimizer = optimizers.RMSprop(lr=lrate)
    model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=['accuracy'])
    model.fit(X_train, y_train, validation_data=(X_vald, y_vald), epochs=epoch_num, batch_size=batch_size, shuffle=True)

    First, I use the GlobalAveragePooling layer of a fine-tuned GoogLeNet to extract the features of each slice.
    Second, the n1*50*2048 features from the training set and n2*50*2048 features from the validation set are used to train my LSTM model.
    However, the training process is very weird. The accuracy of training and validation decreases suddenly at epoch 46. Could you give some advice about these results? The output from epochs 40 to 50 is attached:
    Epoch 40/70
    407/407 [==============================] – 25s – loss: 8.6558e-05 – acc: 1.0000 – val_loss: 1.3870 – val_acc: 0.8512
    Epoch 41/70
    407/407 [==============================] – 25s – loss: 1.7462e-06 – acc: 1.0000 – val_loss: 1.2368 – val_acc: 0.8595
    Epoch 42/70
    407/407 [==============================] – 25s – loss: 4.5732e-06 – acc: 1.0000 – val_loss: 1.1689 – val_acc: 0.8760
    Epoch 43/70
    407/407 [==============================] – 25s – loss: 6.2214e-07 – acc: 1.0000 – val_loss: 1.2545 – val_acc: 0.8760
    Epoch 44/70
    407/407 [==============================] – 25s – loss: 2.5658e-07 – acc: 1.0000 – val_loss: 1.2440 – val_acc: 0.8595
    Epoch 45/70
    407/407 [==============================] – 25s – loss: 6.2594e-07 – acc: 1.0000 – val_loss: 1.2281 – val_acc: 0.8678
    Epoch 46/70
    407/407 [==============================] – 25s – loss: 3.3054e-07 – acc: 0.5676 – val_loss: 1.1921e-07 – val_acc: 0.5372
    Epoch 47/70
    407/407 [==============================] – 25s – loss: 1.1921e-07 – acc: 0.5061 – val_loss: 1.1921e-07 – val_acc: 0.5372
    Epoch 48/70
    407/407 [==============================] – 25s – loss: 1.1921e-07 – acc: 0.5061 – val_loss: 1.1921e-07 – val_acc: 0.5372
    Epoch 49/70
    407/407 [==============================] – 25s – loss: 1.1921e-07 – acc: 0.5061 – val_loss: 1.1921e-07 – val_acc: 0.5372
    Epoch 50/70
    407/407 [==============================] – 25s – loss: 1.1921e-07 – acc: 0.5061 – val_loss: 1.1921e-07 – val_acc: 0.5372

  19. Jared August 3, 2017 at 12:05 am #

    Hi Professor,

    What’s your experience with Tensorboard callbacks to plot accuracy?

    I’m attempting to use it right now; however, for some reason it is decreasing my accuracy when I implement it. When I comment the callback out, the accuracy increases by 30%. What’s going on here? Should I just stick to your method instead of using TensorBoard?

  20. Navid August 17, 2017 at 8:57 pm #

    Hi,
    Thank you,

    How can I have these plots during training, so I can see the network’s progress live?

    • Jason Brownlee August 18, 2017 at 6:17 am #

      Perhaps you could create a custom callback that dynamically updates a graph.
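      A hypothetical sketch of such a callback (the plotting details are illustrative):

      import matplotlib.pyplot as plt
      from keras.callbacks import Callback

      class LivePlot(Callback):
          def on_train_begin(self, logs=None):
              self.losses = []

          def on_epoch_end(self, epoch, logs=None):
              self.losses.append(logs.get('loss'))
              plt.clf()  # redraw the figure with the updated history
              plt.plot(self.losses)
              plt.pause(0.01)  # give the GUI a moment to refresh

      # usage: model.fit(X, Y, epochs=150, callbacks=[LivePlot()])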

  21. Dinh August 18, 2017 at 8:21 pm #

    Thanks for your nice tutorial. I have two questions I need you to clarify:
    1. How can I avoid the history object returned by the compile function being printed?
    2. How can I change to TensorFlow instead of using Theano?

    Thank you so much.

    • Jason Brownlee August 19, 2017 at 6:18 am #

      Sorry, I don’t understand your first question, can you restate it please?

      You can change your backend by editing the Keras configuration file in ~/.keras/keras.json

  22. Ahmed Said Ahmed September 18, 2017 at 4:31 am #

    Hello Dr. Jason, that helped me a lot to visualize my model, but can you tell me how to choose the validation split value and the batch size?

    • Jason Brownlee September 18, 2017 at 5:48 am #

      Use trial and error on your specific dataset.

      • Ahmed Saeed September 18, 2017 at 10:56 pm #

        Excuse me, what do you mean by trial and error? I am a newbie in ML and DL.

        • Jason Brownlee September 19, 2017 at 7:45 am #

          Sorry, I mean use experiments to see what works best on your problem. A primitive type of search process.

  23. John William September 18, 2017 at 10:59 pm #

    What does it mean when val_acc is higher than the training acc? Does it mean overfitting or what?

    • Jason Brownlee September 19, 2017 at 7:45 am #

      Off the cuff, it is unusual and it may be a sign of underfitting (e.g. an unstable model).

  24. Raktim September 27, 2017 at 3:21 pm #

    Why have you written “Test” in the graph although you use this as validation?

  25. iman October 17, 2017 at 11:40 am #

    Hi, this is perfect, thanks.
    But if I want to save it to a *.png file, how can I do that?
    I used plt.savefig('iman.png')
    but it doesn't work.
    Can you help me, Jason?

    • Jason Brownlee October 17, 2017 at 4:06 pm #

      Yes, that is what I would have recommended.

      Why doesn’t it work?

  26. Astha November 26, 2017 at 6:56 am #

    How can I do the same for tflearn? I’ve looked everywhere and can’t find something similar to this. model.fit in tflearn doesn’t return anything, so I get this error:

    my_history.update(history)
    TypeError: 'NoneType' object is not iterable

    It’d be a great help if you can suggest a solution. Thanks!

    • Jason Brownlee November 26, 2017 at 7:35 am #

      Sorry, I do not use tflearn at this stage. I cannot give you good advice.

  27. Abhirami November 29, 2017 at 7:59 pm #

    Hi Jason, Great article!
    I have a question. I am training a CNN over 5 epochs and getting a test accuracy of 0.9995, and plotting the training and validation accuracy graph as you’ve shown. The training accuracy seems to increase from 0 to 0.9995 over the 5 epochs, but the validation accuracy is almost a constant line at 1.0 (>0.9996). Is this normal? I couldn’t figure out what is happening here.

    (I’m using 100,000 images, of which 20% is used for testing. Of the 80% for training, 20% of that is split for validation and the rest used for training)
    Thanks in advance!

    • Jason Brownlee November 30, 2017 at 8:09 am #

      Interesting, perhaps the sample for validation is too small? Perhaps your model is very effective?

      Perhaps repeat the experiment a few times to see if the outcome holds?

  28. Abhirami Harilal November 30, 2017 at 9:54 pm #

    Yes, the outcome holds. It could be that the validation sample is quite small. I’m training on 64,000 images and validating on 16,000. So it could be that, or my model is very effective?
    Also, I noticed that the training accuracy goes above the validation accuracy plot when I removed one dropout implementation (out of 2) from my model.

    • Jason Brownlee December 1, 2017 at 7:31 am #

      Perhaps it would be good to explore other configurations of the model and the test harness?

  29. George November 30, 2017 at 10:06 pm #

    Hi Jason and thanks for your nice posts.

    I want to ask you a question on how to interpret these results.

    https://ibb.co/hYyYvG

    https://ibb.co/dR3DUb

    I am using a network with keras.

    I have 2 layers, each of them with 128 units, and a final layer with 2 units.
    I am using L2 regularization and the Adam optimizer.
    For fitting, I am using 100 epochs, batch_size 32 and a validation split of 0.3.
    My data consists of 15000 rows with 5 features plus the output.

    I am not sure if I am overfitting or not.
    And I can’t find out why I have so many fluctuations in my validation data. I tried a lot of different approaches but the fluctuation never goes away.

    Generally, I know that we must not have big gaps/differences between train and validation data. I am not sure about the accuracy though. Should we always obtain a little better accuracy on the validation data? Otherwise, is it a sign of overfitting?

    Could you please elaborate on that?

    Thanks!

  30. Ahmed January 22, 2018 at 2:51 pm #

    Hi Jason,
    thank you.
    What about the loss and accuracy of object detection problems, such as running ssd_keras for object detection? Is it possible to follow the same steps?

    • Jason Brownlee January 23, 2018 at 7:49 am #

      Sorry, I don’t follow, can you rephrase or perhaps provide more context?

  31. Ashima January 30, 2018 at 8:12 pm #

    Jason,
    I wish to have the average of the errors generated during my training as well, so that once I start running the model on my validation set I can compare the error generated at each step with this average. How is it possible to get this average RMSE value for the entire training data?

    • Jason Brownlee January 31, 2018 at 9:41 am #

      Not sure I follow, sorry. Perhaps you can give more context?

  32. Maïsseka February 26, 2018 at 10:24 pm #

    Hi.

    Thanks. I would like to know: why is the training loss not as good as the validation loss at the beginning? Is it because of the dropout used?

  33. stan March 3, 2018 at 2:59 pm #

    If it is a plot of model loss on the training and validation datasets, why are you adding “train” and “test” as the legends to the plot? Isn’t that misleading?

    • Jason Brownlee March 4, 2018 at 6:00 am #

      Here I refer to “test” as a generic out of sample dataset.

      Does that help?

  34. Fabrício Melo March 4, 2018 at 10:29 pm #

    Hi, Jason!
    How can I plot accuracy versus batch size using the training history in Keras?

    • Fabrício Melo March 4, 2018 at 10:29 pm #

        Using a callback?

      • Jason Brownlee March 5, 2018 at 6:24 am #

        I would not recommend using a callback to create the plot.

    • Jason Brownlee March 5, 2018 at 6:23 am #

      Collect an array of mean accuracy scores and an array of batch sizes and use matplotlib to create a plot.
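      A hypothetical sketch of that approach (create_model is an assumed helper returning a fresh model compiled with metrics=['accuracy']):

      import matplotlib.pyplot as plt

      batch_sizes = [8, 16, 32, 64, 128]
      scores = []
      for bs in batch_sizes:
          model = create_model()  # fresh model per batch size
          model.fit(X, Y, epochs=50, batch_size=bs, verbose=0)
          _, acc = model.evaluate(X, Y, verbose=0)  # [loss, accuracy] when the metric is compiled in
          scores.append(acc)
      plt.plot(batch_sizes, scores)
      plt.xlabel('batch size')
      plt.ylabel('accuracy')
      plt.show()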

  35. kelvin March 7, 2018 at 8:40 am #

    Your example shows only training and validation loss and accuracy. May I ask how to plot the loss and accuracy of training, validation and testing?


    • Jason Brownlee March 7, 2018 at 3:02 pm #

      You can plot the loss over train and test sets for each training epoch (e.g. over time).

  36. alfred April 2, 2018 at 2:57 am #

    Hi Jason,

    How would that history be studied in a regression model? How could the loss on the training and validation sets be visualized? In my case, when I do:

    print(history.history.keys())

    all I get is two keys:

    dict_keys(['mean_absolute_error', 'loss'])

    So I am not able to plot the validation set loss. I've fitted and evaluated the model with:

    history = model.fit(X_train, Y_train, epochs=50, batch_size=30)

    loss_and_metrics = model.evaluate(X_test, Y_test, batch_size=12)

  37. MLT June 13, 2018 at 6:30 am #

    Hi Jason,
    It is a nice article introducing history in Keras. I have a question: will this history also work for multi-step time series prediction? For example, using the last two hours of data to predict the next two hours: f(x(n-1), x(n)) = x(n+1), x(n+2).

    y has two values, but history['val_loss'] has only one value. Is this history['val_loss'] the sum of the loss over the two predicted hours?

    I have checked the Keras website, but I did not find an answer. Thanks in advance.

    • Jason Brownlee June 13, 2018 at 3:03 pm #

      Good question.

      It might be the average or sum loss over the vector output? Just a guess.

  38. rajesh bulla June 18, 2018 at 8:03 pm #

    Can we plot the same for testing, that is, model.evaluate()?

  39. Mahbubur Rub Talha July 1, 2018 at 3:16 pm #

    Hi Jason

    Your tutorials are just awesome. Thanks for your effort.

    I’m trying to plot model loss and accuracy for my model. In the history variable, 'loss' and 'val_loss' exist. But when I try to access 'acc' or 'val_acc' it raises a KeyError. I printed all the keys. Please check the output below:

    val_loss
    val_dense_3_loss_1
    val_dense_3_loss_2
    ……
    val_dense_3_loss_14
    val_dense_3_loss_15
    val_dense_3_acc_1
    val_dense_3_acc_2
    …..
    val_dense_3_acc_14
    val_dense_3_acc_15
    loss
    dense_3_loss_1
    dense_3_loss_2
    ……
    dense_3_loss_14
    dense_3_loss_15
    dense_3_acc_1
    dense_3_acc_2
    ……
    dense_3_acc_14
    dense_3_acc_15

    What did I miss?

    • Jason Brownlee July 2, 2018 at 6:21 am #

      You must add metrics=['accuracy'] when you compile() your model.

      • Talha July 2, 2018 at 2:33 pm #

        Thanks for your reply.

        Yes, I have added this. Please check my implementation below

        model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])

        history = model.fit(inputs, outputs, validation_split=0.2, epochs=epochs, batch_size=batch_size)

        One thing: I'm getting dense_3_acc_n keys from history.history.keys(). If I take the average of dense_3_acc_1 to dense_3_acc_n, I will get an average accuracy. Will that be the actual accuracy?

        • Jason Brownlee July 2, 2018 at 3:00 pm #

          I recommend focusing on the 'acc' and 'val_acc' keys.

  40. vivek July 16, 2018 at 9:52 pm #

    Hi Jason, can you please tell how to plot these graphs by loading a saved model (HDF5 format, saved using the model.save('filename') command)? When I try to plot with a saved model it gives me the error ‘history is not defined’.

    • Jason Brownlee July 17, 2018 at 6:18 am #

      You can only get the graphs by calling fit() with data to train the model.
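      If the history needs to outlive a training session, one hedged option is to save the metrics dictionary alongside the model (a sketch using the standard library's pickle):

      import pickle

      model.save('model.h5')  # the architecture and weights
      with open('history.pkl', 'wb') as f:
          pickle.dump(history.history, f)  # the per-epoch metrics, for later plotting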

  41. Vivek July 19, 2018 at 7:22 pm #

    Hi, thanks for the reply. Can you please tell how I can plot test accuracy and loss along with training and validation?
