Display Deep Learning Model Training History in Keras

You can learn a lot about neural networks and deep learning models by observing their performance over time during training.

Keras is a powerful library in Python that provides a clean interface for creating deep learning models and wraps the more technical TensorFlow and Theano backends.

In this post you will discover how you can review and visualize the performance of deep learning models over time during training in Python with Keras.

Let’s get started.

  • Update Mar/2017: Updated example for Keras 2.0.2, TensorFlow 1.0.1 and Theano 0.9.0.
Photo by Gordon Robertson, some rights reserved.

Access Model Training History in Keras

Keras provides the capability to register callbacks when training a deep learning model.

One of the default callbacks that is registered when training all deep learning models is the History callback. It records training metrics for each epoch. This includes the loss and the accuracy (for classification problems) as well as the loss and accuracy for the validation dataset, if one is set.

The history object is returned from calls to the fit() function used to train the model. Metrics are stored in a dictionary in the history member of the object returned.

For example, you can list the metrics collected in a history object using the following snippet of code after a model is trained:
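A minimal sketch of such a snippet (the tiny model and synthetic data here are stand-ins so it runs end to end; modern tf.keras imports are shown, whereas the original post used standalone Keras):

```python
# Train a tiny model on synthetic data, then list the metric names the
# History callback collected (data shapes and layer sizes are illustrative).
import numpy as np
from tensorflow import keras

X = np.random.random((32, 8))
y = np.random.randint(0, 2, 32)

model = keras.Sequential([
    keras.Input(shape=(8,)),
    keras.layers.Dense(4, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])

# fit() returns the History object; validation_split adds val_-prefixed keys.
history = model.fit(X, y, validation_split=0.25, epochs=2, verbose=0)
print(history.history.keys())
```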

For example, for a model trained on a classification problem with a validation dataset, this might produce a listing of metric names such as [‘acc’, ‘loss’, ‘val_acc’, ‘val_loss’] (the exact names depend on the Keras version and the metrics configured).

We can use the data collected in the history object to create plots.

The plots can provide an indication of useful things about the training of the model, such as:

  • Its speed of convergence over epochs (slope).
  • Whether the model may have already converged (plateau of the line).
  • Whether the model may be over-learning the training data (inflection for validation line).

And more.



Visualize Model Training History in Keras

We can create plots from the collected history data.

In the example below we create a small network to model the Pima Indians onset of diabetes binary classification problem. This is a small dataset available from the UCI Machine Learning Repository. You can download the dataset and save it as pima-indians-diabetes.csv in your current working directory.

The example collects the history returned from training the model and creates two charts:

  1. A plot of accuracy on the training and validation datasets over training epochs.
  2. A plot of loss on the training and validation datasets over training epochs.
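A sketch of such a script (not the post's verbatim listing): it assumes pima-indians-diabetes.csv sits in the working directory and falls back to synthetic data of the same shape so it still runs without the file; it uses modern tf.keras imports (the original used standalone Keras, where the history keys were ‘acc’/‘val_acc’ rather than ‘accuracy’/‘val_accuracy’) and saves the figures to files instead of calling plt.show().

```python
# Sketch of the example described above; the synthetic-data fallback and
# savefig() calls are assumptions for headless runnability.
import os
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render without a display; use plt.show() interactively
import matplotlib.pyplot as plt
from tensorflow import keras

# Load the Pima Indians dataset if present; otherwise fall back to random
# data of the same shape so the script runs end to end.
if os.path.exists("pima-indians-diabetes.csv"):
    dataset = np.loadtxt("pima-indians-diabetes.csv", delimiter=",")
else:
    rng = np.random.default_rng(7)
    dataset = np.column_stack([rng.random((768, 8)), rng.integers(0, 2, 768)])
X, y = dataset[:, 0:8], dataset[:, 8]

# A small fully connected network for binary classification.
model = keras.Sequential([
    keras.Input(shape=(8,)),
    keras.layers.Dense(12, activation="relu"),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])

# fit() returns the History object; validation_split holds back 33% of the
# data as the validation dataset whose metrics get the val_ prefix.
history = model.fit(X, y, validation_split=0.33, epochs=150, batch_size=10, verbose=0)

# Chart 1: accuracy on the training and validation datasets over epochs.
plt.plot(history.history["accuracy"])
plt.plot(history.history["val_accuracy"])
plt.title("model accuracy")
plt.ylabel("accuracy")
plt.xlabel("epoch")
plt.legend(["train", "test"], loc="upper left")
plt.savefig("model_accuracy.png")
plt.clf()

# Chart 2: loss on the training and validation datasets over epochs.
plt.plot(history.history["loss"])
plt.plot(history.history["val_loss"])
plt.title("model loss")
plt.ylabel("loss")
plt.xlabel("epoch")
plt.legend(["train", "test"], loc="upper left")
plt.savefig("model_loss.png")
```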

The plots are provided below. The history for the validation dataset is labeled test by convention as it is indeed a test dataset for the model.

From the plot of accuracy we can see that the model could probably be trained a little more as the trend for accuracy on both datasets is still rising for the last few epochs. We can also see that the model has not yet over-learned the training dataset, showing comparable skill on both datasets.

Plot of Model Accuracy on Train and Validation Datasets

From the plot of loss, we can see that the model has comparable performance on both train and validation datasets (labeled test). If these parallel plots start to depart consistently, it might be a sign to stop training at an earlier epoch.

Plot of Model Loss on Training and Validation Datasets


Summary

In this post you discovered the importance of collecting and reviewing metrics during the training of your deep learning models.

You learned about the History callback in Keras and how it is always returned from calls to the fit() function to train your models. You learned how to create plots from the history data collected during training.

Do you have any questions about model training history or about this post? Ask your question in the comments and I will do my best to answer.


34 Responses to Display Deep Learning Model Training History in Keras

  1. Marcel August 3, 2016 at 12:12 am #

    Thanks Jason!

  2. Randy September 12, 2016 at 4:27 am #

    Hi, great post. Is there also a possibility to plot accuracy and loss for every sample in each epoch?

    For instance: 1 epoch, 60,000 MNIST images => plot 60,000 accuracies/losses in that epoch?

    • Jason Brownlee September 12, 2016 at 8:34 am #

      The plots can do this in aggregate, you can calculate the loss and accuracy on each individual sample if you wish, but that would be a lot of data. I hope I have answered your question.

  3. Alvin September 30, 2016 at 3:38 pm #

    Hi Jason,

    Thanks for your great post!
    For the accuracy graph, what’s the indicator when it starts to get over-learned? What would the graph look like when it happens?

    Thanks in advance

    • Jason Brownlee October 1, 2016 at 8:01 am #

      Hi Alvin, great question.

      If the model is overfitting the graph will show great performance on the training data and poor performance on the test data.

  4. Suny October 6, 2016 at 3:43 am #

    Great tutorial, and very articulate about how each part of the network in Keras works.
    I had a quick question:
    does Keras support this kind of dataset for implementing an autoencoder rather than an FFN?


    • Jason Brownlee October 6, 2016 at 9:39 am #

      Hi Suny,

      Keras does support autoencoders, but I don’t use them generally as they have been surpassed by big MLPs and specialized methods like LSTMs and CNNs that can learn features while training.

  5. Yuanliang Meng November 4, 2016 at 2:26 pm #

    Hello Jason (and all).
    When dropout is applied, I wonder how the loss and acc values are computed. After each epoch, does the program still drop the neurons/weights to compute the loss and accuracy, or use the whole network?

    • Jason Brownlee November 5, 2016 at 7:28 am #

      Great question,

      Dropout is only applied during training, not when making predictions.

  6. Bo November 10, 2016 at 10:16 am #

    Hi Jason,

    Thanks for all of the great tutorials!
    I’d like to be able to plot the history of a stateful LSTM. I’ve tried something like the below, but in this case it’s failing b/c I’m asking python dicts to do something they don’t like (I’m new to python). I’ve tried some other approaches which have all failed for python-related reasons.

    Reprinting your .fit() code from your stateful tutorial (and adding a failed attempt at capturing history):

    my_history = {}
    for i in range(100):
        history = model.fit(trainX, trainY, nb_epoch=1, batch_size=batch_size, verbose=2, shuffle=False)

    What am I doing wrong here? Thanks!

    • Jason Brownlee November 11, 2016 at 9:58 am #

      Very interesting idea Bo.

      Consider using a list and appending the history object to the list. Also consider creating an all-new model each iteration to keep it an apples-to-apples comparison.

      Let me know how you go and what you discover!
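A minimal sketch of that suggestion (the tiny model and data are illustrative stand-ins, not Bo's stateful LSTM): accumulate each single-epoch History's values into one running dictionary.

```python
# Capture metrics across a manual epoch loop by appending each History
# object's values into one dictionary (model and data are placeholders).
import numpy as np
from tensorflow import keras

X = np.random.random((20, 4))
y = np.random.randint(0, 2, 20)

model = keras.Sequential([
    keras.Input(shape=(4,)),
    keras.layers.Dense(4, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(loss="binary_crossentropy", optimizer="adam")

my_history = {"loss": []}
for i in range(5):
    h = model.fit(X, y, epochs=1, batch_size=5, verbose=0, shuffle=False)
    my_history["loss"].extend(h.history["loss"])  # one value per inner epoch

print(len(my_history["loss"]))  # one loss value per outer iteration
```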

  7. nagendra somanath December 5, 2016 at 10:52 am #

    How can one display the neural net used in Keras? Is there a simple way to plot the network?

  8. Aviel December 6, 2016 at 6:13 pm #

    Hi Jason,

    I would like to visualize loss and accuracy graphs per each epoch during training.
    I was thinking of doing so by writing a callback but wasn’t sure exactly how and if this can be done.
    What do you suggest?

    • Jason Brownlee December 7, 2016 at 8:55 am #

      Hi Aviel, Keras is not designed to do this natively.

      Maybe use a callback to post to a file/db and use a separate process to plot?

      I would suggest getting something quick-and-dirty like that going and seeing how it looks.
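One concrete way to do this with a built-in callback: Keras's CSVLogger writes the per-epoch metrics to a file that a separate process can read and plot (the tiny model and data below are placeholders, not Aviel's).

```python
# Stream per-epoch training metrics to a CSV file with the built-in
# CSVLogger callback; a separate process can then plot from the file.
import numpy as np
from tensorflow import keras

X = np.random.random((32, 8))
y = np.random.randint(0, 2, 32)

model = keras.Sequential([
    keras.Input(shape=(8,)),
    keras.layers.Dense(4, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])

# CSVLogger appends one row per epoch with the logged metric values.
logger = keras.callbacks.CSVLogger("training_log.csv", append=False)
model.fit(X, y, epochs=3, verbose=0, callbacks=[logger])
```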

  9. Charlie Parker February 25, 2017 at 5:42 am #


    First, thanks so much for the tutorial!

    I have a quick question. I want to plot the graphs but my computing resources are **not** local. Is there a way to have a callback or something that stored each error value in a CSV file and later plot it? Or is there a way idk to save history object, maybe pickle it and then send to my local computer with some standard tool, like rsync or dropbox?

    What do you recommend for these remote plotting experiments? I just need to get the data somewhere I can plot the error/loss vs epochs.

    (also, can I plot vs iterations instead of epochs? just curious)

    • Jason Brownlee February 25, 2017 at 6:03 am #

      Hi Charlie,

      You can store the history in an array and then create and save the plot as a file, without a display.
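A sketch of that approach: save the training curve straight to an image file on a machine with no display by using matplotlib's non-interactive Agg backend (the loss values below are placeholders standing in for history.history["loss"]).

```python
# Render a plot without a display using the Agg backend, then save it
# to a file that can be copied back locally (e.g. with rsync or scp).
import matplotlib
matplotlib.use("Agg")  # must be set before importing pyplot
import matplotlib.pyplot as plt

losses = [0.90, 0.72, 0.61, 0.55, 0.52]  # placeholder per-epoch losses
plt.plot(losses)
plt.xlabel("epoch")
plt.ylabel("loss")
plt.savefig("remote_loss.png")
```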

  10. pattijane April 1, 2017 at 10:39 pm #


    I have a very simple question and I hope you don’t mind me asking. I want to save the loss function figure with plt.savefig(“figure”), but I get a “module is not callable” error, and if I comment out plt.savefig(“figure”) everything works just fine. Do you happen to have any idea why?

    Thanks a lot!

    • Jason Brownlee April 2, 2017 at 6:28 am #

      Ensure you have matplotlib installed and configured correctly.

      • pattijane May 12, 2017 at 5:57 am #


        I solved the error, thanks! I have another issue, however: I’m doing a grid search over parameters (epoch and batch size) and for each combination I plot the loss function. However, it just keeps drawing each result on top of the others in the same figure! Any idea why that might happen?

        • Jason Brownlee May 12, 2017 at 7:50 am #

          Sorry, I don’t have experience capturing history within a grid search.

          I would recommend writing your own for-loops/grid search so you have more flexibility.

  11. Dave April 23, 2017 at 8:39 am #


    Great work.

    Quick question. I am using TensorFlow without Keras at the moment, and am plotting the loss and accuracy of a CNN. I am using cross entropy with the Adam optimizer, and using the cross entropy value as the loss. Is this right?
    Also, if the loss is in the 200-300 range, should I be plotting the log of this value? All the graphs I see have the loss between 0 and 1.


  12. Caleb Everett April 27, 2017 at 1:50 pm #

    Hello, thank you for all the great information. Can you provide any suggestions on how to access the training history if the Keras model is part of a pipeline?

    Thank you,


    • Jason Brownlee April 28, 2017 at 7:33 am #

      Sorry, I have not done this myself. You may need to extend the sklearn wrapper and capture this information manually.

  13. Anastasios Selalmazidis May 2, 2017 at 1:01 am #

    Hi Jason,

    I am running this example from your book, but I am using cross-validation, in particular StratifiedKFold. So when I fit the model I do not pass a validation_split or validation_data, hence my model history has only the keys [‘acc’, ‘loss’]. I am using model.evaluate(). How can I visualize the test results?

    • Jason Brownlee May 2, 2017 at 6:00 am #

      You could hold back a validation set, or you could collect the history of each model evaluated on each cross-validation fold.

  14. Nir June 4, 2017 at 8:39 pm #

    Hi Jason!
    I have two problems:
    1) when setting verbose to 2, I expect printing during each epoch including a progress bar, but I see only the train and validation loss (without the accuracy or progress bar)

    2) when the run reaches the part that tries to plot, I receive an error:
    KeyError: ‘acc’
    Exception ignored in: <bound method BaseSession.__del__ of >
    Traceback (most recent call last):
    File “C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\client\session.py”, line 582, in __del__
    UnboundLocalError: local variable ‘status’ referenced before assignment

    thanks !

    • Jason Brownlee June 5, 2017 at 7:41 am #

      That is correct; if you want a progress bar, set verbose=1.

      You must add the accuracy metric when compiling the model. The error suggests this was not done. Learn more about metrics here:

      • Nir June 6, 2017 at 2:37 am #

        Hi Jason, thanks a lot, I still have a few more questions:
        a. How can I plot the ROC curve using history object?
        b. How can I save best model after each epoch? (overwrite my model with a new one only if the accuracy over the validation set has improved)


  15. Mirza Mohtashim Alam June 8, 2017 at 7:40 am #

    Can you please tell me how I can keep the history of the classifier.fit_generator() function?

  16. Arbish June 13, 2017 at 6:37 am #

    Hi Jason!

    I want to access model training history in TFLearn to plot graphs.
    How can we do this in TFLearn?

Leave a Reply