Save and Load Your Keras Deep Learning Models

Keras is a simple and powerful Python library for deep learning.

Given that deep learning models can take hours, days and even weeks to train, it is important to know how to save and load them from disk.

In this post, you will discover how you can save your Keras models to file and load them up again to make predictions.

Let’s get started.

  • Update Mar 2017: Added instructions to install h5py first. Added missing brackets on final print statement in each example.
  • Update Mar/2017: Updated example for Keras 2.0.2, TensorFlow 1.0.1 and Theano 0.9.0.
Photo by art_inthecity, some rights reserved.

Tutorial Overview

Keras separates the concerns of saving your model architecture and saving your model weights.

Model weights are saved to HDF5 format. This is a grid format that is ideal for storing multi-dimensional arrays of numbers.

The model structure can be described and saved using two different formats: JSON and YAML.

In this post we are going to look at two examples of saving and loading your model to file:

  • Save Model to JSON.
  • Save Model to YAML.

Each example will also demonstrate saving and loading your model weights to HDF5 formatted files.

The examples will use the same simple network trained on the Pima Indians onset of diabetes binary classification dataset. This is a small dataset that contains all numerical data and is easy to work with. You can download this dataset and place it in your working directory with the filename "pima-indians-diabetes.csv".

Confirm that you have the latest version of Keras installed (v1.2.2 as of March 2017).

Note: You may need to install h5py first:
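The install command from the original listing was dropped during extraction; on most systems it is along these lines (the sudo prefix is only needed outside a virtualenv or conda environment):

```shell
sudo pip install h5py
```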


Save Your Neural Network Model to JSON

JSON is a simple file format for describing data hierarchically.

Keras provides the ability to describe any model using JSON format with a to_json() function. This can be saved to file and later loaded via the model_from_json() function that will create a new model from the JSON specification.

The weights are saved directly from the model using the save_weights() function and later loaded using the symmetrical load_weights() function.

The example below trains and evaluates a simple model on the Pima Indians dataset. The model is then converted to JSON format and written to model.json in the local directory. The network weights are written to model.h5 in the local directory.

The model and weight data are loaded from the saved files, and a new model is created. It is important to compile the loaded model before it is used, so that predictions made with the model can use the appropriate efficient computation from the Keras backend.

The model is evaluated in the same way, printing the same evaluation score.

Running this example provides the output below.

The JSON format of the model looks like the following:

Save Your Neural Network Model to YAML

This example is much the same as the above JSON example, except the YAML format is used for the model specification.

The model is described using YAML, saved to file model.yaml and later loaded into a new model via the model_from_yaml() function. Weights are handled in the same way as above in HDF5 format as model.h5.

Running the example displays the following output:

The model described in YAML format looks like the following:

Summary

In this post, you discovered how to serialize your Keras deep learning models.

You learned how you can save your trained models to files and later load them up and use them to make predictions.

You also learned that model weights are easily stored using HDF5 format and that the network structure can be saved in either JSON or YAML format.

Do you have any questions about saving your deep learning models or about this post? Ask your questions in the comments and I will do my best to answer them.


113 Responses to Save and Load Your Keras Deep Learning Models

  1. Onkar August 31, 2016 at 4:21 pm #

    Hi Jason,

I am grateful to you for sharing knowledge through this blog. It has been very helpful for me.
Thank you for the effort.

I have one question. When I am executing Keras code to load YAML/JSON data, I am seeing the following error.

    Traceback (most recent call last):
    File "", line 158, in
    loaded_model = model_from_yaml(loaded_model_yaml)
    File "/usr/local/lib/python2.7/dist-packages/Keras-1.0.4-py2.7.egg/keras/", line 26, in model_from_yaml
    return layer_from_config(config, custom_objects=custom_objects)
    File "/usr/local/lib/python2.7/dist-packages/Keras-1.0.4-py2.7.egg/keras/utils/", line 35, in layer_from_config
    return layer_class.from_config(config['config'])
    File "/usr/local/lib/python2.7/dist-packages/Keras-1.0.4-py2.7.egg/keras/", line 781, in from_config
    layer = get_or_create_layer(first_layer)
    File "/usr/local/lib/python2.7/dist-packages/Keras-1.0.4-py2.7.egg/keras/", line 765, in get_or_create_layer
    layer = layer_from_config(layer_data)
    File "/usr/local/lib/python2.7/dist-packages/Keras-1.0.4-py2.7.egg/keras/utils/", line 35, in layer_from_config
    return layer_class.from_config(config['config'])
    File "/usr/local/lib/python2.7/dist-packages/Keras-1.0.4-py2.7.egg/keras/engine/", line 896, in from_config
    return cls(**config)
    File "/usr/local/lib/python2.7/dist-packages/Keras-1.0.4-py2.7.egg/keras/layers/", line 290, in __init__
    self.init = initializations.get(init)
    File "/usr/local/lib/python2.7/dist-packages/Keras-1.0.4-py2.7.egg/keras/", line 109, in get
    'initialization', kwargs=kwargs)
    File "/usr/local/lib/python2.7/dist-packages/Keras-1.0.4-py2.7.egg/keras/utils/", line 14, in get_from_module
    Exception: Invalid initialization:

    What could be the reason? The file is getting saved properly, but at the time of loading the model I am facing this issue.
    Can you please give me any pointers?


    • Jason Brownlee September 1, 2016 at 7:57 am #

      Sorry Onkar, the fault is not clear.

      Are you able to execute the example in the tutorial OK?

      • Ridhesh January 11, 2018 at 4:12 pm #

        Hi Jason,

        Nice post with helpful steps to save and evaluate a model. How do I run the saved model on NEW data without having to re-train it on the new data? Let's say I have a linear regression y=mx+c trained on a set of x; once I obtain m and c, the only thing I need to do is input NEW x and get the predicted y with the same m and c. I am unable to use an LSTM model along these lines.

        Thank you in advance for your help and comments.

        • Jason Brownlee January 12, 2018 at 5:52 am #

          Load the model and make predictions:


          Perhaps I don’t understand the problem?

  2. Walid Ahmed September 27, 2016 at 1:54 am #

    Your code worked fine.
    I tried to add saving the model to my code, but the files were not actually created, although I got no error messages.

    Please advise.


    • Jason Brownlee September 27, 2016 at 7:45 am #

      I expect the files were created. Check your current working directory / the dir where the source code files are located.

  3. Peter September 27, 2016 at 1:43 pm #

    Hi Jason,

    Thanks for creating this valuable content.

    On my Mac (OSX10.11), the script ran fine until the last line, in which it gave a syntax error below:

    >>> print "%s: %.2f%%" % (loaded_model.metrics_names[1], score[1]*100)
    File "", line 1
    print "%s: %.2f%%" % (loaded_model.metrics_names[1], score[1]*100)
    SyntaxError: invalid syntax

    What could be the issue here?


    • Jason Brownlee September 28, 2016 at 7:37 am #

      Hi Peter, you may be on Python3, try adding brackets around the argument to the print functions.

  4. Deployment September 29, 2016 at 1:03 am #


    Your blog and books were great, and thanks much to you I finally got my project working in Keras.

    I can’t seem to find how to translate a Keras model in to a standalone code that can run without Keras installed.

    The best I could find was to learn TensorFlow, build an equivalent model in TF, then use TF to create standalone code.

    Does Keras not have such functionality?


    • Jason Brownlee September 29, 2016 at 8:37 am #

      Hi, my understanding is that Keras is required to use the model in prediction.

      You could try to save the network weights and use them in your own code, but you are creating a lot of work for yourself.

      • vishnu prasad July 3, 2017 at 11:00 am #

        Thanks Jason for this incredible blog.
        Is saving and reloading a model only possible with Keras, or also with other scikit-learn models like k-means etc.?
        When I have a few categorical features which are also one-hot encoded, like salary grade or country etc., let's say you saved the model, but how can I apply the same encoding and feature scaling on the input data for which I am expected to give an output?
        E.g. I may have trained a cancer outcome model based on country, gender, smoking and drinking status like often, occasional, rare etc. Now when I get a new record, how do I ensure my encoding and feature scaling are aligned with my training set and convert it to get a prediction?
        Thanks in advance for your help.

  5. Davood November 22, 2016 at 10:48 am #

    Hello Jason,

    Thanks for your great and very helpful website.

    Since in here you talked about how to save a model, I wanted to know how we can save an embedding layer in the way that can be seen in a regular word embeddings file (i.e. text file or txt format). Let’s assume we either learn these word embeddings in the model from scratch or we update those pre-trained ones which are fed in the first layer of the model.

    I truly appreciate your response in advance.


    • Jason Brownlee November 23, 2016 at 8:49 am #

      I’m not sure we need to save embedding layers Davood.

      I believe they are deterministic and can just be re-created.

      • Davood November 23, 2016 at 10:51 am #

        I guess we should be able to save word embeddings at one point (not needed always though!). To visualize/map them in a (2D) space or to test algebraic word analogies on them can be some examples of this need.

        I found the answer for this and I’m sharing this here:

        If we train an embedding layer emb (e.g. emb = Embedding(some_parameters_here)), we can get the resulting word-by-dimension matrix with my_embeddings = emb.get_weights()[0]. Then we can do normal numpy things like'my_embeddings.npy', my_embeddings) to save this matrix, or use other built-in write-to-file functions in Python to store each line of this matrix along with its associated word. These words and their indices are typically stored in a word_index dictionary somewhere in the code.

        • Jason Brownlee November 24, 2016 at 10:36 am #

          Very nice, thanks for sharing the specifics Davood.

          • Davood November 29, 2016 at 7:25 pm #

            You are very welcome Jason.
            However I have another question here!
            Let's assume we have two columns of networks in Keras and these two columns are exactly the same. These two are going to merge at the top and then feed into a dense layer which is the output layer in our model. My question is, while the first layer of each column here is an embedding layer, how can we share the weights of the similar layers in the columns? Needless to say, we set our embedding layers (first layers) so that we only have one embedding matrix. What I mean is shared embeddings, something like this:
            emb1 = Embedding(some_parameters_here)
            emb2 = emb1 # instead of emb2 = Embedding(some_other_parameters_here)
            How about the other layers on top of these two embedding layers? How to share their weights?
            Thanks for your answer in advance.

          • Jason Brownlee November 30, 2016 at 7:55 am #

            Hmm, interesting Davood.

            I think, and could be wrong, that embedding layers are deterministic. They do not have state, only the weights in or out have state. Create two and use them side by side. Try it and see.

            I’d love to know how you go?

  6. Chao December 6, 2016 at 1:58 am #

    Hi Jason, thanks for sharing, it helps me a lot. I'd like to ask a question: why is the optimizer adam when compiling the model, but rmsprop instead when compiling the loaded_model?

    • Jason Brownlee December 6, 2016 at 9:53 am #

      I would suggest trying many different optimizers and see what you like best / works best for your problem.

      I find ADAM is fast and gives good results.

  7. kl December 13, 2016 at 11:47 pm #

    I have difficulties finding an answer to this question:

    when are weights initialized in keras?

    at compile time ? (probably not)

    on first epoch ?

    This is important when resuming learning

    • Jason Brownlee December 14, 2016 at 8:28 am #

      Interesting question.

      I don’t know.

      If I had to guess, I would say at the model.compile() time when the data structures are created.

      It might be worth asking on the keras email list – I’d love to know the answer.

  8. Soheil December 22, 2016 at 8:44 pm #

    Thank you for creating such great blog.
    I saved a model with the mentioned code. But when I wanted to load it again, I faced the following error. It seems the network architecture was not saved correctly?

    Exception Traceback (most recent call last)
    in ()
    1 # load weights into new model
    ----> 2 modelN.load_weights("model41.h5")
    3 print("Loaded model from disk")

    C:\Anaconda2\envs\py35\lib\site-packages\keras\engine\ in load_weights(self, filepath, by_name)
    2518 self.load_weights_from_hdf5_group_by_name(f)
    2519 else:
    -> 2520 self.load_weights_from_hdf5_group(f)
    2522 if hasattr(f, 'close'):

    C:\Anaconda2\envs\py35\lib\site-packages\keras\engine\ in load_weights_from_hdf5_group(self, f)
    2570 'containing ' + str(len(layer_names)) +
    2571 ' layers into a model with ' +
    -> 2572 str(len(flattened_layers)) + ' layers.')
    2574 # We batch weight value assignments in a single backend call

    Exception: You are trying to load a weight file containing 4 layers into a model with 5 layers.

    • Jason Brownlee December 23, 2016 at 5:31 am #

      Hi Soheil,

      It looks like the network structure that you are loading the weights into does not match the structure of the weights.

      Double check that the network structure matches exactly the structure that you used when you saved the weights. You can even save this structure as a json or yaml file as well.

  9. prajnya January 24, 2017 at 6:22 pm #

    Hi Jason,

    I have a question. Now that I have saved the model and the weights, is it possible for me to come back after a few days and train the model again with initial weights equal to the one I saved?

    • Jason Brownlee January 25, 2017 at 9:58 am #

      Great question prajnya.

      You can load the saved weights and continue training/update with new data or start making predictions.

      • Tharun July 9, 2017 at 2:31 am #

        Hi Jason,

        I tried to save and load a model trained for 5000 epochs and checked the performance of the model in the same session, in comparison with the model's performance just before saving after 5000 epochs. In any case, using the above code I ended up with random results. But when I only saved the weights, instantiated the model again, and loaded the weights with the argument by_name, i.e. model.load_weights('model.h5', by_name=True), the accuracy was the same as the model's starting performance at the 1st epoch/iteration. In any case I am not able to replicate/reproduce!!! I request you to clarify this with a post. There is a post on github too, but it is not yet resolved to satisfaction!!!

        • Jason Brownlee July 9, 2017 at 10:56 am #

          Sorry to hear that, I don’t have any good ideas. Perhaps post to stackoverflow?

      • Tharun July 9, 2017 at 2:33 am #

        the github post is at

  10. AKSHAY February 8, 2017 at 6:06 pm #

    Hi Jason,

    It is an amazing blog you have here. Thanks for the well documented works.
    I have a question regarding loading the model weights. Is there a way to save the weights into a variable rather than loading and assigning the weights to a different model?
    I wanted to do some operations on the weights associated with the intermediate hidden layer.

    I was anticipating on using ModelCheckpoint but I am a bit lost on reading weights from the hdf5 format and saving it to a variable. Could you please help me figure it out.


    • Jason Brownlee February 9, 2017 at 7:24 am #

      Great question, sorry I have not done this.

      I expect you will be able to extract them using the Keras API, it might be worth looking at the source code on github.

  11. Patrick March 1, 2017 at 3:53 am #

    Hi Jason

    thanks a lot for your excellent tutorials! Very much appreciated…

    Regarding the saving and loading: It seems that Keras as of now saves model and weights in HDF5 rather than only the weights.

    This results in a much simpler snippet for import / export:


    from keras.models import load_model'my_model.h5')  # creates a HDF5 file 'my_model.h5'
    del model  # deletes the existing model

    # returns a compiled model
    # identical to the previous one
    model = load_model('my_model.h5')


    • Jason Brownlee March 1, 2017 at 8:42 am #

      Thanks Patrick, I’ll investigate and look at updating the post soon.

  12. Avik Moulik March 7, 2017 at 8:11 am #

    Getting this error:

    NameError: name 'model_from_json' is not defined

    Thanks in advance for any help on this.

    • Jason Brownlee March 7, 2017 at 9:38 am #

      Confirm that you have Keras 1.2.2 or higher installed.

  13. Chan April 13, 2017 at 4:58 am #

    I have saved my weights already in a txt file. Can I use it and load weights?

    • Jason Brownlee April 13, 2017 at 10:13 am #

      You may be able, I don’t have an example off-hand, sorry.

  14. M Amer April 20, 2017 at 2:45 am #

    Hi Jason,
    Thank you for this great tutorial.
    I want to convert this Keras model (model.h5) to a TensorFlow model (filename.pb) because I want to use it in Android. I have used the github code, that is:

    import keras
    import tensorflow
    from keras import backend as K
    from tensorflow.contrib.session_bundle import exporter
    from keras.models import model_from_config, Sequential

    print("Loading model for exporting to Protocol Buffer format...")
    model_path = "C:/Users/User/buildingrecog/model.h5"
    model = keras.models.load_model(model_path)

    K.set_learning_phase(0)  # all new operations will be in test mode from now on
    sess = K.get_session()

    # serialize the model and get its weights, for quick re-building
    config = model.get_config()
    weights = model.get_weights()

    # re-build a model where the learning phase is now hard-coded to 0
    new_model = Sequential.model_from_config(config)

    export_path = "C:/Users/User/buildingrecog/khi_buildings.pb"  # where to save the exported graph
    export_version = 1  # version number (integer)

    saver = tensorflow.train.Saver(sharded=True)
    model_exporter = exporter.Exporter(saver)
    signature = exporter.classification_signature(input_tensor=model.input, scores_tensor=model.output)
    model_exporter.init(sess.graph.as_graph_def(), default_graph_signature=signature)
    model_exporter.export(export_path, tensorflow.constant(export_version), sess)

    but has the following error…

    Loading model for exporting to Protocol Buffer format...
    ValueError Traceback (most recent call last)
    in ()
    7 print("Loading model for exporting to Protocol Buffer format...")
    8 model_path = "C:/Users/User/buildingrecog/model.h5"
    ----> 9 model = keras.models.load_model(model_path)
    11 K.set_learning_phase(0)  # all new operations will be in test mode from now on

    C:\Users\User\Anaconda3\lib\site-packages\keras\ in load_model(filepath, custom_objects)
    228 model_config = f.attrs.get('model_config')
    229 if model_config is None:
    --> 230 raise ValueError('No model found in config file.')
    231 model_config = json.loads(model_config.decode('utf-8'))
    232 model = model_from_config(model_config, custom_objects=custom_objects)

    Please help me to solve this…!!

    • Jason Brownlee April 20, 2017 at 9:32 am #

      Sorry, I don’t know how to load keras models in tensorflow off-hand.

    • Ravid April 28, 2017 at 12:29 am #

      M Amer,

      I am trying to do exactly the same thing. Please let us know if you figure it out.

  15. M Amer April 21, 2017 at 1:42 am #

    Hi Jason,
    I have created the Keras model file (.h5), but unfortunately it can't be loaded. I want to load it and convert it into a TensorFlow (.pb) model. Any solution? Waiting for your response.

    • Jason Brownlee April 21, 2017 at 8:39 am #

      Sorry, I don’t have an example of how to load a Keras model in TensorFlow.

  16. Sanjay April 24, 2017 at 2:41 am #

    Hi Jason,

    I am having issues with loading a model which has been saved after normalising (StandardScaler) the columns. Do you have to apply the normalising (StandardScaler) when you load the model too?

    Here is the snippet of code: 1) Save and 2)Load

    import numpy as np
    import matplotlib.pyplot as plt
    import pandas as pd

    # Importing the dataset
    dataset = pd.read_csv('Churn_Modelling.csv')
    X = dataset.iloc[:, 3:13].values
    y = dataset.iloc[:, 13].values

    # Encoding categorical data
    from sklearn.preprocessing import LabelEncoder, OneHotEncoder
    labelencoder_X_1 = LabelEncoder()
    X[:, 1] = labelencoder_X_1.fit_transform(X[:, 1])
    labelencoder_X_2 = LabelEncoder()
    X[:, 2] = labelencoder_X_2.fit_transform(X[:, 2])
    onehotencoder = OneHotEncoder(categorical_features=[1])
    X = onehotencoder.fit_transform(X).toarray()
    X = X[:, 1:]

    # Splitting the dataset into the Training set and Test set
    from sklearn.model_selection import train_test_split
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    # Feature Scaling
    from sklearn.preprocessing import StandardScaler
    sc = StandardScaler()
    X_train = sc.fit_transform(X_train)
    X_test = sc.transform(X_test)

    # Importing the Keras libraries and packages
    import keras
    from keras.models import Sequential
    from keras.layers import Dense

    # Initialising the ANN
    classifier = Sequential()

    classifier.add(Dense(units=6, kernel_initializer='uniform', activation='relu', input_dim=11))
    classifier.add(Dense(units=6, kernel_initializer='uniform', activation='relu'))
    classifier.add(Dense(units=1, kernel_initializer='uniform', activation='sigmoid'))
    classifier.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

    # Fitting the ANN to the Training set, y_train, batch_size=10, epochs=1)

    # Predicting the Test set results
    y_pred = classifier.predict(X_test)
    y_pred = (y_pred > 0.5)

    # Saving your model'ann_churn_model_v1.h5')

    import numpy as np
    import matplotlib.pyplot as plt
    import pandas as pd

    # Reuse churn_model_v1.h5
    import keras
    from keras.models import load_model
    classifier = load_model('ann_churn_model_v1.h5')

    # Feature Scaling - Here I have a question whether to apply StandardScaler after loading the model?

    from sklearn.preprocessing import StandardScaler
    #sc = StandardScaler()

    new_prediction = classifier.predict(sc.transform(np.array([[0.0, 0.0, 600, 1, 40, 3, 60000, 2, 1, 1, 50000]])))
    new_prediction = (new_prediction > 0.5)


    • Jason Brownlee April 24, 2017 at 5:38 am #

      You will also need to save your scaler.

      Perhaps you can pickle it or just the coefficients (min/max for each feature) needed to scale data.
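One way to act on this advice, sketched with plain numpy and pickle rather than the sklearn object itself (the file name and toy data are illustrative):

```python
# Sketch: persist the scaling coefficients used at training time so new
# records at prediction time get exactly the same transform.
import pickle
import numpy as np

X_train = np.array([[1.0, 200.0], [3.0, 400.0], [5.0, 600.0]])
params = {'mean': X_train.mean(axis=0), 'std': X_train.std(axis=0)}

with open('scaler.pkl', 'wb') as f:
    pickle.dump(params, f)

# later, in the prediction script, reload and apply the same scaling
with open('scaler.pkl', 'rb') as f:
    params = pickle.load(f)

new_record = np.array([[2.0, 300.0]])
scaled = (new_record - params['mean']) / params['std']
```

Pickling the fitted StandardScaler object itself works the same way, as long as scikit-learn is available on the loading side.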

  17. Rohit April 28, 2017 at 9:01 pm #

    Thanks for the useful information.

    Is it possible to load this model and weights to any other platform, for example Android or iOS. I believe that model and weights are language independent.

    Are there any free / open source solutions for this purpose?

    • Jason Brownlee April 29, 2017 at 7:24 am #

      I don't see why not. Sorry, I'm not across the Android or iOS platforms.

  18. Kshitij Deshmukh May 12, 2017 at 11:57 pm #

    Hi Jason,

    How can I create a model out of face recognition encodings to save using the method?

  19. Lotem May 15, 2017 at 6:33 pm #

    Hey Jason, have you tried saving a model, closing the python session, then opening a new python session and then loading a model?

    Using python 3.5, if I save a trained model in one session and load it in another, my accuracy drops dramatically and the predictions become random (as if the model wasn’t trained).

    This is what I’m trying to do:
    embedding_size = 64
    hidden_size = 64
    input_length = 100
    learning_rate = 0.1
    patience = 3
    num_labels = 6
    batch_size = 50
    epochs = 100
    seq_len = 100

    model = Sequential()
    model.add(Embedding(vocab_size, embedding_size, input_length=input_length))
    model.add(Bidirectional(GRU(hidden_size, return_sequences=True, activation="tanh")))
    model.add(TimeDistributed(Dense(num_labels, activation='softmax')))
    optimizer = Adagrad(lr=learning_rate)
    model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=['categorical_accuracy'])
    callbacks = [EarlyStopping(monitor='val_loss', patience=patience, verbose=0)], y_train, batch_size=batch_size, epochs=epochs, callbacks=callbacks, validation_data=[x_dev, y_dev])'model.h5')

    Evaluating the model in this point gives me accuracy of ~70.

    Then I exit python, open a new python session, and try:

    model2 = load_model('model_full.h5')

    Evaluating the model in this point gives me accuracy of ~20.

    Any ideas?

    • Jason Brownlee May 16, 2017 at 8:42 am #

      I have. I don’t believe it is related to the Python session.

      Off the cuff, my gut tells me something is different in the saved model.

      If you save and load in the same session is the result the same as prior to the save? What if you repeat the load+test process a few times?

      Confirm that you are saving the Embedding as well (I think it may need to be saved).

      Confirm that you are evaluating it on exactly the same data in the same order and in the same way.

      Neural nets are stochastic, and a deviation could affect the internal state of your RNN and result in different results, perhaps not as dramatic as you are reporting though.

  20. Carl May 26, 2017 at 8:05 pm #

    I seem to get an error message

    RuntimeError: Unable to create attribute (Object header message is too large)

    A Github issue states that the error could be due to a too-large network, which is the case here... but

    how should I then save the weights? Keras doesn't seem to have any alternative methods.

    • Jason Brownlee June 2, 2017 at 11:54 am #

      Sorry, I have not seen this error.

      See if you can save the weights with a smaller network on your system to try and narrow down the cause of the fault.

  21. nguyennguyen June 2, 2017 at 4:56 pm #

    Hey guys,
    I want to know how I can update a model in a running service. Say I have a better model, version 2, and I don't want to stop the service that currently uses version 1 in Keras. I mean something like a model manager that can load a new model version without breaking the service. Thank you.

  22. George June 6, 2017 at 6:14 pm #

    Hi Jason,
    do you know if it's possible to save the model only when its accuracy on the validation set has improved (after each epoch)?

    and is it possible to check the validation more frequently than every epoch?


  23. Prathap June 13, 2017 at 5:45 am #

    Hi Dr. Jason,
    I am using Keras with the TensorFlow backend. I have saved my model as you mentioned here. But the problem is that it takes longer than expected to load the weights. I am only using a CPU (not a GPU) since my model is quite small. Can you please let me know how to improve the loading time of the model? Compared to a pickled scikit-learn model's loading time, this is very high (nearly a minute).

    • Jason Brownlee June 13, 2017 at 8:26 am #

      That is a long time.

      Confirm that it is Keras causing the problem.

      Perhaps it is something else in your code?

      Perhaps you have a very slow HDD?

      Perhaps you are running out of RAM for some reason?

  24. Anastasios Selalmazidis June 18, 2017 at 6:54 am #

    Hi Jason,

    how can I save a model after gridsearch? I keep getting errors: "AttributeError: 'KerasClassifier' object has no attribute 'save'"

  25. AndreasM July 8, 2017 at 2:02 pm #

    Hi Jason,
    I have two questions,
    1) why do you compile() the model a second time after load_weights() to reload the model from file?
    2) in both examples, you use one optimizer to compile() before the fit(), but pass a different optimizer to compile() after load_weights(). Isn't that problematic? If not, why should we use a different optimizer?

  26. Devakar July 18, 2017 at 3:48 pm #

    I saved the model and weights. Then I loaded the model from another Python script, and it's not working. Why?

    It works if saving and loading the model happen within the same Python script.

    I am puzzled by this behaviour. Any help, please.

  27. PandaN August 20, 2017 at 8:34 pm #

    Thanks for the article! Helped a lot..
    I had the following doubt though –

    The following from the Keras docs itself:

    You can use to save a Keras model into a single HDF5 file which will contain:

    – the architecture of the model, allowing to re-create the model
    – the weights of the model
    – the training configuration (loss, optimizer)
    – the state of the optimizer, allowing to resume training exactly where you left off.

    As it says, it also saves the training configuration (loss, optimizer), so why are we compiling again after loading the model and weights? Why don't we just directly evaluate on the test data?

    • Jason Brownlee August 21, 2017 at 6:05 am #

      The API has changed since I wrote this tutorial. You can now save the model in one file and you no longer need to compile after loading.

  28. jan balewski August 30, 2017 at 4:15 pm #

    Hi Jason,
    I'm a Keras novice and I really enjoyed your short tutorials – I have learned a lot.
    Perhaps you can advise me how to extend the concept of saving/loading the net config to a more complex case, where 2 Sequential nets are merged into a new Sequential net, something like this:

    model1 = Sequential()

    model2 = Sequential()

    model3 = Sequential()
    model3.add(Merge([model1, model2], mode='concat'))

    I can save/load each model[1-3] separately. But after I load the 3 pieces back, I do not know how to glue them together. Can you help with how to execute the equivalent of
    model3.add(Merge([model1, model2], mode='concat')) after the 3 sub-nets were read in from YAML?
    Below is a simple toy code which is missing just this last step.
    Thanks in advance

    – – –
    #!/usr/bin/env python
    '''defines a multi-branch Sequential net with Merge(),
    saves the net to YAML,
    reads it back (and loses some pieces)'''

    import os
    import warnings
    os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'  # hide messy TensorFlow warnings
    warnings.filterwarnings('ignore')  # hide messy NumPy warnings

    import yaml
    from keras.models import Sequential, model_from_yaml
    from keras.layers import Dense, Merge, LSTM

    lstm_na = 60  # branch widths (values assumed)
    lstm_nb = 40

    inp_sh1 = (10, 20)
    inp_sh2 = (11, 22)
    print('build_model inp1:', inp_sh1, ' inp2:', inp_sh2)

    model1 = Sequential()
    model1.add(LSTM(lstm_na, input_shape=inp_sh1))

    model2 = Sequential()
    model2.add(LSTM(lstm_nb, input_shape=inp_sh2))

    model3 = Sequential()
    model3.add(Merge([model1, model2], mode='concat'))
    model3.add(Dense(1, activation='sigmoid'))  # predicts only 0/1

    # Compile model
    model3.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

    print('input1:', inp_sh1, 'input2:', inp_sh2)
    print('1st LSTM branch:')
    model1.summary()  # will print
    print('2nd LSTM branch:')
    model2.summary()  # will print
    print('final Dense branch:')
    model3.summary()  # will print

    print('----------- Save model as YAML -----------')
    yamlRec1 = model1.to_yaml()
    yamlRec2 = model2.to_yaml()
    yamlRec3 = model3.to_yaml()

    with open('jan.model1.yaml', 'w') as outfile:
        yaml.dump(yamlRec1, outfile)
    with open('jan.model2.yaml', 'w') as outfile:
        yaml.dump(yamlRec2, outfile)
    with open('jan.model3.yaml', 'w') as outfile:
        yaml.dump(yamlRec3, outfile)

    print('----------- Read model from YAML -----------')
    with open('jan.model1.yaml', 'r') as inpfile:
        yamlRec1b = yaml.load(inpfile)
    model1b = model_from_yaml(yamlRec1b)
    model1b.summary()  # will print

    with open('jan.model2.yaml', 'r') as inpfile:
        yamlRec2b = yaml.load(inpfile)
    model2b = model_from_yaml(yamlRec2b)
    model2b.summary()  # will print

    with open('jan.model3.yaml', 'r') as inpfile:
        yamlRec3b = yaml.load(inpfile)
    model3b = model_from_yaml(yamlRec3b)
    model3b.summary()  # will print

    • Jason Brownlee August 30, 2017 at 4:21 pm #

      Perhaps you can define your model using the functional API and save it as one single model.

      Alternatively, perhaps you can load the individual models and use the functional API to piece them back together.

      I have a post on the functional API scheduled, but until then, you can read about it here:

      • jan balewski August 31, 2017 at 12:23 pm #

        Thanks a lot, Jason!
        After I switched to net = concatenate([net1, net2]) it works like a charm.
        I’m attaching a working toy example. Feel free to erase my previous non-working code.
        Thanks again

        – – – – –
        import yaml
        from keras.layers import Dense, LSTM, Input, concatenate
        from keras.models import Model, model_from_yaml

        input1 = Input(shape=(10, 11), name='inp1')
        input2 = Input(shape=(20, 22), name='inp2')
        print('build_model inp1:', input1.get_shape(), ' inp2:', input2.get_shape())
        net1 = LSTM(60)(input1)
        net2 = LSTM(40)(input2)
        net = concatenate([net1, net2], name='concat-jan')
        net = Dense(30, activation='relu')(net)
        outputs = Dense(1, activation='sigmoid')(net)  # predicts only 0/1
        model = Model(inputs=[input1, input2], outputs=outputs)
        # Compile model
        model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
        model.summary()  # will print

        print('----------- Save model as YAML -----------')
        yamlRec = model.to_yaml()
        with open('jan.model.yaml', 'w') as outfile:
            yaml.dump(yamlRec, outfile)

        print('----------- Read model from YAML -----------')
        with open('jan.model.yaml', 'r') as inpfile:
            yamlRec4 = yaml.load(inpfile)
        model4 = model_from_yaml(yamlRec4)
        model4.summary()  # will print

  29. Azam September 16, 2017 at 3:49 am #

    Hi, I have a five-layer model. I saved the model, and later I want to load only the first four layers. Would you please tell me if that is possible?

    • Jason Brownlee September 16, 2017 at 8:44 am #

      I would recommend loading the whole model and then re-defining it with the unwanted layer removed.
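      As a sketch of that idea (using a small made-up five-layer model in place of your loaded one):

```python
from keras.models import Sequential, Model
from keras.layers import Dense

# stand-in for a model you would get from load_model('my_model.h5')
model = Sequential()
model.add(Dense(8, input_dim=4, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))

# re-define the model so it ends at the fourth layer (index 3)
truncated = Model(inputs=model.input, outputs=model.layers[3].output)
print(truncated.output_shape)  # (None, 8)
```

      The truncated model shares the loaded weights, so no retraining is needed.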

  30. sirisha September 19, 2017 at 10:39 am #

    I have trained a CNN with 3 convolution layers and 3 max-pooling layers for text classification. First, the top n words are picked from the dataset of 1000 documents, and an embedding matrix is constructed for them by looking up these words in the GloVe embeddings and appending the corresponding word vector when the word is found.
    I tested the validation accuracy. I saved the model in h5 format.

    Now, I want to load the model in another Python file and use it to predict the class label of an unseen document. I used the following code:
    from keras.models import load_model
    from keras.preprocessing.text import Tokenizer
    from keras.preprocessing.sequence import pad_sequences
    import numpy as np
    import json

    model1 = load_model('my_model.h5')

    f = open('/home/siri/Japan_Project/preprocessing/complete_data_stop_words/technology/X.txt', encoding='latin-1')
    text = f.read()

    # MAX_NUMBER_OF_WORDS and MAX_SEQUENCE_LENGTH are assumed to be defined elsewhere
    tokenizer = Tokenizer(num_words=MAX_NUMBER_OF_WORDS)

    print('\n text = ')
    sequence_list = tokenizer.texts_to_sequences(text)
    print('\n text to sequences= ')

    data = pad_sequences(sequence_list, maxlen=MAX_SEQUENCE_LENGTH)

    print('\n np.array(data)')
    prediction = model1.predict(np.array(data))

    y_classes = prediction.argmax(axis=-1)

    with open('data.json', 'r') as fp:
        labels_index = json.load(fp)

    for k, v in labels_index.items():
        print('\n key= ', k, ' val= ', v)

    print('\n printing class label=')
    for k, v in labels_index.items():
        if y_classes[0] == v:
            print('\n key= ', k, ' val= ', v)

    My doubt is that I did not use word embeddings as input to the model now; instead I used numpy.array(data). Is that correct? Can we give word embeddings as input to the predict function of Keras?

    I also saved the class label index (dictionary of class labels) in the data.json file after training, and loaded it back in this file to find the class label of the prediction. Is that correct?

    • Jason Brownlee September 19, 2017 at 3:46 pm #

      I’m not sure I follow completely.

      Generally, word embeddings are weights and must be saved and loaded as part of the model in the Embedding layer.

      Does that help?

      • sirisha September 19, 2017 at 10:54 pm #

        How to check if embedding layer is saved or not? If it is saved, how to give unseen text document as input to predict function?

        • Jason Brownlee September 20, 2017 at 5:56 am #

          If the Embedding layer is part of the model and you save the model, then the embedding layer will be saved with the model.
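          For example, a small sketch (with made-up shapes and filename) that shows the embedding weights surviving the round trip:

```python
import numpy as np
from keras.models import Sequential, load_model
from keras.layers import Embedding, Flatten, Dense

model = Sequential()
model.add(Embedding(input_dim=50, output_dim=8, input_length=4))
model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam')

model.save('embed_model.h5')
loaded = load_model('embed_model.h5')

# the embedding matrix is identical after loading
same = np.allclose(model.layers[0].get_weights()[0],
                   loaded.layers[0].get_weights()[0])
print(same)  # True
```

          At prediction time you still feed integer-encoded sequences (as produced by the same Tokenizer used in training), not the embedding vectors themselves.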

  31. Arnab Ganguly September 21, 2017 at 12:26 am #

    Hi Jason:

    I am able to load weights and the model as well as the label encoder and have verified that the test set gives the same predictions with the loaded model.

    My problem – to which I am not able to find a definitive answer even after searching – is that when a new input comes in how do I one-hot encode the categorical variables associated with this new input so that the order of the columns exactly matches the training data?

    Without being able to do this my accuracy on a set of new inputs is approaching 10% while the validation accuracy is 89%.

    The simple question is: how do I encode categorical variables so that the input data for the set of new inputs matches the training set? Probably not a real deep learning question, but without this my sophisticated LSTM model is just not working.

    Help will be greatly appreciated!

    • Jason Brownlee September 21, 2017 at 5:44 am #

      You must use the same encoding as was used during training.

      Perhaps you use your own transform code.
      Perhaps you save the transform object.
      Perhaps you re-create the transform when needed from training data and confirm that it is consistent.

      Does that help?

  32. Arnab Ganguly September 21, 2017 at 7:13 pm #

    Hi Jason,

    I am using pd.get_dummies to transform to a one-hot encoded matrix. How do I reuse that, as it is not a LabelEncoder or a OneHotEncoder? My training set is 24000 rows and 5255 columns.

    When I use pd.get_dummies on 3 new items I get a 3-row by 6-column matrix, and this cannot be fed into the model, as the model expects 5255 columns. Padding with zeros to make up the shortfall only ruins the case, and the output accuracy is in the 10% range while validation accuracy is 89%. During validation there is no issue, as the train/test split is done AFTER pd.get_dummies has executed and turned the input X into a one-hot encoded matrix. This seems a strange problem, as everyone who uses a trained model for prediction will hit exactly the same issue with any one-hot encoded model, so a simple solution should exist.

    Is there a way to turn the pd.get_dummies transform into an encoder-type object that can be reloaded and re-used on the real-time data? That would make life very simple…

    Do let me know.


    • Jason Brownlee September 22, 2017 at 5:37 am #

      I would recommend using the sklearn encoding over the pandas method so that you can either save the object and/or easily reverse the operation.
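      For example, you can fit the encoder once on the training data, pickle it next to the model, and reload it in the prediction script so new rows get exactly the same mapping (a sketch with made-up data and filenames; LabelEncoder shown for brevity):

```python
import pickle
from sklearn.preprocessing import LabelEncoder

# fit once on the training data
train_colours = ['red', 'green', 'blue', 'green']
encoder = LabelEncoder()
encoder.fit(train_colours)

# persist it alongside the model weights
with open('colour_encoder.pkl', 'wb') as f:
    pickle.dump(encoder, f)

# later, in the prediction script: reload and apply the SAME mapping
with open('colour_encoder.pkl', 'rb') as f:
    encoder2 = pickle.load(f)
codes = encoder2.transform(['green', 'red'])
print(list(codes))  # [1, 2]
```

      The same save/reload pattern applies to a OneHotEncoder fitted on the training columns, which keeps the column order consistent for new inputs.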

  33. Nirmesh September 23, 2017 at 5:07 pm #


    I am getting the following error when I applied this knowledge to my code:

    Traceback (most recent call last):
    File “”, line 198, in + SRC[sr] + ‘-‘ + TGT[tg] + ‘/DNNerect_25/’+str(NUTTS[nutt])+’/model.hdf5’)
    File “/usr/local/lib/python2.7/dist-packages/keras/engine/”, line 2429, in save
    save_model(self, filepath, overwrite)
    File “/usr/local/lib/python2.7/dist-packages/keras/”, line 109, in save_model
    topology.save_weights_to_hdf5_group(model_weights_group, model_layers)
    File “/usr/local/lib/python2.7/dist-packages/keras/engine/”, line 2708, in save_weights_to_hdf5_group
    g = f.create_group(
    File “/usr/lib/python2.7/dist-packages/h5py/_hl/”, line 41, in create_group
    gid = h5g.create(, name, lcpl=lcpl)
    File “h5g.pyx”, line 145, in h5py.h5g.create (h5py/h5g.c:2536)
    ValueError: unable to create group (Symbol table: Unable to initialize object)


    Can you please comment what can be possible reason ?

    • Jason Brownlee September 24, 2017 at 5:16 am #

      Sorry, the cause of the fault is not obvious.

      Perhaps post to stackoverflow?

  34. Srinivas BN October 15, 2017 at 5:02 pm #

    Hi Jason,

    Firstly, many thanks for taking the time to write this valuable blog. Very grateful to you. I have a question as follows:

    1) I am using the below code to train the data and target values in an RNN using Keras for 1000000 epochs, and save the trained model and weights to disk using JSON and HDF5 as you mentioned in this blog. “This part works well”, and I am able to generate model.h5 and model.json in the working directory. Now, using another Python program in the same directory, I want to use the trained model and weights, but for any values I pass to the trained model, I get the same output that I got while training. I tried to compile with new values but it didn’t help. Is there anything I can do? Here is the code: [First file, which trains for 1000000 epochs]
    import numpy as np

    from keras.models import Sequential
    from keras.layers import Dense
    from keras.layers import LSTM
    from keras.models import model_from_json

    data = [688,694.5,700.95,693,665.25,658,660.4,656.5,654.8,652.9,660,642.5,

    target = [691.6,682.3,690.8,697.25,691.45,661,659,660.8,652.55,649.7,649.35,654.1,639.75,654,687.1,687.65,676.4,672.9,678.95,

    #data = [688,694.5,700.95,693,665.25,658,660.4,656.5,654.8,652.9]
    data = np.array(data, dtype=float)
    #target = [691.6,682.3,690.8,697.25,691.45,661,659,660.8,652.55,649.7]
    target = np.array(target, dtype=float)

    data = data.reshape((1, 1, len(data)))
    target = target.reshape((1, 1, len(target)))

    x_test = [688,694.5,700.95,693,665.25,658,660.4,656.5,654.8,652.9,660,642.5,

    y_test = [700.95,693,665.25,658,660.4,656.5,654.8,652.9,660,642.5,

    model = Sequential()
    model.compile(loss='mean_absolute_error', optimizer='adam', metrics=['accuracy'])
    model.fit(data, target, nb_epoch=1000000, batch_size=1, verbose=2, validation_data=(x_test, y_test))

    # serialize model to JSON
    model_json = model.to_json()
    with open("model.json", "w") as json_file:
        json_file.write(model_json)
    # serialize weights to HDF5
    model.save_weights("model.h5")
    print("Saved model to disk")

    # load json and create model
    json_file = open('model.json', 'r')
    loaded_model_json = json_file.read()
    json_file.close()
    loaded_model = model_from_json(loaded_model_json)

    # load weights into new model
    loaded_model.load_weights("model.h5")
    print("Loaded model from disk")

    # Evaluate loaded model on test data
    score = loaded_model.evaluate(data, target, verbose=0)
    predict = loaded_model.predict(y_test)

    predict = loaded_model.predict(x_test)

    Output I get for

    [[[ 691.59997559 682.30004883 690.80004883 697.24987793 691.45007324
    661.00012207 658.99987793 660.80004883 652.55004883 649.70007324
    649.34997559 654.09997559 639.75 654. 687.09997559
    687.65002441 676.40002441 672.90002441 678.95007324 677.70007324
    679.65002441 682.90002441 662.59997559 655.40002441 652.80004883
    652.99987793 652.09997559 646.55004883 651.20007324 638.05004883
    638.65002441 630.20007324 635.84997559 639. 634.59997559
    619.59997559 621.55004883 625.65002441 625.40002441 631.20007324
    623.74987793 596.74987793 604.34997559 605.05004883 616.45007324
    600.05004883 575.84997559 559.30004883 569.25 572.40002441
    567.09997559 551.90002441 561.25012207 565.75012207 552.95007324
    548.50012207 553.24987793 557.20007324 571.20007324 563.30004883
    559.80004883 558.40002441 563.95007324]]]
    [[[ 691.59997559 682.30004883 690.80004883 697.24987793 691.45007324
    661.00012207 658.99987793 660.80004883 652.55004883 649.70007324
    649.34997559 654.09997559 639.75 654. 687.09997559
    687.65002441 676.40002441 672.90002441 678.95007324 677.70007324
    679.65002441 682.90002441 662.59997559 655.40002441 652.80004883
    652.99987793 652.09997559 646.55004883 651.20007324 638.05004883
    638.65002441 630.20007324 635.84997559 639. 634.59997559
    619.59997559 621.55004883 625.65002441 625.40002441 631.20007324
    623.74987793 596.74987793 604.34997559 605.05004883 616.45007324
    600.05004883 575.84997559 559.30004883 569.25 572.40002441
    567.09997559 551.90002441 561.25012207 565.75012207 552.95007324
    548.50012207 553.24987793 557.20007324 571.20007324 563.30004883
    559.80004883 558.40002441 563.95007324]]]

    [2nd file in the same directory that reuses model.h5 and model.json]
    ======================================================================
    from keras.models import model_from_json
    import numpy as np

    #x_test =[688,694.5,700.95,693,665.25,658,660.4,656.5,654.8,652.9,660,642.5,

    x_test =[[i for i in range(63)]]


    # load json and create model
    json_file = open('model.json', 'r')
    loaded_model_json = json_file.read()
    json_file.close()
    loaded_model = model_from_json(loaded_model_json)

    # load weights into new model
    loaded_model.load_weights("model.h5")
    print("Loaded model from disk")

    predict = loaded_model.predict(x_test)

    Output I get for

    [[[ 691.59997559 682.30004883 690.80004883 697.24987793 691.45007324
    661.00012207 658.99987793 660.80004883 652.55004883 649.70007324
    649.34997559 654.09997559 639.75 654. 687.09997559
    687.65002441 676.40002441 672.90002441 678.95007324 677.70007324
    679.65002441 682.90002441 662.59997559 655.40002441 652.80004883
    652.99987793 652.09997559 646.55004883 651.20007324 638.05004883
    638.65002441 630.20007324 635.84997559 639. 634.59997559
    619.59997559 621.55004883 625.65002441 625.40002441 631.20007324
    623.74987793 596.74987793 604.34997559 605.05004883 616.45007324
    600.05004883 575.84997559 559.30004883 569.25 572.40002441
    567.09997559 551.90002441 561.25012207 565.75012207 552.95007324
    548.50012207 553.24987793 557.20007324 571.20007324 563.30004883
    559.80004883 558.40002441 563.95007324]]]

    For different input values we expect different output values after recompiling. Why do we get the same output for any input values?

    • Jason Brownlee October 16, 2017 at 5:42 am #

      The output of the network should be specific (contingent) to the input provided when making a prediction.

      If this is not the case, then perhaps your model has overfit the training data?

      • Srinivas BN October 16, 2017 at 5:11 pm #

        Hi Brownlee,

        Thanks for the quick reply. I was not able to understand “output of the network should be specific (contingent) to the input provided”. Could you explain it a bit more? Perhaps I didn’t get the context correctly.

  35. Falgun November 9, 2017 at 5:09 am #

    Hi Jason,

    Thanks for the amazing post. Really helps people who are new to ML.

    I am trying to run the below code

    model_json = model.to_json()
    with open("model.json", "w") as json_file:
        json_file.write(model_json)

    and I am getting the error ‘NameError: name ‘model’ is not defined’.

    Can you help ?

    • Jason Brownlee November 9, 2017 at 10:04 am #

      “model” will be the variable for your trained model.

  36. Santanu Dutta November 13, 2017 at 6:40 am #

    Fantastic post. I could save and retrieve locally, but in AWS Lambda I am facing a problem loading weights because of the HDF5 format. Can you please suggest a resolution or workaround?

    • Jason Brownlee November 13, 2017 at 10:23 am #

      Sorry to hear that. I would have expected h5 format to be cross-platform. I believe it is.

      Perhaps it is a Python 2 vs Python 3 issue.

  37. SHEKINA November 14, 2017 at 6:26 pm #

    Please explain the Python code for feature selection using meta-heuristic algorithms like the firefly algorithm, particle swarm optimization, brain storm optimization, etc.

    • Jason Brownlee November 15, 2017 at 9:49 am #

      Thanks for the suggestion, I hope to cover the topic in the future.

  38. HyunWoo Cho December 14, 2017 at 4:49 pm #

    Should I compile the model for evaluation after loading the model and weights?

    • Jason Brownlee December 15, 2017 at 5:29 am #

      No need to compile after loading any more I believe, the API has changed.

  39. Shabnam January 2, 2018 at 5:07 pm #

    Thanks Jason for your post. I have a question.
    Is there any similar method to have an output file indicating that a model is compiled? Each time I run my file, it takes time to compile the model and then fit and evaluate the data. I want to change parameters/variables/hyperparameters and run the file again, so I want to speed up as much as possible.

    • Jason Brownlee January 3, 2018 at 5:30 am #

      I believe you don’t need to compile the model any longer.

  40. Edoardo January 4, 2018 at 3:25 am #

    Hi Jason,
    Thank you for the invaluable help this blog provides developers,

    I am facing the same problem as @Lotem above.

    I have one script which builds a model (accuracy 60%) and saves it in a different directory.
    However, when I load the model back in another script, the accuracy decreases to 55% and the predicted values are different.

    I have checked the weights and they are the same.

    I have set:

    from numpy.random import seed
    from tensorflow import set_random_seed

    for both files, but I still cannot get the loaded model to give the same accuracy.
    I should also mention that the dataset contains the same features.

    Any help would be much appreciated as I have been going round in circles having a look at:

    If possible, could you post an example where you save a model and load it in different sessions?
    Thank you again

    • Jason Brownlee January 4, 2018 at 8:15 am #

      Interesting, I have not had this problem myself.

      Some ideas to explore:

      – Are you able to replicate the same fault on a different machine, e.g. on AWS?
      – Are all of your libraries up to date?
      – Are you saving weights and topology to a single file or separate files?
      – Are you 100% sure the data used to evaluate the model before/after saving is identical?

  41. Tarun Madan January 18, 2018 at 11:52 pm #

    Hey Jason, the tutorial is very helpful.

    However, there is one question that I have. I trained an LSTM model for a sequence classification problem and observed the following.

    Within the same python session as the one where model is trained, I get the exact results (loss, accuracy, predicted probabilities) from the loaded_model (using json format). But, in a new python session the results are not exactly the same but are very close.

    Can you please help me understand what could be the possible reason for slightly different results? Is there any other random_seed that needs to be fixed for exact match of the results?

    Looking forward to your response.


    • Jason Brownlee January 19, 2018 at 6:31 am #

      There must be some randomness related to the internal state.

      I don’t know for sure. Interesting finding!

  42. srishti February 7, 2018 at 2:35 am #

    I need to deploy my LSTM model as an API, how should I go about it?

    Thank you so much

  43. vinay February 9, 2018 at 5:45 pm #

    I have seen a basic example in

    How do I save a model in ini or cfg format instead of JSON?

    • Jason Brownlee February 10, 2018 at 8:52 am #

      Sorry, other formats are not supported. You may have to write your own code.

  44. Gaurav February 12, 2018 at 5:00 pm #

    Hi Jason,

    I trained the model in Keras and got a validation accuracy of around 50%, but when I save the model and reload it again as per the code you mentioned, the validation accuracy is just 5%… it seems the loaded model behaves like an untrained model.

  45. Ayesha February 14, 2018 at 4:26 pm #

    Hi Jason
    Is it possible, after saving an existing model, to retrain it on new data? Let’s say my existing model was trained on a dataset of 100 samples, but after some time I want to retrain it on 50 new samples so that it learns the new data as well and makes good predictions. It would then reflect 150 samples.

    Need your help.

    Thanks in advance.

  46. simon February 21, 2018 at 6:39 pm #


    Is there any way to convert an h5 file and a json file into one hdf5 file?
    I have many pairs of h5 and json files, but when I need to convert Keras models to TensorFlow pb files, single hdf5 files are needed.


Leave a Reply