Save and Load Machine Learning Models in Python with scikit-learn

Finding an accurate machine learning model is not the end of the project.

In this post you will discover how to save and load your machine learning model in Python using scikit-learn.

This allows you to save your model to file and load it later in order to make predictions.

Let’s get started.

  • Update Jan/2017: Updated to reflect changes to the scikit-learn API in version 0.18.
  • Update March/2018: Added alternate link to download the dataset as the original appears to have been taken down.
Photo by Christine, some rights reserved.


Finalize Your Model with pickle

Pickle is the standard way of serializing objects in Python.

You can use pickle to serialize your machine learning model and save the serialized format to a file.

Later you can load this file to deserialize your model and use it to make new predictions.

The example below demonstrates how you can train a logistic regression model on the Pima Indians onset of diabetes dataset, save the model to file and load it to make predictions on the unseen test set (update: download from here).
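A minimal sketch of that example follows. The random data below is a stand-in for loading the Pima CSV (e.g. with numpy.loadtxt('pima-indians-diabetes.data.csv', delimiter=',')), so the listing stays self-contained:

```python
# Save Model Using Pickle
import pickle
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# stand-in for the Pima Indians diabetes data: 8 input columns, binary class
rng = np.random.RandomState(7)
dataset = np.hstack([rng.rand(768, 8), rng.randint(0, 2, size=(768, 1))])
X, Y = dataset[:, 0:8], dataset[:, 8]
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.33, random_state=7)

# fit the model on the training set
model = LogisticRegression()
model.fit(X_train, Y_train)

# save the model to disk
filename = 'finalized_model.sav'
with open(filename, 'wb') as f:
    pickle.dump(model, f)

# some time later...

# load the model from disk and evaluate it on the unseen test set
with open(filename, 'rb') as f:
    loaded_model = pickle.load(f)
result = loaded_model.score(X_test, Y_test)
print(result)
```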

Running the example saves the model to finalized_model.sav in your local working directory. Loading the saved model and evaluating it provides an estimate of the accuracy of the model on unseen data.

Finalize Your Model with joblib

Joblib is part of the SciPy ecosystem and provides utilities for pipelining Python jobs.

It provides utilities for efficiently saving and loading Python objects that make use of NumPy data structures.

This can be useful for some machine learning algorithms that require a lot of parameters or store the entire dataset (like K-Nearest Neighbors).

The example below demonstrates how you can train a logistic regression model on the Pima Indians onset of diabetes dataset, save the model to file using joblib, and load it to make predictions on the unseen test set.
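As before, a sketch of the joblib version, with the same random stand-in for the Pima CSV (note that in older scikit-learn releases joblib was imported as from sklearn.externals import joblib):

```python
# Save Model Using joblib
import joblib  # in older scikit-learn: from sklearn.externals import joblib
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# stand-in for the Pima Indians diabetes data: 8 input columns, binary class
rng = np.random.RandomState(7)
X = rng.rand(768, 8)
Y = rng.randint(0, 2, size=768)
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.33, random_state=7)

# fit the model on the training set
model = LogisticRegression()
model.fit(X_train, Y_train)

# save the model to disk
filename = 'finalized_model.sav'
joblib.dump(model, filename)

# some time later...

# load the model from disk and evaluate it on the unseen test set
loaded_model = joblib.load(filename)
result = loaded_model.score(X_test, Y_test)
print(result)
```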

Running the example saves the model to file as finalized_model.sav and also creates one file for each NumPy array in the model (four additional files). After the model is loaded, an estimate of the accuracy of the model on unseen data is reported.

Tips for Finalizing Your Model

This section lists some important considerations when finalizing your machine learning models.

  • Python Version. Take note of the Python version. You almost certainly require the same major (and maybe minor) version of Python used to serialize the model when you later load and deserialize it.
  • Library Versions. The versions of all major libraries used in your machine learning project almost certainly need to be the same when deserializing a saved model. This includes, but is not limited to, the version of NumPy and the version of scikit-learn.
  • Manual Serialization. You might like to manually output the parameters of your learned model so that you can use them directly in scikit-learn or another platform in the future. Often the techniques used by machine learning algorithms to make predictions are a lot simpler than those used to learn the parameters, and may be easy to implement in custom code that you have control over.

Take note of the version so that you can re-create the environment if for some reason you cannot reload your model on another machine or another platform at a later time.
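As an illustration of the manual serialization idea (the file name and layout here are illustrative): a fitted LogisticRegression is fully described by its coef_ and intercept_ attributes, and its prediction rule is just a thresholded linear score, so both are easy to export and reimplement:

```python
# Manually serialize the parameters of a logistic regression to JSON (illustrative)
import json
import numpy as np
from sklearn.linear_model import LogisticRegression

# a small synthetic binary classification problem
rng = np.random.RandomState(7)
X = rng.rand(200, 8)
Y = rng.randint(0, 2, size=200)
model = LogisticRegression().fit(X, Y)

# the learned model is just these two arrays
params = {'coef': model.coef_.tolist(), 'intercept': model.intercept_.tolist()}
with open('model_params.json', 'w') as f:
    json.dump(params, f)

# later, possibly on another platform: reload and reimplement prediction
with open('model_params.json') as f:
    p = json.load(f)
coef = np.array(p['coef'])
intercept = np.array(p['intercept'])

def predict(X):
    # binary logistic regression: sigmoid(score) >= 0.5 is equivalent to score > 0
    score = X.dot(coef.T) + intercept
    return (score > 0).astype(int).ravel()

print((predict(X) == model.predict(X)).all())
```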

Summary

In this post you discovered how to persist your machine learning algorithms in Python with scikit-learn.

You learned two techniques that you can use:

  • The pickle API for serializing standard Python objects.
  • The joblib API for efficiently serializing Python objects with NumPy arrays.

Do you have any questions about saving and loading your machine learning algorithms or about this post? Ask your questions in the comments and I will do my best to answer them.

134 Responses to Save and Load Machine Learning Models in Python with scikit-learn

  1. Kayode October 18, 2016 at 6:15 pm #

    Thank you so much for this educative post.

  2. TonyD November 13, 2016 at 3:52 pm #

    Hi Jason,

    I have two of your books and they are awesome. I took several machine learning courses before, however as you mentioned they are more geared towards theory than practice. I devoured your Machine Learning with Python book and 20x’d my skills compared to the courses I took.

    I found this page by Googling a code snippet in chapter 17 in your book. The line:

    loaded_model = pickle.load(open(filename, 'rb'))

    throws the error:

    runfile('C:/Users/Tony/Documents/MassData_Regression_Pickle.py', wdir='C:/Users/Tony/Documents')
    File "C:/Users/Tony/Documents/MassData_Regression_Pickle.py", line 55
    loaded_model = pickle.load(open(filename, 'rb'))
    ^
    SyntaxError: invalid syntax

    • Jason Brownlee November 14, 2016 at 7:36 am #

      Thanks TonyD.

      I wonder if there is a copy-paste error, like an extra space or something?

      Does the code example (.py file) provided with the book for that chapter work for you?

  3. Konstantin November 19, 2016 at 6:01 am #

    Hello, Jason

    Where can we get X_test, Y_test "sometime later"? It is garbage collected!
    X_test and Y_test are not pickled. In your example you pickle the classifier only, but you keep referring to X and Y. Real applications are not a single flow. I found a workaround and get Y from the clf.classes_ object.

    What is the correct solution? Should we pickle a decorator class with X and Y, or use the pickled classifier to pull the Y values? I didn't find any information in the documentation on KNeighborsClassifier (my example) either, on how to pull Y values from the classifier.

    Can you advise?

    • Jason Brownlee November 19, 2016 at 8:51 am #

      Hi Konstantin,

      I would not suggest saving the data. The idea is to show how to load the model and use it on new data – I use existing data just for demonstration purposes.

      You can load new data from file in the future when you load your model and use that new data to make a prediction.

      If you have the expected values also (y), you can compare the predictions to the expected values and see how well the model performed.

      • Guangping Zhang November 21, 2016 at 6:01 am #

        I’m a newer Pythoner, and your code works perfectly! But where is the saved file? I used Windows 10.

        • Jason Brownlee November 22, 2016 at 6:56 am #

          Thanks Guangping.

          The save file is in your current working directory, when running from the commandline.

          If you’re using a notebook or IDE, I don’t know where the file is placed.

  4. Mohammed Alnemari December 13, 2016 at 2:45 pm #

    Hi Jason ,
    I am just wondering if we can use YAML or JSON with the sklearn library. I tried to do it many times but I could not reach an answer. I tried to do it as in your Keras lesson, but for some reason it is not working. Hopefully you can help me if it is possible.

    • Jason Brownlee December 14, 2016 at 8:24 am #

      Hi Mohammed, I believe the serialization of models to yaml and json is specific to the Keras library.

      sklearn serialization is focused on binary files like pickle.

  5. Normando Zubia December 29, 2016 at 9:55 am #

    Hi, my name is Normando Zubia and I have been reading a lot of your material for my school lessons.

    I’m currently working on a model to predict user behavior in a production environment. Due to several situations I can not save the model in a pickle file. Do you know any way to save the model in a json file?

    I have been playing a little with sklearn classes and I noticed that if I save some parameters for example: n_values_, feature_indices_ and active_features_ in a OneHotEncoding model I can reproduce the results. Could this be done with a pipeline? Or do you think I need to save each model’s parameters to load each model?

    PS: Sorry for my bad english and thanks for your attention.

    • Jason Brownlee December 30, 2016 at 5:49 am #

      Hi Normando,

      If you are using a simple model, you could save the coefficients directly to file. You can then try and put them back in a new model later or implement the prediction part of the algorithm yourself (very easy for most methods).

      Let me know how you go.

  6. Samuel February 6, 2017 at 3:14 pm #

    Hello Jason,

    I am new to machine learning. I am your big fan and read a lot of your blog and books. Thank you very much for teaching us machine learning.

    I tried to pickle my model but failed. My model uses VGG16 and replaces the top layer for my classification solution. I further narrowed down the problem and found that it is the VGG16 model that fails to pickle. Please find my simplified code and error log below:

    It will be highly appreciated if you can give me some direction on how to fix this error.

    Thank you very much
    ———————————————————-
    # Save Model Using Pickle
    from keras.applications.vgg16 import VGG16
    import pickle

    model = VGG16(weights='imagenet', include_top=False)

    filename = 'finalized_model.sav'
    pickle.dump(model, open(filename, 'wb'))

    —————————————————-
    /Library/Frameworks/Python.framework/Versions/2.7/bin/python2.7 /Users/samueltin/Projects/bitbucket/share-card-ml/pickle_test.py
    Using TensorFlow backend.
    Traceback (most recent call last):
    File "/Users/samueltin/Projects/bitbucket/share-card-ml/pickle_test.py", line 8, in
    pickle.dump(model, open(filename, 'wb'))
    File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/pickle.py", line 1376, in dump
    Pickler(file, protocol).dump(obj)
    [... many repeated save / save_reduce / save_dict / save_list frames omitted ...]
    File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/pickle.py", line 306, in save
    rv = reduce(self.proto)
    File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/copy_reg.py", line 70, in _reduce_ex
    raise TypeError, "can't pickle %s objects" % base.__name__
    TypeError: can't pickle module objects

    Process finished with exit code 1

    • Jason Brownlee February 7, 2017 at 10:11 am #

      Sorry Samuel, I have not tried to save a pre-trained model before. I don’t have good advice for you.

      Let me know how you go.

      • huikang September 21, 2018 at 11:50 am #

        Is there a more efficient method in machine learning than joblib.load(), storing the model directly in memory and using it again?

        • Jason Brownlee September 21, 2018 at 2:21 pm #

          Sure, you can make an in-memory copy. I think sklearn has a clone() function that you can use.

  7. Amy March 8, 2017 at 7:03 am #

    I have trained a model using liblinearutils. The model could not be saved using pickle as it gives error that ctype module with pointers cannot be pickled. How can I save my model?

    • Jason Brownlee March 8, 2017 at 9:47 am #

      Sorry Amy, I don’t have any specific examples to help.

      Perhaps you can save the coefficients of your model to file?

  8. SHUBHAM BHARDWAJ April 3, 2017 at 10:42 pm #

    Thanks a lot, very useful

  9. Benju April 11, 2017 at 1:35 am #

    My saved models are 500MB+ big… is that normal?

    • Jason Brownlee April 11, 2017 at 9:34 am #

      Ouch, that does sound big.

      If your model is large (lots of layers and neurons) then this may make sense.

  10. Anupam April 13, 2017 at 2:32 am #

    How can I use the model file (“finalized_model.sav”) to test unknown data? For example, if the model is a tagger, how will this model tag the text file data? Is there any example?

  11. Oss Mps April 21, 2017 at 3:09 pm #

    Dear Sir, please advise on how to extract weights from a pickle dump? Thank you

    • Jason Brownlee April 22, 2017 at 9:23 am #

      I would suggest extracting coefficients from your model directly and saving them in your preferred format.

  12. Suhas May 24, 2017 at 4:44 am #

    Hi I love your website; it’s very useful!

    Are there any examples showing how to save out the training of a model after say 100 epochs/iterations? It’s not immediately clear from looking at joblib or scikit learn.

    This is esp. useful when dealing with large datasets and/or computers or clusters which may be unreliable (e.g., subject to system reboots, etc.)

    • Jason Brownlee May 24, 2017 at 4:59 am #

      I’m not sure how to do this with sklearn. You may need to write something custom. Consider posting to stackoverflow.

  13. Viktor May 30, 2017 at 8:52 am #

    Hey!
    Is it possible to open my saved model and make a prediction on a cloud server where sklearn is not installed?

    • Jason Brownlee June 2, 2017 at 12:31 pm #

      no.

      You could save the coefficients from within the model instead and write your own custom prediction code.

  14. Clemence June 8, 2017 at 6:55 pm #

    Hello Jason and thank you very much, it’s been very helpful.

    Do you know if it’s possible to load features transformation with the ML model?
    I’m mostly thinking of categorical variables that we need to encode into numerical ones.

    I’m using sklearn to do that, but I don’t know if we can (as for Spark), integrate this transformation with the ML model into the serialized file (Pickle or Joblib).

    # Encode categorical variables into numerical ones
    from sklearn.preprocessing import LabelEncoder
    list_var = ['country', 'city']

    encoder = LabelEncoder()
    for i in list_var:
        df[i] = encoder.fit_transform(df[i])

    Then I fit the model on the training dataset…

    And I need to save this transformation with the model. Do you know if that’s possible ?
    Thank you!

    • Jason Brownlee June 9, 2017 at 6:23 am #

      I’m not sure I follow sorry.

      You can transform your data for your model, and you can apply this same transform in the future when you load your model.

      You can save the transform objects using pickle. Is that what you mean?
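A minimal sketch of that idea (the bundle layout and file name are illustrative): pickle the fitted transform object together with the model, so both can be restored in a later session and the same encoding is applied to new data:

```python
# Pickle a fitted transform alongside the model so both reload together
import pickle
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import LabelEncoder

# fit the transform and the model
encoder = LabelEncoder()
countries = ['fr', 'us', 'us', 'de', 'fr']
X = encoder.fit_transform(countries).reshape(-1, 1)
Y = [0, 1, 1, 0, 0]
model = LogisticRegression().fit(X, Y)

# save both objects in one file
with open('model_and_encoder.pkl', 'wb') as f:
    pickle.dump({'encoder': encoder, 'model': model}, f)

# later: reload, apply the same transform to new data, then predict
with open('model_and_encoder.pkl', 'rb') as f:
    bundle = pickle.load(f)
new_X = bundle['encoder'].transform(['us', 'de']).reshape(-1, 1)
print(bundle['model'].predict(new_X))
```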

  15. Bhavani Shanker June 22, 2017 at 1:24 am #

    Hi Jason,
    Kindly accept my encomiums for the illustrative lecture that you have delivered on Machine Learning using Python.

    **********************************************
    # save the model to disk
    filename = 'finalized_model.sav'
    joblib.dump(model, filename)

    # sometime later...

    # load the model from disk
    loaded_model = joblib.load(filename)
    result = loaded_model.score(X_test, Y_test)
    print(result)
    *******************************************************

    After saving the model ‘finalized_model.sav’, how can I recall the saved model in a new session at a later date?

    I would appreciate it if you can advise on this.

    • Jason Brownlee June 22, 2017 at 6:11 am #

      The code after “sometime later” would be in a new session.

  16. jinsh June 28, 2017 at 8:57 pm #

    Hello sir,

    The above code saves the model and later we can check the accuracy too,
    but what do I have to do to predict the class of unknown data?
    I mean, which function has to be called?

    e.g.: 2,132,40,35,168,43.1,2.288,33

    Can you suggest how to get the class of the above data through prediction?

    thank you

    • Jason Brownlee June 29, 2017 at 6:35 am #

      Pass in input data to the predict function and use the result.

  17. Ukesh Chawal July 24, 2017 at 11:09 pm #

    Can we use “pickling” to save an LSTM model, and load or use a hard-coded pre-fit model to generate forecasts based on data passed in to initialize the model?

    When I tried to use it, it gave me the following error:

    PicklingError: Can't pickle : attribute lookup module on builtins failed

  18. akatsuki August 9, 2017 at 1:21 pm #

    tbh this is the best of the sites on the web. Great!
    I love your email subscriptions; as a beginner they are quite helpful to me.

  19. vikash August 10, 2017 at 9:32 pm #

    Hi @Jason Brownlee, thanks for such an informative blog. Can you please guide me on a problem where I would like to retrain the .pkl model with only a new dataset with a new class, keeping the previous learning intact? I had thought that model.fit(dataset, label) would do that, but it forgets the previous learning. Please suggest some techniques for it.
    Thanks

    • Jason Brownlee August 11, 2017 at 6:42 am #

      Sorry, I don’t follow. Can you please restate your question?

      • sassashi August 28, 2017 at 4:41 am #

        Hi Jason, I believe @vikash is looking for a way to continuously train the model with new examples after the initial training stage. This is something I am searching for as well. I know it is possible to retrain a model in tensorflow with new examples but I am not sure if it’s possible with sklearn.

        To expand the question some more: 1. you train a model with sklearn; 2. save it with pickle or joblib;
        3. then you get your hands on some new examples that were not available at the time of initial training (“step 1”); 4. you load the previous model; 5. and now you try to train the model again using the new data without losing the previous knowledge… Is step 5 possible with sklearn?

  20. Navdeep Singh August 22, 2017 at 8:30 pm #

    Hi Jason,

    I need your guidance on updating saved pickle files with new data coming in for training.

    I recall 3 methods. The first is online learning, which is training on every new observation coming in; in this case the model would always be biased towards new features, which I don’t want.

    Second: whenever some set of n observations comes in, combine it with the previous data and retrain from scratch. I don’t want to do that either, as in a live environment it will take a lot of time.

    Third is mini-batch learning. I know some algorithms like SGD use the partial_fit method to do this, but I have other algorithms as well, like random forest, decision trees, logistic regression. I want to ask: can I update the previously trained pickle with new training?

    I am doing this for text classification. I read that, possibly, when doing this the updated model pickle will not take in new features of the new data (made using tfidf or countvectorizer) and so it would be of less help.

    Also, as the domain is the same: if the client (the project we are working for) is different, in spite of sharing old data with the new client (new project), could I use the old client’s trained model pickle and update it with training on the new client’s data? Basically I am transferring learning.

    • Jason Brownlee August 23, 2017 at 6:48 am #

      Great question.

      This is a challenging problem to solve. Really, the solution must be specific to your project requirements.

      A flexible approach may be to build capacity into your encodings to allow for new words in the future.

      The simplest approach is to ignore new words.

      These, and other strategies are testable. See how performance degrades under both schemes with out-of-band test data.

  21. Merari September 11, 2017 at 7:59 am #

    Thanks for sharing,
    Is there any way I can make predictions with new data using only the saved model, calling this model from a new file? I have tried with the final instruction:

    # load the model from disk
    loaded_model = pickle.load(open(filename, 'rb'))
    result = loaded_model.score(X_test, Y_test)
    print(result)

    but I have not achieved it.

    • Jason Brownlee September 11, 2017 at 12:11 pm #

      That is exactly what we do in this tutorial.

      What is the problem exactly?

  22. AP September 29, 2017 at 6:36 am #

    Hi Jason, I learn a lot reading your python books and blogs. Thank you for everything.

    I’m having an issue when I work on text data with a loaded model in a different session. I fit and transform the training data with countvectorizer and tfidf. Then I only transform the test data with the fitted instances as usual. But when working with a loaded pretrained model in a different session, I am having a problem with feature extraction. I can’t just transform the test data, as it asks for the fitted instance, which is not present in the current session. If I fit and transform on the test data only, model prediction performance drastically decreases. I believe that is the wrong way of doing machine learning. So, how can I do the feature extraction using countvectorizer, tfidf or other methods while working with a previously trained model?

    I’m using spark ML but I think it would be the same for scikit-learn as well.

    • Jason Brownlee September 30, 2017 at 7:31 am #

      Perhaps you can pickle your data transform objects as well, and re-use them in the second session?

  23. Bhavya Chugh October 29, 2017 at 5:57 am #

    Hi Jason,

    I trained a random forest model and saved it as a pickle file on my local desktop. I then copied that pickle file to my remote machine and tested the model with the same file, and it is giving incorrect predictions. I am using Python 3.6 locally and Python 3.4 on the remote machine; however, the versions of scikit-learn are the same. Any ideas why this may be happening?

    • Jason Brownlee October 29, 2017 at 6:00 am #

      No idea, perhaps see if the experiment can be replicated on the same machine? or different machines with the same version of Python?

  24. Berkin Albert Antony November 10, 2017 at 5:45 pm #

    Hi Jason Brownlee,

    I have a LogisticRegression model for binary classification. I wish to find similar data points in a trained model for a given test data point, so that I can show these are similar data points predicted with the same class.

    Could you please suggest your thoughts for the same. I am using scikit learn logistic regression

    Thanks

    • Jason Brownlee November 11, 2017 at 9:18 am #

      Perhaps you could find data points with a low Euclidean distance from each other?

  25. James November 16, 2017 at 8:47 am #

    Hi Jason –

    If you pickle a model trained on a subset of features, is it possible to view these features after loading the pickled model in a different file? For example: original df has features a,b,c,d,e,f. You train the model on a,c,e. Is it possible to load the pickled model in a separate script and see the model was trained on a,c,e?

    Thanks,
    James

    • Jason Brownlee November 16, 2017 at 10:33 am #

      Yes, you can save your model, load your model, then use it to make predictions on new data.

  26. Mrinal Mitra November 22, 2017 at 6:26 am #

    Hi Jason,

    Thanks for explaining it so nicely. I am new to this and will be needing your guidance. I have data using which I have trained the model. Now I want this model to predict an untested data set. However, my requirement is an output which will have the data and corresponding prediction by the model. For example, record 1 – type a, record 2 – type a, record 3 – type c and so on. Could you please guide me on this?

    • Jason Brownlee November 22, 2017 at 11:16 am #

      You can provide predictions one at a time or in a group to the model and the predictions will be in the same order as the inputs.

      Does that help?

  27. Niranjan December 3, 2017 at 3:22 pm #

    Hi,

    I am using the chunks functionality in the read_csv method in pandas and trying to build the model iteratively and save it. But it always saves the model that is built on the last chunk and not the entire model. Can you help me with it?

    clf_SGD = SGDClassifier(loss='modified_huber', penalty='l2', alpha=1e-3, max_iter=500, random_state=42)
    for chunk in pd.read_csv("file_name", chunksize=1000):
        """
        data preparation and cleaning
        """
        hashing = hv.fit_transform(X_train['description'])
        clf_SGD.partial_fit(hashing, y_train, classes=y_classes)

    joblib.dump(clf_SGD, source_folder + os.path.sep + 'text_clf_sgd.pkl')

    • Jason Brownlee December 4, 2017 at 7:46 am #

      Sorry, I’m not sure I follow, could you please try reframing your question?

  28. Shabbir December 8, 2017 at 8:50 am #

    Hi Jason,
    This is extremely helpful and saved me quite a bit of processing time.

    I was training a Random Forest Classifier on 250MB of data which took 40 min to train every time, but the results were accurate as required. The joblib method created a 4GB model file, but the load time was cut down to 7 minutes. That was helpful, but the results got inaccurate, or at least varied quite a bit from the original results. I use an average of 2 Decision Trees and 1 Random Forest for the model. The Decision Tree models have kept their consistency loading vs training, but the RF hasn’t. Any ideas?

  29. Nilanka December 19, 2017 at 9:10 pm #

    Thank you, very useful!!

  30. Gokhan December 28, 2017 at 2:55 pm #

    Hello, if I load the model:
    loaded_model = joblib.load(filename)
    result = loaded_model.score(X_test, Y_test)
    print(result)

    can I use this model to make predictions on other test sets?

  31. Vinay Boddula January 20, 2018 at 5:31 am #

    Hi Jason,

    How do I generate a new X_Test for prediction? This new X_Test needs to have the same parameters the model was trained with.

    Background: I am basically saving the model and predicting with new values from time to time. How do we check whether the new values have all the parameters and the correct data types?

    • Jason Brownlee January 20, 2018 at 8:25 am #

      Visualization and statistics.

      I have many posts on the topic, try the search box.

  32. Sekar February 1, 2018 at 4:06 am #

    Jason. Very good article. As asked by others, in my case I am using DecisionTreeClassifier with a text-feature-to-int transformation. Even though you mentioned that the transformation map can also be pickled and read back, is there any example available? Will it be stored in the same file, or will it be another file?

  33. Yousif February 5, 2018 at 8:01 pm #

    Thank you so much professor
    we get more new knowledge

  34. Adarsh C February 8, 2018 at 12:29 pm #

    Hi sir,
    I would like to save the predicted output as a CSV file. After running the model I would like to save “y_predicted”. I’m using Python IDE 3.5.x and I have the pandas, sklearn and tensorflow libraries.

  35. Atul March 11, 2018 at 6:45 am #

    Hi Jason,

    I would like to save the predicted output as a CSV file. After running the model I would like to save “y_predicted”. How can I save the Naive Bayes, SVM, RF and DT classification final predictions for all samples as a .csv with three columns, namely Sample, Actual value and Predicted value?

  36. Tommy March 22, 2018 at 11:14 pm #

    I have a list of regression coefficients from a paper. Is there a way to load these coefficients into the sklearn logistic regression function to try and reproduce their model?
    Thanks!
    Tommy

    • Jason Brownlee March 23, 2018 at 6:07 am #

      No model is needed, use each coefficient to weight the inputs on the data, the weighted sum is the prediction.
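      A sketch of that arithmetic (the coefficient values here are made up, not from any paper): a logistic regression prediction is just the logistic function applied to the weighted sum of the inputs plus the intercept.

```python
# Sketch: reproduce a logistic regression from published coefficients, no training.
import numpy as np

coefficients = np.array([0.5, -1.2, 0.3])  # hypothetical per-feature weights
intercept = -0.1                           # hypothetical intercept

def predict_proba(x):
    # weighted sum of the inputs plus the intercept, squashed to [0, 1]
    z = np.dot(x, coefficients) + intercept
    return 1.0 / (1.0 + np.exp(-z))

x_new = np.array([1.0, 0.5, 2.0])
probability = predict_proba(x_new)         # z = 0.4, so roughly 0.599
predicted_class = int(probability >= 0.5)
```

      The same values could also be assigned to a fitted sklearn LogisticRegression’s coef_ and intercept_ attributes, but the manual form above makes the arithmetic explicit.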

  37. Vincent April 10, 2018 at 10:25 am #

    Hi,all
    I am using scikit 0.19.1
    I generated a training model using random forest and saved the model. These were done on ubuntu 16.01 x86_64.
    I copied the model to a Windows 10 64-bit machine and wanted to reuse the saved model. But unfortunately I get the following:
    Traceback (most recent call last):
    File "C:\Users\PC\Documents\Vincent\nicholas\feverwizard.py.py", line 19, in <module>
    rfmodel = joblib.load(modelfile)
    File "C:\Python27\lib\site-packages\sklearn\externals\joblib\numpy_pickle.py", line 578, in load
    obj = _unpickle(fobj, filename, mmap_mode)
    File "C:\Python27\lib\site-packages\sklearn\externals\joblib\numpy_pickle.py", line 508, in _unpickle
    obj = unpickler.load()
    File "C:\Python27\lib\pickle.py", line 864, in load
    dispatch[key](self)
    File "C:\Python27\lib\pickle.py", line 1139, in load_reduce
    value = func(*args)
    File "sklearn\tree\_tree.pyx", line 601, in sklearn.tree._tree.Tree.__cinit__
    ValueError: Buffer dtype mismatch, expected 'SIZE_t' but got 'long long'

    What could be happening? Is it because of a switch from ubuntu to windows? However i am able to reuse the model in my ubuntu.

    • Jason Brownlee April 11, 2018 at 6:29 am #

      Perhaps the pickle file is not portable across platforms?

  38. Pramod April 17, 2018 at 9:03 pm #

    Can we load a model trained on a 64-bit system on a 32-bit operating system?

    • Jason Brownlee April 18, 2018 at 8:04 am #

      I’m skeptical that it would work. Try it and see. Let me know how you go.

  39. Arnaud April 17, 2018 at 9:29 pm #

    Dear Jason :

    Thank you for ‘le cours’ which is very comprehensive.

    I have a maybe tricky but ‘could be very useful’ question about my newly created standard Python object.

    Is it possible to integrate a call to my Python object in a Fortran program ?

    Basically I have a deterministic model in which I would like to make recursive calls to my Python object at every time step.

    Do I need some specific libraries ?

    Thank you
    Best regards

    • Jason Brownlee April 18, 2018 at 8:06 am #

      You’re welcome.

      I suspect it is possible. It’s all just code at the end of the day. You might need some kind of Python-FORTRAN bridge software. I have not done this, sorry.

  40. Pratip April 23, 2018 at 4:32 pm #

    Hi Sir ,
    I wanted to know if it’s possible to combine the scikit preloaded datasets with some new datasets to get more training data and further higher accuracy, or whether I should first run on the scikit loaded dataset, save the model using pickle, and then run it on another dataset.
    Which method is correct?
    Please help.

    • Jason Brownlee April 24, 2018 at 6:20 am #

      Sure, you can, but it may only make sense if the data was collected in the same way from the same domain.

  41. Ishit Gandhi May 4, 2018 at 6:00 pm #

    Hii Jason,

    Can you put example of how to store and load Pipeline models?

    eg.

    clf = Pipeline([(“rbm”,rbm),(“logistic”,logistic)])
    clf.fit(trainX,trainY)
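    A fitted Pipeline pickles as a single object, so the whole chain (transforms plus final estimator) is saved and restored together. A sketch, with StandardScaler/LogisticRegression standing in for the rbm/logistic steps above:

```python
# Sketch: dump and load an entire fitted Pipeline as one object with joblib.
import joblib
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
clf = Pipeline([("scale", StandardScaler()),
                ("logistic", LogisticRegression(max_iter=200))])
clf.fit(X, y)

joblib.dump(clf, "pipeline_model.sav")          # save transforms + estimator together
loaded_clf = joblib.load("pipeline_model.sav")  # later: load and predict directly
score = loaded_clf.score(X, y)
```

    (In older scikit-learn versions joblib was imported as sklearn.externals.joblib.)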

  42. Akash May 14, 2018 at 4:15 pm #

    Hi jason,
    My name is Akash Joshi. I am trying to train my scikit SVM model with 101000 images but I run out of memory. Is there a way I can train the SVM model in small batches? Can we use pickle?

    • Jason Brownlee May 15, 2018 at 7:51 am #

      Perhaps try running on a machine with more RAM, such as an EC2 instance?

      Perhaps try using a sample of your dataset instead?

      Perhaps use a generator to progressively load the data?
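      For the batching idea, a sketch of out-of-core training with SGDClassifier, whose partial_fit accepts one chunk at a time (hinge loss approximates a linear SVM; synthetic data stands in for image features loaded chunk by chunk):

```python
# Sketch: train a linear classifier in small batches with partial_fit,
# so the full dataset never has to sit in memory at once. Synthetic data.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.RandomState(42)
classes = np.array([0, 1])  # partial_fit needs all class labels up front
clf = SGDClassifier(loss="hinge", random_state=42)

for _ in range(20):  # each iteration stands in for one chunk loaded from disk
    X_batch = rng.randn(100, 5)
    y_batch = (X_batch[:, 0] > 0).astype(int)
    clf.partial_fit(X_batch, y_batch, classes=classes)

X_eval = rng.randn(200, 5)
y_eval = (X_eval[:, 0] > 0).astype(int)
accuracy = clf.score(X_eval, y_eval)
```

      Note a kernel SVM (SVC) has no partial_fit, so this linear approximation is a trade-off, not a drop-in replacement.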

  43. Samarth May 14, 2018 at 4:54 pm #

    Hi Jason

    I want to know how I can persist a min-max transformation. There are ways to persist the final model, but how to persist the transformations?

    Thanks

    • Jason Brownlee May 15, 2018 at 7:51 am #

      Save the min and max values for each variable.

      Or save the whole object.
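      For the whole-object option, a minimal sketch (toy numbers for illustration): a fitted MinMaxScaler carries its learned min/max, so pickling it preserves the exact transformation.

```python
# Sketch: persist a fitted MinMaxScaler so the identical min-max
# transformation can be reapplied to new data in a later session.
import pickle

import numpy as np
from sklearn.preprocessing import MinMaxScaler

train = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
scaler = MinMaxScaler().fit(train)   # learns the per-column min and max

blob = pickle.dumps(scaler)          # serialize the whole fitted object
restored = pickle.loads(blob)        # e.g. in a later session

scaled = restored.transform(np.array([[2.0, 15.0]]))  # uses the training min/max
```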

  44. SOORAJ T S May 16, 2018 at 12:30 am #

    thank you for the post, it is very informative, but I have a doubt about the labels or names of the dataset — can you specify each?

  45. SOORAJ T S May 16, 2018 at 4:11 pm #

    names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']

    in the above code what are these “preg” , “plas”, “pres” etc…

  46. Aniko June 7, 2018 at 12:13 am #

    HI Jason!

    I created a machine learning (GBM) model to predict house prices, and a Django application for usability. The model has more than 1000 n_estimators and it takes more than 1 minute to load before getting the prediction on every request.
    I would like to load the joblib dump file just once and keep the model in memory, avoiding loading the model on every GET request.

    What is your best practice for this?

    Thanks

    • Jason Brownlee June 7, 2018 at 6:31 am #

      This sounds like a web application software engineering question rather than a machine learning question.

      Perhaps you can host the model behind a web service?

      • Aniko June 7, 2018 at 6:51 pm #

        thank you, meanwhile I found a cache-related solution in the Django documentation, perhaps this will solve the loading problem

  47. LamaOS223 June 9, 2018 at 2:00 pm #

    Okay, what if I had 2 datasets, for example loan datasets:
    the first dataset has a Loan_Status attribute
    and the second one does not have a Loan_Status attribute.
    If I trained the model on the first dataset and I want to predict the Loan_Status for the second dataset, how do I do that? Please make it simple for me, I’m a beginner.

  48. Imti July 12, 2018 at 4:55 pm #

    Hey Jason, I am working on a model to classify text files. I am using the CountVectorizer, TfidfTransformer and SGDClassifier in the same sequence on a set of data files. I am saving the SGDClassifier object via the joblib.dump method you have mentioned in this article.

    Do I also need to save the vectorizer and transformer objects/models ? Since when i take a new file for classification I will need to go through these steps again.

    • Jason Brownlee July 13, 2018 at 7:33 am #

      Yes, they are needed to prepare any data prior to using the model.

  49. Dennis Faucher July 28, 2018 at 2:38 am #

    Just what I needed today. Thank you.

  50. Tejaswini July 30, 2018 at 9:01 am #

    Hi Jason,
    Thanks for the article. When I save the model and load it on a different page, it shows a different accuracy.

    Problem trying to solve: I am using oneclasssvm model and detecting outliers in sentences.

    • Jason Brownlee July 30, 2018 at 2:15 pm #

      I have not seen that, are you sure you are evaluating the model on exactly the same data?

  51. Tejaswini August 2, 2018 at 2:10 pm #

    Yes Jason, I am using gensim word2vec to convert text into feature vectors and then performing the classification task. After saving the model and reloading it in another session, it gives different results.

    • Jason Brownlee August 2, 2018 at 2:11 pm #

      That is odd. I have not seen this.

      Perhaps report a fault/bug?

  52. EvapStudent August 7, 2018 at 1:36 am #

    Hi Jason,

    I am training a neural network using MLPRegressor, trying to predict pressure drop in different geometries of heat exchangers. I think I have gotten the network to train well with low MRE, but I can’t figure out how to use the network. When I tried to load it using pickle and call it again, I got an error when using “score”. I am new to Python so I am not sure how to bring in new data for the network to predict, or how to generalize doing so.

    • Jason Brownlee August 7, 2018 at 6:28 am #

      I don’t recommend using pickle. I recommend using the Keras API to save/load your model.

      Once you find a config that works for your problem, perhaps switch from the sklearn wrappers to the Keras API directly.

      • EvapStudent August 7, 2018 at 11:13 pm #

        Hi Jason,

        Thanks for the recommendation. Is there no easy way to save a model and call from it to use in scikit learn? I have been getting good results with the model I have made on there, I just don’t know how to get it to the point where I can actually use the network (i.e. put in a geometry and get it’s predictions).

        If using Keras API to save/load is the best option, how do I go about doing that?

        • Jason Brownlee August 8, 2018 at 6:21 am #

          There may be, but I don’t have an example, sorry.

  53. Golnoush August 21, 2018 at 1:38 am #

    Hello Jason,

    Thank you for your nice tutorial! Does pickle.dump(model, open(filename, 'wb')) save only the neural network model, or does it also save the parameters and weights of the model?
    Are the back propagation and training done again when we use pickle.load?
    What I would like to do is save the whole model, with its weights and parameters, during training, and use the same trained model for every testing data set I have. I would be so thankful if you could assist me with this.

    • Jason Brownlee August 21, 2018 at 6:19 am #

      I believe you cannot use pickle for neural network models – e.g. Keras models.

  54. Somo August 29, 2018 at 3:05 pm #

    Hi Jason,

    I am trying to save my model using joblib.dump(model, 'model.pkl') and load it back up in another .py file with model = joblib.load('model.pkl'), but then the accuracy dropped, and each time I run it the accuracy differs a lot. The coefficients and the intercept are the same for both models. Any ideas why this might happen? Thanks in advance.

  55. Dhrumil September 1, 2018 at 3:11 pm #

    Hey man I am facing a trouble with pickle, when I try to load my .pkl model I am getting following error :

    UnicodeDecodeError: 'ascii' codec can't decode byte 0xbe in position 3: ordinal not in range(128)

    Can you please tell me something since I have tried all fixes I could find..

  56. Aakash Aggarwal September 8, 2018 at 4:57 am #

    I want to train my model and save it in a pickle file. From the next time onwards, when I train the model, it should save to the previously created pickle file in append mode, which reduces the time of training the model. I am using a LogisticRegression model.

    Any helps would be greatly appreciated.

  57. My3 October 15, 2018 at 10:11 pm #

    Hi Jason,

    I have some requirement to integrate python code with Java.

    I have a ML model which is trained and saved as a pickle file, Randomforestclassifier.pkl. I want to load this one time using Java and then execute my “prediction” part code which is written in Python. So my workflow is like:

    1. Read Randomforestclassifier.pkl file (one time)
    2. Send this model as input to function defined in “python_file.py” which is executed from java for each request
    3. python_file.py has prediction code and predictions returned should be captured by java code
    Please provide suggestions for this workflow requirement. I have used ProcessBuilder in Java to execute python_file.py and everything works fine except for model loading as a one-time activity.

    Can you help me with some client server python programming without using rest APIs for one time model loading?

    • Jason Brownlee October 16, 2018 at 6:37 am #

      I recommend treating it like any other engineering project, gather requirements, review options, minimize risk.

    • Rahul October 18, 2018 at 6:10 pm #

      Hi Jason,My3,

      I have a similar requirement to integrate java with python as my model is in python and in my project we are using java.

      Could you please help here.

  58. Theekshana October 30, 2018 at 12:35 am #

    Hi Jason,

    I have trained my model and evaluated the accuracy using cross-validation score.

    After evaluating the model, should I train my model with the whole data set and then save the new trained model for new future data. (assuming the new model performs with good accuracy around mean accuracy from cross-validation)

    Thank you for your tutorials and instant replies to questions. 🙂

  59. Gagan December 11, 2018 at 5:56 pm #

    Jason, thanks so much for the value add.
