How to Make Predictions with scikit-learn

How to predict classification or regression outcomes
with scikit-learn models in Python.

Once you choose and fit a final machine learning model in scikit-learn, you can use it to make predictions on new data instances.

There is some confusion amongst beginners about how exactly to do this. I often see questions such as:

How do I make predictions with my model in scikit-learn?

In this tutorial, you will discover exactly how you can make classification and regression predictions with a finalized machine learning model in the scikit-learn Python library.

After completing this tutorial, you will know:

  • How to finalize a model in order to make it ready for making predictions.
  • How to make class and probability predictions in scikit-learn.
  • How to make regression predictions in scikit-learn.

Let’s get started.


Tutorial Overview

This tutorial is divided into 3 parts; they are:

  1. First Finalize Your Model
  2. How to Predict With Classification Models
  3. How to Predict With Regression Models

1. First Finalize Your Model

Before you can make predictions, you must train a final model.

You may have trained models using k-fold cross validation or train/test splits of your data. This was done in order to give you an estimate of the skill of the model on out-of-sample data, e.g. new data.

These models have served their purpose and can now be discarded.

You now must train a final model on all of your available data.

You can learn more about how to train a final model in the post “How to Train a Final Machine Learning Model.”

2. How to Predict With Classification Models

Classification problems are those where the model learns a mapping between input features and an output feature that is a label, such as “spam” and “not spam.”

Below is sample code for a finalized LogisticRegression model for a simple binary classification problem.
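A minimal sketch of what that finalized model can look like, using a small synthetic dataset from make_blobs (the dataset and its parameters are illustrative assumptions):

# fit a final LogisticRegression model on all available data
# (the make_blobs dataset and its parameters are illustrative assumptions)
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_blobs

# generate a simple 2D binary classification dataset
X, y = make_blobs(n_samples=100, centers=2, n_features=2, random_state=1)

# fit the final model on all of the available data
model = LogisticRegression()
model.fit(X, y)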

Although we are using LogisticRegression in this tutorial, the same functions are available on practically all classification algorithms in scikit-learn.

After finalizing your model, you may want to save the model to file, e.g. via pickle. Once saved, you can load the model any time and use it to make predictions. For an example of this, see the separate post on saving and loading machine learning models in Python.

For simplicity, we will skip this step for the examples in this tutorial.

There are two types of classification predictions we may wish to make with our finalized model; they are class predictions and probability predictions.

Class Predictions

A class prediction is: given the finalized model and one or more data instances, predict the class for the data instances.

We do not know the outcome classes for the new data. That is why we need the model in the first place.

We can predict the class for new data instances with our finalized classification model in scikit-learn by calling the predict() function.

For example, we have one or more data instances in an array called Xnew. This can be passed to the predict() function on our model in order to predict the class values for each instance in the array.
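Continuing the sketch above, where model is the fitted LogisticRegression (the values in Xnew are illustrative):

# one or more new instances where we do not know the answer (values are illustrative)
Xnew = [[-0.79415228, 2.10495117], [-8.09192641, -4.92563153]]
# predict the class value for each instance
ynew = model.predict(Xnew)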

Multiple Class Predictions

Let’s make this concrete with an example of predicting multiple data instances at once.
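A complete sketch is below, again assuming the synthetic make_blobs setup from above:

# sketch of making multiple class predictions
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_blobs

# generate a 2D binary classification dataset and fit the final model
X, y = make_blobs(n_samples=100, centers=2, n_features=2, random_state=1)
model = LogisticRegression()
model.fit(X, y)

# new instances where we do not know the answer
Xnew, _ = make_blobs(n_samples=3, centers=2, n_features=2, random_state=1)

# make a prediction for each instance
ynew = model.predict(Xnew)

# show the inputs and predicted outputs
for i in range(len(Xnew)):
    print("X=%s, Predicted=%s" % (Xnew[i], ynew[i]))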

Running the example predicts the class for the three new data instances, then prints the data and the predictions together.

Single Class Prediction

If you have just one new data instance, you can provide it wrapped in an array to the predict() function; for example:
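A minimal sketch, assuming the fitted model from above (the input values are illustrative):

# predict the class for a single instance (assumes the fitted model above)
Xnew = [[-0.79415228, 2.10495117]]
ynew = model.predict(Xnew)
print("X=%s, Predicted=%s" % (Xnew[0], ynew[0]))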

Running the example prints the single instance and the predicted class.

A Note on Class Labels

When you prepared your data, you will have mapped the class values from your domain (such as strings) to integer values. You may have used a LabelEncoder.

This LabelEncoder can be used to convert the integers back into string values via the inverse_transform() function.

For this reason, you may want to save (pickle) the LabelEncoder used to encode your y values when fitting your final model.
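A minimal sketch of the round trip, assuming pickle for persistence (the labels and the file name are illustrative assumptions):

import pickle
from sklearn.preprocessing import LabelEncoder

# encode string class values to integers before fitting the final model
# (the labels and the file name are illustrative assumptions)
encoder = LabelEncoder()
y = encoder.fit_transform(["not spam", "spam", "spam", "not spam"])

# save the encoder alongside the final model for later use
with open("label_encoder.pkl", "wb") as f:
    pickle.dump(encoder, f)

# later: load the encoder and map integer predictions back to strings
with open("label_encoder.pkl", "rb") as f:
    encoder = pickle.load(f)
yhat = [1, 0]  # stand-in for model.predict(Xnew)
print(encoder.inverse_transform(yhat))  # ['spam' 'not spam']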

Probability Predictions

Another type of prediction you may wish to make is the probability of the data instance belonging to each class.

This is called a probability prediction: given a new instance, the model returns the probability for each outcome class as a value between 0 and 1.

You can make these types of predictions in scikit-learn by calling the predict_proba() function, for example:
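Assuming the fitted model and the Xnew array from above:

# probability of each instance belonging to each class
ynew = model.predict_proba(Xnew)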

This function is only available on those classification models capable of making a probability prediction, which is most, but not all, models.

The example below makes a probability prediction for each example in the Xnew array of data instances.
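A complete sketch (again, the make_blobs dataset is an illustrative assumption):

# sketch of making probability predictions
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_blobs

# generate a 2D binary classification dataset and fit the final model
X, y = make_blobs(n_samples=100, centers=2, n_features=2, random_state=1)
model = LogisticRegression()
model.fit(X, y)

# new instances where we do not know the answer
Xnew, _ = make_blobs(n_samples=3, centers=2, n_features=2, random_state=1)

# make probability predictions
ynew = model.predict_proba(Xnew)

# show the inputs and the predicted class probabilities
for i in range(len(Xnew)):
    print("X=%s, Predicted=%s" % (Xnew[i], ynew[i]))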

Running the example makes the probability predictions and then prints the input data instances and the probability of each instance belonging to the first and second classes (0 and 1).

This can be helpful in your application if you want to present the probabilities to the user for expert interpretation.

3. How to Predict With Regression Models

Regression is a supervised learning problem where, given input examples, the model learns a mapping to suitable output quantities, such as 0.1 or 0.2.

Below is an example of a finalized LinearRegression model. Again, the functions demonstrated for making regression predictions apply to all of the regression models available in scikit-learn.
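A minimal sketch, using a synthetic dataset from make_regression (the dataset and its parameters are illustrative assumptions):

# fit a final LinearRegression model on all available data
from sklearn.linear_model import LinearRegression
from sklearn.datasets import make_regression

# generate a simple regression dataset
X, y = make_regression(n_samples=100, n_features=2, noise=0.1, random_state=1)

# fit the final model on all of the available data
model = LinearRegression()
model.fit(X, y)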

We can predict quantities by calling the predict() function on the finalized regression model.

As with classification, the predict() function takes a list or array of one or more data instances.

Multiple Regression Predictions

The example below demonstrates how to make regression predictions on multiple data instances with an unknown expected outcome.
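A complete sketch (the make_regression dataset is an illustrative assumption):

# sketch of making multiple regression predictions
from sklearn.linear_model import LinearRegression
from sklearn.datasets import make_regression

# generate a regression dataset and fit the final model
X, y = make_regression(n_samples=100, n_features=2, noise=0.1, random_state=1)
model = LinearRegression()
model.fit(X, y)

# new instances where we do not know the answer
Xnew, _ = make_regression(n_samples=3, n_features=2, noise=0.1, random_state=1)

# make a prediction for each instance
ynew = model.predict(Xnew)

# show the inputs and predicted outputs
for i in range(len(Xnew)):
    print("X=%s, Predicted=%s" % (Xnew[i], ynew[i]))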

Running the example makes multiple predictions, then prints the inputs and predictions side-by-side for review.

Single Regression Prediction

The same function can be used to make a prediction for a single data instance as long as it is suitably wrapped in a surrounding list or array.

For example:
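A minimal sketch, assuming the fitted model from above (the input values are illustrative):

# predict the quantity for a single instance (assumes the fitted model above)
Xnew = [[-1.07296862, -0.52817175]]
ynew = model.predict(Xnew)
print("X=%s, Predicted=%s" % (Xnew[0], ynew[0]))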

Running the example makes a single prediction and prints the data instance and prediction for review.


Summary

In this tutorial, you discovered how you can make classification and regression predictions with a finalized machine learning model in the scikit-learn Python library.

Specifically, you learned:

  • How to finalize a model in order to make it ready for making predictions.
  • How to make class and probability predictions in scikit-learn.
  • How to make regression predictions in scikit-learn.

Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.


49 Responses to How to Make Predictions with scikit-learn

  1. Mitch Sanders April 6, 2018 at 5:42 am #

    Once again, Jason… you’re answering all the questions that need answering.

    I was just working through yesterday how to actually use these highly developed models (which I’ve learned to do expediently from your book by the way) to predict my new input variables. And here in my inbox, you’ve delivered this great article on it!

    Thank you for making us all better at Machine Learning. Your work here is stupendous and appreciated!

    • Jason Brownlee April 6, 2018 at 6:37 am #

      Thanks Mitch, I’m glad it helps!

      Shoot/post me questions any time.

      • Suresh Kumar April 25, 2018 at 12:10 am #

        sureshkumar0707@gmail.com

        Can segmentation be performed using Python? Can machine learning be applied to segmentation?

  2. charles milam April 6, 2018 at 7:36 am #

    How does one turn predictions into actions? Say I am predicting user fraud: how would you go about taking any given prediction point and determining the customer for that particular prediction?

    • Jason Brownlee April 6, 2018 at 3:46 pm #

      Great question.

      The use of the predictive model would be embedded within an application that is aware of the current customer for which a prediction is being made.

  3. Jon April 6, 2018 at 11:06 am #

    Great post. Love to see an example of the same in R.

  4. Jurek April 6, 2018 at 5:26 pm #

    I like your explanation but I am missing one thing.
    How do you encode and scale features for Xnew so they match the training data?

    • Jason Brownlee April 7, 2018 at 6:10 am #

      You must scale new data using the procedure you used to scale the training data.

      This might mean keeping objects or coefficients used to prepare training data to then apply on new data in the future, such as min/max, a vocab, etc. depending on the problem type.

  5. Hazem April 19, 2018 at 8:38 am #

    Thank you very much for the explanation
    But my question is how to package the source code as an .exe application, so it can be used later without a script engine

    • Jason Brownlee April 19, 2018 at 2:47 pm #

      You can use code in your application as you would any other software engineering project.

      I’m sorry, I am not an expert at creating executable files on Windows. I have not used the platform in nearly 2 decades.

  6. Nimish Bhandare May 3, 2018 at 1:39 am #

    How do I save a label encoder and reuse it across different Python files?

    I encoded my data in the training phase, but when I try to predict labels in the testing phase I cannot get the same label encoder, so I am getting wrong predictions.
    Please help.

  7. Nimish Bhandare May 3, 2018 at 2:21 am #

    How to save (pickle) the LabelEncoder used to encode your y values when fitting your final model.

    • Jason Brownlee May 3, 2018 at 6:36 am #

      You can use the pickle dump/load functions directly.

  8. Kevin Burke May 5, 2018 at 4:23 am #

    Hi Jason, (relatively new to ML)

    I have a data frame with,
    1 ID column
    6 feature columns
    1 target column

    when I train/test split the feature and target columns and do predictions etc, that is where I need to map back to the ID.

    I want to be able to view something like this after my predictions:

    A data frame with,
    1 ID column
    6 feature columns
    1 target column
    1 predicted column

    Would you be able to help me with this? Really appreciate it,

    kevin

    • Jason Brownlee May 5, 2018 at 6:26 am #

      The predictions are made in the order of the inputs.

      You can take the array of predictions and align them with your inputs directly and start using them.

      Does that help? If not, what is the specific problem you are having?

      • Kevin Burke May 5, 2018 at 5:45 pm #

        Thanks Jason, I suppose I have reached a point where I can get my final model and I cannot seem to find any information as to what comes next, i.e. real specifics regarding making predictions with new datasets.
        There are a trillion examples of how to work with train/test splits and refining models, but my end goal is taking a ‘complete’ dataset, plugging it into my model prediction, and producing back my initial ‘complete’ dataset PLUS my predicted column(s).
        I work for a credit union in DC and I have a list of member data, with things like member number, name, phone number, address, account balances, and various other features I would use for prediction. I would like to feed this ‘complete’ dataset into my prediction model and have it spit out my initial ‘complete’ dataset PLUS my predicted column(s) that someone could then use to reach out with marketing-related messages to the member, depending on the prediction of course.

        Hope that makes sense..

        thanks again Jason, appreciate your time (how do you find the time?!!)

        • Jason Brownlee May 6, 2018 at 6:25 am #

          Yes, that makes sense.

          You will need to write code to take the input to the model (X) and pass it to the model to make predictions, e.g. model.predict(X), to get the prediction column (yhat).

          You then have the dataset X and the predictions yhat and the rows in one correspond to rows in the other. You can hstack() the arrays to have one large matrix of inputs and predictions.

          What problem specifically are you having in achieving this?

          • Kevin Burke May 8, 2018 at 3:03 am #

            Thank you Jason, not so much a problem as a lack of experience trying to tie it all together, but with your help we’ll get there!!

            thanks again.

          • Jason Brownlee May 8, 2018 at 6:16 am #

            Hang in there Kevin!

  9. Jorge June 4, 2018 at 7:05 am #

    Hello Jason, I’ve started working with scikit-learn models to predict future values, but there is something I don’t clearly understand: let’s suppose I have a stock exchange price dataset with Date, Open Price, Close Price, and the variation rate from the previous date, for a single asset or position. In order to ‘fit’ a good prediction, I decided to use multiple linear regression and polynomial features as well. I can obtain a formula, and even used a support vector machine (SVR), but I don’t know how to predict on a NEW dataset, since the previous one has more than one variable (Open Price, Variation Rate, Date). How can I simulate further values?
    Thanks for your response.

    • Jason Brownlee June 4, 2018 at 2:36 pm #

      The tutorial above shows how to make a prediction with new data.

      What problem are you having exactly?

  10. Krushna Borkar June 5, 2018 at 11:08 pm #

    Thank you so much, Jason, for this great post! Can you please tell me: I have used LabelEncoder for three cities. So now I have to take input from a user as a string, convert it to an int using the LabelEncoder, and provide it to the trained model for prediction. Is that correct?

  11. Nanna June 28, 2018 at 10:45 pm #

    Hi Jason, always a pleasure seeing your blogs.

    I’m thinking of a few things in regard to measuring the “accuracy” of a regression model and making use of such a model, would love to hear your thoughts.

    I have a problem that can be framed either as a classification problem (discrete labels) or as a regression problem (a similar example could be price range vs. exact price). After trying out a few models, I liked the use of a (random forest) regression model.

    Besides evaluating the model on things like R^2 and RMSE I’m doing a sort of pseudo accuracy evaluation.

    Say I have a prediction and a true value of

    [19.8, 20]

    So by true accuracy as in a classification problem the above is wrong, but if I define a new measure that tolerates answers within something fitting to the problem like +/- 2 or something like +/- 10% of the predicted value then the prediction is correct and the model will have greater accuracy. And then the prediction of a given sample would read something like x +/- y .

    Or how would you display/interpret the predictions made by a regression model? Is it “correct” to measure the success as a pseudo accuracy as above? Or is it more correct and robust to express a prediction using e.g. RMSE as pred = x +/- RMSE ? Should I avoid this line of thinking when it comes to regression problems completely? And if such, how would I display my prediction of a given sample with a fitting confidence since the regression model typically is close but not always spot on the true value?

  12. Harsha July 11, 2018 at 9:23 pm #

    Hi Jason,

    when I am assigning the X_Test to y_pred, it is returning the below shown error, can you please explain why?

    y_pred = classifier.predict(X_Test)

    Error:

    NotFittedError                            Traceback (most recent call last)
    <ipython-input> in <module>()
    ----> 1 y_pred = classifier.predict(X_Test)

    C:\Anaconda\lib\site-packages\sklearn\neighbors\classification.py in predict(self, X)
        143         X = check_array(X, accept_sparse='csr')
        144
    --> 145         neigh_dist, neigh_ind = self.kneighbors(X)
        146
        147         classes_ = self.classes_

    C:\Anaconda\lib\site-packages\sklearn\neighbors\base.py in kneighbors(self, X, n_neighbors, return_distance)
        325         """
        326         if self._fit_method is None:
    --> 327             raise NotFittedError("Must fit neighbors before querying.")
        328
        329         if n_neighbors is None:

    NotFittedError: Must fit neighbors before querying.

    • Jason Brownlee July 12, 2018 at 6:24 am #

      It suggests that perhaps your model has not been fit on the data.

  13. Black Manga August 2, 2018 at 8:49 pm #

    Any suggestion on how to reject a prediction when the input is far from the dataset the model was trained on? For example, I’m using an SVM with label 1: 4, 4, 3, 4, 4, 3 and label 2: 5, 6, 7, 5, 6, 5, and I want to predict the value 20. I want the result for that input (20) to be “not valid” rather than label 1 or 2.

    • Jason Brownlee August 3, 2018 at 6:01 am #

      Sorry, I don’t follow. Are you able to give more context?

      • Black Manga August 9, 2018 at 9:59 pm #

        Sorry if it wasn’t clear. Let’s say I want to predict whether a fruit is an apple or an orange, and suddenly I feed grape data into the model I created (apple or orange). In my case the prediction result will be apple or orange (I’m using SVM). So how can I tell that my input data (the grape data) is far different from the training data (the apple and orange data)? Thanks

        • Jason Brownlee August 10, 2018 at 6:13 am #

          You might want to predict probabilities instead of classes and choose to not use a prediction if the predicted probabilities are below a threshold.

          • Black Manga August 10, 2018 at 1:14 pm #

            i will try, thank you very much

  14. Kuler Can August 2, 2018 at 11:35 pm #

    Hello, I used scikit-learn to predict Google stock prices with MLPRegressor. How can I predict new values beyond the dataset, especially beyond the test data?

    • Jason Brownlee August 3, 2018 at 6:03 am #

      The above post will help. What problem are you having exactly?

  15. Manas August 12, 2018 at 3:31 pm #

    Hi Jason,

    Can I fit a model with multiple K-Fold iterations for a very unbalanced class, as shown below?

    Could you kindly help with this!

    # total sample is 20k majority class and 20 minority class
    for val in range(0, 1000):
        # each iteration, randomly sample a balanced subset of minority and majority classes
        balanced_copy_idx = balanced_subsample(labels, 40)
        X1 = X[balanced_copy_idx]
        y1 = y[balanced_copy_idx]

        kf = KFold(y1.shape[0], n_folds=10, shuffle=True, random_state=3)
        for train_index, test_index in kf:
            X_train, y_train = X1[train_index], y1[train_index]
            X_test, y_test = X1[test_index], y1[test_index]

            vectorizer = TfidfVectorizer(max_features=15000, lowercase=True, min_df=5,
                                         max_df=0.8, sublinear_tf=True, use_idf=True,
                                         stop_words='english')

            train_corpus_tf_idf = vectorizer.fit_transform(X_train)
            test_corpus_tf_idf = vectorizer.transform(X_test)

            model1 = LogisticRegression()
            model1.fit(train_corpus_tf_idf, y_train)

    • Jason Brownlee August 13, 2018 at 6:15 am #

      Yes you can.

      Sorry, I cannot review and debug your code, perhaps post on stackoverflow?

  16. Gabriel Joshua Migue September 6, 2018 at 1:46 am #

    What is the purpose of random state? When I try to run my prediction the accuracy is not stable, but when I set random_state=0 it gives a stable prediction with low accuracy; when I change the random state to 100 it gives me higher accuracy.

  17. Ana September 6, 2018 at 1:51 pm #

    Hi Jason, thank you for your always useful and insightful posts!
    One question that seems to be a recurrent issue regarding predict_proba(). For which sklearn models can it be used? Eg. can it be used for logistic regression, SVM, naive Bayes and random forest? I was playing with it recently for both binary and multiclass classification and it seemed to be producing the following paradox: probability vectors for each sample, in which the smallest probability was assigned to the class that was actually being predicted. Is it possible that predict_proba() generates (1-prob) results?

    • Jason Brownlee September 6, 2018 at 2:15 pm #

      Not all, some don’t support predicting probabilities natively and some that don’t may use a decision_function() instead.

  18. Scriptkidd September 11, 2018 at 12:19 am #

    How can I make the prediction more detailed? Like say I input a hibiscus flower into this model, instead of probabilities, I want to get something like “input not a Iris, it was off by blahblahblah”, and probably take a decision. I think that’s what Black Manga meant in his comments above

    • Jason Brownlee September 11, 2018 at 6:30 am #

      You would have to write this “interpretation” yourself.

  19. Sintyadi Thong September 17, 2018 at 2:18 pm #

    Hi, Jason… It is a great article indeed.
    I have a question,
    I have trained my models and have saved the model.

    The next part that I would like to put it into production to build a .py file which function is only to predict the given sets of parameters.

    How should I code my predict.py file so using command line, I can just input some variables and get the output.

    Should I also add the import functions inside my predict.py?

    Thanks in advance!

    • Jason Brownlee September 18, 2018 at 6:09 am #

      This sounds like a software engineering question, rather than an ML question.

      You can use Python functions to read input and write output from your script. Perhaps check the Python standard API or a good reference text?

  20. MadTech October 10, 2018 at 5:13 pm #

    Thanks for the attempt, but unfortunately, I did not find this post very helpful because it failed to present *simple* examples (IMO). E.g. if the distinction between Classification Models and Regression Models is paramount, please include a link that sheds light on it. Missing here are useful demonstrations on how to perform the *simplest* possible prediction. E.g. a data set as straightforward as world population/year or home price/bathrooms: show how to load the data, then “ask” the algorithm for a prediction for a specific value, e.g. what will the world population be in 2020? What is the predicted home value of a house with 3 bathrooms? Something simpler than unexplained complex variable of Xnew = [[-1.07296862, -0.52817175]] — sorry, I don’t have any idea what that is.

    I know I’m new to ML, but I feel this post could be far more useful if it tackled the simplest possible example and then transitioned up to what is here. Examples examples examples: those are the only things that really matter.
