Metrics To Evaluate Machine Learning Algorithms in Python

The metrics that you choose to evaluate your machine learning algorithms are very important.

The choice of metrics influences how the performance of machine learning algorithms is measured and compared. It influences how you weight the importance of different characteristics in the results and, ultimately, which algorithm you choose.

In this post, you will discover how to select and use different machine learning performance metrics in Python with scikit-learn.

Kick-start your project with my new book Machine Learning Mastery With Python, including step-by-step tutorials and the Python source code files for all examples.

Let’s get started.

  • Update Jan/2017: Updated to reflect changes to the scikit-learn API in version 0.18.
  • Update Mar/2018: Added alternate link to download the dataset as the original appears to have been taken down.
  • Update Nov/2019: Improve description of ROC AUC.
  • Update Aug/2020: Updated for changes to the API.
Photo by Ferrous Büller, some rights reserved.

About the Recipes

Various machine learning evaluation metrics are demonstrated in this post using small code recipes in Python and scikit-learn.

Each recipe is designed to be standalone so that you can copy-and-paste it into your project and use it immediately.

Metrics are demonstrated for both classification and regression type machine learning problems.

  • For classification metrics, the Pima Indians onset of diabetes dataset is used for demonstration. This is a binary classification problem where all of the input variables are numeric (update: download from here).
  • For regression metrics, the Boston House Price dataset is used for demonstration. This is a regression problem where all of the input variables are also numeric (update: download data from here).

In each recipe, the dataset is downloaded directly.

All recipes evaluate the same algorithms: Logistic Regression for the classification problems and Linear Regression for the regression problems. A 10-fold cross-validation test harness is used to demonstrate each metric, because this is the most likely scenario in which you will be employing different algorithm evaluation metrics.

A caveat in these recipes is the cross_val_score function used to report the performance in each recipe. It does allow the use of the different scoring metrics that will be discussed, but all scores are reported so that they can be sorted in ascending order (the largest score is best).

Some evaluation metrics (like mean squared error) are naturally descending scores (the smallest score is best) and as such are reported as negative by the cross_val_score() function. This is important to note, because some scores that by definition can never be negative will be reported as negative.
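
To make this sign convention concrete, here is a small sketch using a synthetic dataset (rather than the recipes' data, which is an assumption of convenience here): the error metric comes back negated, and negating it again recovers the raw error.

from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# a tiny synthetic regression problem, just to illustrate the sign convention
X, y = make_regression(n_samples=100, n_features=5, noise=10.0, random_state=1)
scores = cross_val_score(LinearRegression(), X, y, cv=5,
                         scoring='neg_mean_squared_error')
print(scores)          # all values are negative
print(-scores.mean())  # negate to recover the mean squared error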

You can learn more about machine learning algorithm performance metrics supported by scikit-learn on the page Model evaluation: quantifying the quality of predictions.

Let’s get on with the evaluation metrics.


Classification Metrics

Classification problems are perhaps the most common type of machine learning problem and as such there are a myriad of metrics that can be used to evaluate predictions for these problems.

In this section we will review how to use the following metrics:

  1. Classification Accuracy.
  2. Log Loss.
  3. Area Under ROC Curve.
  4. Confusion Matrix.
  5. Classification Report.

1. Classification Accuracy

Classification accuracy is the number of correct predictions made as a ratio of all predictions made.

This is the most common evaluation metric for classification problems; it is also the most misused. It is really only suitable when there are an equal number of observations in each class (which is rarely the case) and when all predictions and prediction errors are equally important, which is often not the case.

Below is an example of calculating classification accuracy.
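
A minimal, standalone version of the recipe might look like the following sketch; the dataset URL and column names are assumptions based on a commonly used public mirror of the Pima Indians data, so substitute your own copy if needed.

# Cross-validated classification accuracy for logistic regression
from pandas import read_csv
from sklearn.model_selection import KFold, cross_val_score
from sklearn.linear_model import LogisticRegression

url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv"
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
dataframe = read_csv(url, names=names)
array = dataframe.values
X, Y = array[:, 0:8], array[:, 8]

# 10-fold cross-validation test harness
kfold = KFold(n_splits=10, random_state=7, shuffle=True)
model = LogisticRegression(solver='liblinear')
results = cross_val_score(model, X, Y, cv=kfold, scoring='accuracy')
print("Accuracy: %.3f (%.3f)" % (results.mean(), results.std()))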

You can see that the ratio is reported.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

This can be converted into a percentage by multiplying the value by 100, giving an accuracy of approximately 77%.

2. Log Loss

Logistic loss (or log loss) is a performance metric for evaluating predictions of the probability of membership in a given class.

The scalar probability between 0 and 1 can be seen as a measure of confidence for a prediction by an algorithm. Predictions that are correct or incorrect are rewarded or punished proportionally to the confidence of the prediction.

For more on log loss and its relationship to cross-entropy, see the related tutorial.

Below is an example of calculating log loss for Logistic regression predictions on the Pima Indians onset of diabetes dataset.
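
A sketch of the recipe follows; as in the accuracy example, the dataset URL and column names are assumptions, and only the scoring string changes.

from pandas import read_csv
from sklearn.model_selection import KFold, cross_val_score
from sklearn.linear_model import LogisticRegression

url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv"
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
array = read_csv(url, names=names).values
X, Y = array[:, 0:8], array[:, 8]

# log loss is reported as a negated score by cross_val_score
kfold = KFold(n_splits=10, random_state=7, shuffle=True)
model = LogisticRegression(solver='liblinear')
results = cross_val_score(model, X, Y, cv=kfold, scoring='neg_log_loss')
print("Log Loss: %.3f (%.3f)" % (results.mean(), results.std()))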

Smaller log loss is better with 0 representing a perfect log loss.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

As mentioned above, the measure is reported as a negative value by the cross_val_score() function so that the largest score is still the best.

3. Area Under ROC Curve

Area Under ROC Curve (or ROC AUC for short) is a performance metric for binary classification problems.

The AUC represents a model’s ability to discriminate between positive and negative classes. An area of 1.0 represents a model that made all predictions perfectly. An area of 0.5 represents a model that is no better than random.

A ROC Curve is a plot of the true positive rate against the false positive rate for a given set of probability predictions at different thresholds used to map the probabilities to class labels. The AUC is then the approximate integral under that curve.

For more on ROC Curves and ROC AUC, see the related tutorial.

The example below provides a demonstration of calculating AUC.
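
A sketch of the recipe is shown next; again, the dataset URL and column names are assumptions, with only the scoring string changed to 'roc_auc'.

from pandas import read_csv
from sklearn.model_selection import KFold, cross_val_score
from sklearn.linear_model import LogisticRegression

url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv"
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
array = read_csv(url, names=names).values
X, Y = array[:, 0:8], array[:, 8]

# ROC AUC via 10-fold cross-validation
kfold = KFold(n_splits=10, random_state=7, shuffle=True)
model = LogisticRegression(solver='liblinear')
results = cross_val_score(model, X, Y, cv=kfold, scoring='roc_auc')
print("AUC: %.3f (%.3f)" % (results.mean(), results.std()))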

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

You can see that the AUC is relatively close to 1 and greater than 0.5, suggesting some skill in the predictions.

4. Confusion Matrix

The confusion matrix is a handy presentation of the accuracy of a model with two or more classes.

The table presents predictions on the x-axis and actual outcomes on the y-axis. The cells of the table contain the counts of predictions made by the machine learning algorithm.

For example, a machine learning algorithm can predict 0 or 1, and each prediction may actually have been a 0 or 1. Predictions for 0 that were actually 0 appear in the cell for prediction=0 and actual=0, whereas predictions for 0 that were actually 1 appear in the cell for prediction=0 and actual=1. And so on.

For more on the confusion matrix, see the related tutorial.

Below is an example of calculating a confusion matrix for a set of predictions made by a model on a test set.
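
A sketch of the recipe is below; it holds back a test set rather than using cross-validation, and the dataset URL and column names remain assumptions.

from pandas import read_csv
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv"
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
array = read_csv(url, names=names).values
X, Y = array[:, 0:8], array[:, 8]

# hold back 33% of the data, fit on the rest, then cross-tabulate
# predicted vs. actual class labels
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.33, random_state=7)
model = LogisticRegression(solver='liblinear')
model.fit(X_train, Y_train)
predicted = model.predict(X_test)
print(confusion_matrix(Y_test, predicted))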

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

Although the array is printed without headings, you can see that the majority of the predictions fall on the diagonal line of the matrix (which are correct predictions).

5. Classification Report

Scikit-learn provides a convenience report for classification problems that gives you a quick idea of the accuracy of a model using a number of measures.

The classification_report() function displays the precision, recall, f1-score and support for each class.

The example below demonstrates the report on the binary classification problem.
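
A sketch of the recipe is below, reusing the same hold-out split as the confusion matrix example; the dataset URL and column names are assumptions.

from pandas import read_csv
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv"
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
array = read_csv(url, names=names).values
X, Y = array[:, 0:8], array[:, 8]

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.33, random_state=7)
model = LogisticRegression(solver='liblinear')
model.fit(X_train, Y_train)
predicted = model.predict(X_test)
# precision, recall, f1-score and support for each class
print(classification_report(Y_test, predicted))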

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

You can see good precision and recall for the algorithm.

Regression Metrics

In this section, we will review 3 of the most common metrics for evaluating predictions on regression machine learning problems:

  1. Mean Absolute Error.
  2. Mean Squared Error.
  3. R^2.

1. Mean Absolute Error

The Mean Absolute Error (or MAE) is the average of the absolute differences between predictions and actual values. It gives an idea of how wrong the predictions were.

The measure gives an idea of the magnitude of the error, but no idea of the direction (e.g. over- or under-predicting).

You can learn more about Mean Absolute Error on Wikipedia.

The example below demonstrates calculating mean absolute error on the Boston house price dataset.
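
A sketch of the recipe is below; the housing data URL, the whitespace-separated format, and the column names are assumptions based on a commonly used mirror of the Boston housing data.

from pandas import read_csv
from sklearn.model_selection import KFold, cross_val_score
from sklearn.linear_model import LinearRegression

url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/housing.csv"
names = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS',
         'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT', 'MEDV']
dataframe = read_csv(url, delim_whitespace=True, names=names)
array = dataframe.values
X, Y = array[:, 0:13], array[:, 13]

# MAE is reported as a negated score by cross_val_score
kfold = KFold(n_splits=10, random_state=7, shuffle=True)
model = LinearRegression()
results = cross_val_score(model, X, Y, cv=kfold, scoring='neg_mean_absolute_error')
print("MAE: %.3f (%.3f)" % (results.mean(), results.std()))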

A value of 0 indicates no error or perfect predictions.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

Like log loss, this metric is inverted (reported as a negative value) by the cross_val_score() function.

2. Mean Squared Error

The Mean Squared Error (or MSE) is much like the mean absolute error in that it provides a gross idea of the magnitude of error.

Taking the square root of the mean squared error converts the units back to the original units of the output variable and can be meaningful for description and presentation. This is called the Root Mean Squared Error (or RMSE).

You can learn more about Mean Squared Error on Wikipedia.

The example below provides a demonstration of calculating mean squared error.
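
A sketch of the recipe is below; as in the MAE example, the housing data URL and column names are assumptions, and only the scoring string changes. An optional line shows how an RMSE could be derived from the negated score.

from pandas import read_csv
from sklearn.model_selection import KFold, cross_val_score
from sklearn.linear_model import LinearRegression

url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/housing.csv"
names = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS',
         'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT', 'MEDV']
array = read_csv(url, delim_whitespace=True, names=names).values
X, Y = array[:, 0:13], array[:, 13]

kfold = KFold(n_splits=10, random_state=7, shuffle=True)
model = LinearRegression()
results = cross_val_score(model, X, Y, cv=kfold, scoring='neg_mean_squared_error')
print("MSE: %.3f (%.3f)" % (results.mean(), results.std()))

# RMSE: negate the reported score back to a positive MSE, then take the square root
rmse = (-results.mean()) ** 0.5
print("RMSE: %.3f" % rmse)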

This metric, too, is inverted (reported as negative) so that the largest score is best.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

Remember to take the absolute value (or negate the reported score) before taking the square root if you are interested in calculating the RMSE.

3. R^2 Metric

The R^2 (or R Squared) metric provides an indication of the goodness of fit of a set of predictions to the actual values. In statistical literature, this measure is called the coefficient of determination.

This is a value between 0 and 1, for no fit and perfect fit respectively.

You can learn more in the Coefficient of determination article on Wikipedia.

The example below provides a demonstration of calculating the mean R^2 for a set of predictions.
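
A sketch of the recipe is below; once more the housing data URL and column names are assumptions, and the scoring string is simply 'r2'.

from pandas import read_csv
from sklearn.model_selection import KFold, cross_val_score
from sklearn.linear_model import LinearRegression

url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/housing.csv"
names = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS',
         'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT', 'MEDV']
array = read_csv(url, delim_whitespace=True, names=names).values
X, Y = array[:, 0:13], array[:, 13]

# mean R^2 across the 10 cross-validation folds
kfold = KFold(n_splits=10, random_state=7, shuffle=True)
model = LinearRegression()
results = cross_val_score(model, X, Y, cv=kfold, scoring='r2')
print("R^2: %.3f (%.3f)" % (results.mean(), results.std()))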

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

You can see that the predictions have a poor fit to the actual values with a value close to zero and less than 0.5.

Summary

In this post, you discovered metrics that you can use to evaluate your machine learning algorithms.

You learned about 3 classification metrics:

  • Accuracy.
  • Log Loss.
  • Area Under ROC Curve.

Also 2 convenience methods for classification prediction results:

  • Confusion Matrix.
  • Classification Report.

And 3 regression metrics:

  • Mean Absolute Error.
  • Mean Squared Error.
  • R^2.

Do you have any questions about metrics for evaluating machine learning algorithms or this post? Ask your question in the comments and I will do my best to answer it.


127 Responses to Metrics To Evaluate Machine Learning Algorithms in Python

  1. Avatar
    Sayak Paul February 2, 2017 at 6:03 am #

    What do you mean by model_selection?

  2. Avatar
    Arek May 12, 2017 at 6:35 am #

    Hello Jason

    Thanks for this tutorial but i have one question about computing auc.

    I’m doing binary classification with imbalanced classes and then computing auc but i have one problem. Im using keras.

    My method for computing auc looks like this:
    1. Train model and save him – 1st python script
    2. load model and model weiths – 2nd python script
    3. load one image (loop) and save result to csv file -2nd python script
    4. use roc_auc_score from sklearn

    in 3rd point im loading image and then i’m using predict_proba for result. Results are always from 0-1 but should i use predict proba?.This method is from http://stackoverflow.com/questions/41032551/how-to-compute-receiving-operating-characteristic-roc-and-auc-in-keras
    Eka solution.

    • Avatar
      Jason Brownlee May 12, 2017 at 7:52 am #

      Looks good, I would recommend predict_proba(), I expect it normalizes any softmax output to ensure the values add to one.

  3. Avatar
    Evy May 18, 2017 at 9:11 am #

    Jason,
    Long time reader, first time writer. I am having trouble how to pick which model performance metric will be useful for a current project. Let me give you some background.

    I have a classification model that I really want to maximize my Recall results. The reasoning is that, if I say something is 1 when it is not 1 I lose a lot of time/$, but when I say something is 0 and its is not 0 I don’t lose much time/$ at all. Ie. I want to reduce False Negatives. Also the distribution of the dependent variable in my training set is highly skewed toward 0s, less than 5% of all my dependent variables in the training set are 1s. Normally I would use an F1 score, AUC, VIF, Accuracy, MAE, MSE or many of the other classification model metrics that are discussed, but I am unsure what to use now. Currently I am using LogLoss as my model performance metric as I have found documentation that this is the correct metric to use in cases of a skewed dependent variable, as well a situations where I mainly care about Recall and don’t care much about Precision or visa versa. I received this information from people on the Kaggle forums.

    Thank you for your expert opinion, I very much appreciate your help. If you don’t have time for such I question I will understand.

    • Avatar
      Jason Brownlee May 19, 2017 at 8:08 am #

      Hi Evy, thanks for being a long time reader.

      I would suggest tuning your model and focusing on the recall statistic alone.

      I would also suggest using models that make predictions as a probability and tune the threshold on the probability too to optimize the recall (ROC curves can help understand this).

      I hope that helps as a start.

  4. Avatar
    Jeppe June 9, 2017 at 7:39 pm #

    Hey Jason,

    Thanks for the great articles, I just have a question about the MSE and its properties. When building a linear model, adding features should always lower the MSE in the training data, right?

    It’s just, when I use the polynomial features method in SciKit, and fit a linear regression, the MSE does not necessarily fall, sometimes it rises, as I add features.

    Is it because of some innate properties of the MSE metric, or is it simply because I have a bug in my code?

    • Avatar
      Jason Brownlee June 10, 2017 at 8:21 am #

      Adding features has no guarantee of reducing MSE as far as I know. Where did you get that from?

      • Avatar
        JONATA PAULINO DA COSTA March 18, 2019 at 12:30 am #

        Hello. I live in Brazil and I always read your posts. I have an LSTM recurrent neural network and I am doing binary classification with a Twitter dataset. I am using accuracy to evaluate my model. Could you suggest another way to evaluate this model? I am using Keras and Python. If you could help me with an example, I would appreciate it.

        • Avatar
          Jason Brownlee March 18, 2019 at 6:06 am #

          It really depends on the specifics of your problem.

          For example, if you are classifying tweets, then perhaps accuracy makes sense. If you are predicting words, then perhaps BLEU or ROGUE makes sense.

      • Avatar
        Anubhav September 7, 2019 at 5:37 am #

        Hi Jason,

        I think where Jeppe is coming from is that by increasing features, we are increasing the complexity of our model, hence we are moving towards overfitting.
        Now in overfitted model, the predicted data points will be much closer to the actual data points and hence the MSE should decrease.

        By the way, I think the same…. :/

        • Avatar
          Jason Brownlee September 7, 2019 at 5:39 am #

          I disagree.

          More features can better expose the structure of the problem and can result in a better fit. The model may or may not overfit, it is an orthogonal concern.

  5. Avatar
    Cheng June 14, 2017 at 3:45 am #

    Hi Jason,

    Thank you for this article. Very helpful! Now I am using Python SciKit Learn to train an imbalanced dataset. I am looking for a good metric embedded in Python SciKit Learn already that works for evaluating the performance of model in predicting imbalanced dataset. Do you have some recommendations or ideas? Alternatively, I knew a judging criterion, balanced error rate (BER), but I have not idea how to use it as a scoring parameter with Python?

    Thank you much!

    Cheng

  6. Avatar
    Huyen August 8, 2017 at 9:17 pm #

    Hi Jason,

    I still have some confusions about the metrics to evaluate regression problem. In cross_val_score of cross validation, the final results are the negative mean squared error and negative mean absolute error, so what does it mean? (It means the model performs poorly or that’s the good sign that the model can minimize the metrics?)

    Additionally, I used some regression methods and they returned very good results such as R_squared = 0.9999 and very small MSE, MSA on the testing part. However the result of cross_val_score is 1.00 +- 00 for example, so it means the model is overfitting?

    So in general, I suppose when we use cross_val_score to evaluate regression model, we should choose the model which has the smallest MSE and MSA, that’s true or not?

    Thank you so much for your answer, that will help me alot

    • Avatar
      Jason Brownlee August 9, 2017 at 6:29 am #

      Good question.

      Generally, the interpretation of the score is specific to the problem. A good score is really only relative to scores you can achieve with other methods.

      Choosing a model depends on your application, but generally, you want to pick the simplest model that gives the best model skill.

  7. Avatar
    Stef August 20, 2017 at 4:39 am #

    Hi Jason,

    I recently read some articles that were completely against using R^2 for evaluating non-linear models (such as in the case of ML algorithms). Given that it is still common practice to use it, whats your take on this?

    Cheers

    • Avatar
      Jason Brownlee August 20, 2017 at 6:08 am #

      I recommend using a few metrics and interpret them in the context of your specific problem.

      I do find R^2 useful.

  8. Avatar
    emily October 5, 2017 at 1:14 am #

    how to choose the right metric for a machine learning problem ?

    • Avatar
      Jason Brownlee October 5, 2017 at 5:24 am #

      You need a metrics that best captures what you are looking to optimize on your specific problem.

      Maybe you need to talk to domain experts. Maybe you need to try out a few metrics and present results to stakeholders. It could be an iterative process.

  9. Avatar
    x November 9, 2017 at 7:54 am #

    How CA depends on the value ‘random_state’?

  10. Avatar
    kono November 12, 2017 at 4:17 am #

    Jason,

    What are differences between loss functions and evaluation metrics? Loss function = evaluation metric – regularization terms?

    Kono

    • Avatar
      Jason Brownlee November 12, 2017 at 9:08 am #

      Great question.

      A loss function is minimized when fitting a model.

      A loss function score can be reported as a model skill, e.g. an evaluation metric, but does not have to be.

      Regularization terms are modifications of a loss function to penalize complex models, e.g. to result in a simpler and often better/more skillful resulting model.

      Does that help?

  11. Avatar
    kono November 12, 2017 at 4:02 pm #

    @Jason, thanks! very helpful!

  12. Avatar
    Robert December 5, 2017 at 9:17 pm #

    Hi Jason,

    I have the following question. Instead of using the MSE in the standard configuration, I want to use it with sample weights, where basically each datapoint would get a different weight (it is a separate column in the original dataframe, but clearly not a feature of the trained model). How would I incorporate those sample weight in the scoring function?

  13. Avatar
    Shabnam December 10, 2017 at 2:16 pm #

    Another awesome and helpful post in your blog. Thanks a million!

  14. Avatar
    Rizwan Mian January 2, 2018 at 3:13 pm #

    In the general case, I see a sensitivity and specificity tradeoff when the classes overlap [1].
    – How can I find the optimal point where both values are high algorithmically using python?
    – Would the classifier give the highest accuracy at this point assuming classes are balanced?

    Thanking in advance

    [1] https://www.youtube.com/watch?v=vtYDyGGeQyo

    • Avatar
      Jason Brownlee January 2, 2018 at 4:01 pm #

      You might want to look into ROC curves and model calibration.

  15. Avatar
    Prashant February 21, 2018 at 7:08 pm #

    Hi Jason,

    I would love to see a similar post on unsupervised learning algorithms metric.
    From my side, I only knew adjusted rand score as one of the metric.

  16. Avatar
    Matthieu February 24, 2018 at 10:25 am #

    Hi Jason,

    Thank you for this detailed explanation of the metrics. I would have however a question about my problem. I have a binary classification problem, where I am interested in accuracy of prediction of both negative and positive classes and negative class has bigger instances than positive class.

    1) In that case, would it be better to use “roc_auc” or “f1-score” metric to optimize accuracy of classifier ?
    2) Would it be better to use class or probabilities prediction ? In the latter case how to optimize the calibration of the classifier ?

    Many thanks in advance for your help !

  17. Avatar
    Dan April 3, 2018 at 4:23 pm #

    Thanks Jason, very helpful information as always! Which one of these tests could also work for non-linear learning algorithms? Or are you aware of any sources that might help answer this question? Eg. results produced from SVC with rbf kernal?

    • Avatar
      Jason Brownlee April 4, 2018 at 6:07 am #

      They are all suitable for linear and nonlinear methods.

  18. Avatar
    David April 23, 2018 at 2:08 am #

    Hey Jason

    Are MSE and MAE only used to compare models of the same dataset? The reason I ask is that I used an autoregression on sensory data from lets say t = 0s to t = 50s and then used the autoregression parameters to predict the time series data from t = 50s to t = 100s. The values are very small and so I get small MSE and MAE values but it doesn’t really mean anything. Is there any way to get an absolute score of your predictions, MSE and MAE seem to be highly dependent on your dataset magnitude, and I can only seemed them as a way to compare models of the same dataset.

    • Avatar
      Jason Brownlee April 23, 2018 at 6:18 am #

      Perhaps you can rescale your data to the range [0-1] prior to modeling?

  19. Avatar
    vaibhav kumar May 28, 2018 at 6:02 pm #

    Dear Jason,

    Thank you for your informative post.

    For categorical variables with more than two potential values, how are their accuracy measures and F-scores calculated?

    I have a dataset with variables (Population class, building type, Total floors) Building Type with possible values (Residential, commercial, Industry, Special Buildings), population class (High, MED, LOW) and the total floor is a numerical variable with values ranging from 1 to 35. After training the data I wanted to predict the “population class”. I applied SVM on the datasets. How are the accuracy measures and F-scores calculated for my case? Is accuracy measure and F-Score a good metric for a categorical variable with values more than one? Am I doing the correct thing by evaluating the classification of the categorical variable (population class) with more than two potential values (High, MED, LOW)? What if any variable is an ordinal variable should the same metric and classification algorithms are applied to predict which are applied to binary variables?

  20. Avatar
    Reed Guo June 7, 2018 at 5:17 pm #

    Hi, Jason

    I have a question and cannot find a good answer in the Internet. And in this post, it is not mentioned neither.

    I use R^2 as the metrics to evaluate regression model. In which range it can indicate this is a good model?

    For example:

    R^2 >= 90%: perfect
    R^2 >= 80: very good
    R^2 >= 70: good
    R^2 >= 60: poor
    R^2 <= 60%: rubbish

    Thank you very much.

    • Avatar
      Jason Brownlee June 8, 2018 at 6:06 am #

      Good question, I have seen tables like this in books on “effect size” in statistics.

      Try searching on google/google books/google scholar.

      I hope that helps.

  21. Avatar
    Reed Guo June 15, 2018 at 3:32 pm #

    Thank you very much Jason.

  22. Avatar
    ND June 20, 2018 at 12:49 pm #

    Hi Jason,
    I’m working on a classification problem with unbalanced dataset. I’m using recall/precision and confusion matrix as my evaluation metrics. Initially in my dataset, the observation ratio for class ‘1’ to class ‘0’ is 1:7 so I use SMOTE and up-sample the minority class in training set to make the ratio 3:5 (i.e. 60% class ‘1’ observations).

    On validation set, I get the following metrics:
    At Prob threshold: 0.3
    Recall score: 0.79
    Precision score: 0.54
    f1 score: 0.64
    AUC score: 0.845674177201395

    On test set, I get the following metrics:
    w/ default .predict() threshold I get
    Recall score: 0.91
    Precision score: 0.45
    f1 score: 0.60

    But at Prob threshold: 0.7, I get the following on my test set
    Recall score: 0.8
    Precision score: 0.61
    f1 score: 0.69
    AUC score: 0.8

    My question is: is it ok to select a different threshold for test set for optimal recall/precision scores as compared to the training/validation set?

    Also could you please suggest options to improve precision while maintaining recall.

    Thanks,
    ND

    • Avatar
      Jason Brownlee June 21, 2018 at 6:07 am #

      No, threshold must be chosen on a validation set and used on a test set.

      When using a test set, we are assuming we do not know the answers and the result we get is the result we get.

      • Avatar
        ND June 21, 2018 at 2:15 pm #

        Thanks Jason. Could you recommend some options to explore in order to improve precision while maintaining recall scores for imbalanced dataset based ml models?

        Appreciate your blogs. I’ve referred to a few of them and they’ve really helpful in building my ml code.

        ND

        • Avatar
          Jason Brownlee June 21, 2018 at 4:58 pm #

          You could use a precision-recall curve and tune the threshold.

  23. Avatar
    gautham July 15, 2018 at 12:40 pm #

    Hello guys… Am trying to tag the parts of speech for a text using pos_tag function that was implemented by perceptron tagger. After tagging the text i want to calculate the accuracy of input with any corpus either brown or conll2000 or tree bank.. How to find that accuracy?? Can anyone please help me out from this problem…

    • Avatar
      Jason Brownlee July 16, 2018 at 6:10 am #

      Sorry, I don’t have tutorials on part of speech tagging.

  24. Avatar
    Claire August 18, 2018 at 10:34 pm #

    Hi Jason,

    Thanks for your clear explanations.

    This page looks at classification and regression problems. I’m working on a segmentation problem, classifying land cover from remotely sensed imagery. What do you think is the best evaluation metric for this case?

    • Avatar
      Jason Brownlee August 19, 2018 at 6:21 am #

      It is hard for me to say. Some ideas:

      Talk to stakeholders and nut out what is the most important way of evaluating skill of a model?
      Review the literature and see what types of metrics are being used on similar problems?
      Try a few metrics and see if they capture what is important?

    • Avatar
      VK May 21, 2020 at 9:45 am #

      @Claire: I am also facing a similar situation as yours as I am working with SAR images for segmentation. Have you been able to find some evaluation metrics for the segmentation part especially in the field of remote sensing image segmentation?
      Thank you.

  25. Avatar
    J.Straub September 18, 2018 at 7:25 pm #

    Hi Jason,
    i’m working on a multi-variate regression problem. Which regression metrics can I use for evaluation?

    Thanks in advance!

  26. Avatar
    dy October 4, 2018 at 8:03 pm #

    hi jason, its me again. -34.705 (45.574), whats the value in bracket? tq!

  27. Avatar
    omar October 20, 2018 at 9:16 pm #

    how can we print classification report of more than one models through array

    • Avatar
      Jason Brownlee October 21, 2018 at 6:11 am #

      Use a for loop and enumerate over the models calling print() for each report you require.

  28. Avatar
    Felipe October 24, 2018 at 1:54 pm #

    Is it possible to plot the ROC curve by using the cross_val_score function? Because I see many examples making a for instead of using the function.

    • Avatar
      Jason Brownlee October 24, 2018 at 2:49 pm #

      I don’t think so, a curve is for a single set of predictions. With CV, you would have k curves I guess.

  29. Avatar
    salma December 18, 2018 at 1:55 am #

    How to get the performance for each class (if binary for the class 0 and for the class 1) using cross_val_score function?
    And thank you.

  30. Avatar
    Josh Zastrow January 9, 2019 at 5:51 am #

    So what if you have a classification problem where the categories are ordinal? For example, classify shirt size but there is XS, S, M, L, XL, XXL. Accuracy or ROC curves wouldn’t tell the whole truth… does MAE or MSE make more sense?

    • Avatar
      Jason Brownlee January 9, 2019 at 8:50 am #

      Perhaps. Some cases/testing may be required to settle on a measure of performance that makes sense for the project.

  31. Avatar
    Atharva Thanekar February 4, 2019 at 5:58 pm #

    hey i have one question
    How do we calculate the accuracy,sensitivity, precision and specificity from rmse value of regression model..plz help

  32. Avatar
    Ghofrane February 10, 2019 at 6:47 am #

    Hi Jason,
    thank you for this kind of posts and comments!

    I’m working on a regression problem with a cross sectional dataset.I’m using RMSE and NAE (Normalized Absolute Error).

    It would be very helpful if you could answer the following questions:

    – How do we interpret the values of NAE and compare the performances based upon them (I know the smaller the better but I mean interpretation with regard to the average)?
    I got these values of NAE for different models:
    Model1: 0.629
    Model2: 1.02
    Model3: 0.594
    Model4: 0.751

    – what could be the reason of different ranking when using RMSE and NAE?

    Thank you in advance!

    • Avatar
      Jason Brownlee February 10, 2019 at 9:46 am #

      Compare all results to a naive baseline, e.g. comparisons are relative.

      I have never heard of NAE, sorry.

  33. Avatar
    Teklie February 22, 2019 at 8:18 am #

    Thanks for your valuable information. Just one question

    – How can we continuously evaluate(test) machine learning models after deployment?

  34. Avatar
    Prashant Priyadarshi April 9, 2019 at 2:34 pm #

    Sir,
    What should be the class of all input variables (numeric or categorical) for Linear Regression, Logistic Regression, Decision Tree, Random Forest, SVM, Naive Bayes, KNN…. etc.. etc

  35. Avatar
    Gilles Xiberras April 29, 2019 at 3:23 am #

    Hello Jason,

    as usual, your posts are a gold mine. 🙂

    you wrote :

    “The Mean Absolute Error (or MAE) is the sum of the absolute differences between predictions and actual values. It gives an idea of how wrong the predictions were.”

    I suppose that you forgot to mention “the sum … divided by the number of observations” or replace the “sum” by “mean”

    Cheers Gilles.

  36. Avatar
    Michael May 22, 2019 at 3:21 pm #

    Hello, how can one compare minimum spanning tree algorithm, shortest path algorithm and salesman problem using metric evaluation algorithm.

    • Avatar
      Jason Brownlee May 23, 2019 at 5:52 am #

      Perhaps based on the min distance found across a suite of contrived problems scaling in difficulty?

  37. Avatar
    Abhijit Ghosh June 10, 2019 at 1:55 pm #

    Hi, Nice blog 🙂 . Can you suggest me some review article on the different kinds of error metrics in ML and Deep Learning ? Thanks

      • Avatar
        Suvi August 8, 2019 at 9:14 pm #

        Hi Jason, excellent post! I am a biologist in a team working on developing image-based machine learning algorithms to analyse cellular behavior based on multiple parameters simultaneously. For me the most “logical” way to present whether our algorithm is good at doing what it’s meant to do is to use the classification accuracy. However, the non-biologists argue we should use the R-squared value for this purpose. How can we decide which is the best metrics to use, and also: what is the most used one for this type of data, when we want most of our audience to understand how amazing our algorithm is 🙂 ? Thank you.

        • Avatar
          Jason Brownlee August 9, 2019 at 8:14 am #

          Great question.

          There’s no easy answer.

          You have to start with an idea of what is valued in a model and then how to measure that. It may require using best practices in the field or talking to lots of experts and doing some hard thinking.

          Sometimes it helps to pick one measure to choose a model and another to present the model, e.g. minimize loss on validation dataset then classification accuracy on a test set.

          I hope that helps.

  38. Avatar
    Mwh August 17, 2019 at 12:02 am #

    Thanks Jason,,

    How can i print all the three metrics for regression together. I do not want to do cross_val_score three times.

    Thanks

    • Avatar
      Jason Brownlee August 17, 2019 at 5:47 am #

      Sorry, I don’t follow. What do you mean exactly?

  39. Avatar
    Mwh August 17, 2019 at 12:58 am #

    Also, what you think about Mean absolute percentage error(MAPE) https://en.wikipedia.org/wiki/Mean_absolute_percentage_error,, as a way to report about accuracy in a regression model. Does not sound academic approach to report as a result since it is easier to interpreter,, mae give large numbers e.g., 150 since y values in my data set usually >1000. Thanks

  40. Avatar
    Anam September 8, 2019 at 4:36 pm #

    Hy Jason,
    An amazing and helpful content…i have a query here that i am applying deep neural network such as LSTM,BILSTM,BIGRU,GRU,RNN, and SimpleRNN and all these models gives same accuracy on the dataset that is

    LSTM = 93%,BILSTM= 93%,BIGRU= 93%,GRU= 93%,RNN= 93%, and SimpleRNN= 93%.

    i want to know that why this happen. kindly can you please guide me about the issue. Thanks in advance.

    • Avatar
      Jason Brownlee September 9, 2019 at 5:13 am #

      Perhaps the problem is easy?
      Perhaps RNNs are not appropriate for your problem?
      Perhaps the models require tuning?
      Perhaps the data requires a different preparation?

  41. Avatar
    Mohit October 9, 2019 at 3:51 am #

    Hi ,Jason

  42. Avatar
    taissir October 17, 2019 at 7:20 pm #

    thanks for you good paper, I want to know how to use yellowbrick module for multiclass classification using a specific model that didn’t exist in the module means our own model
    thanks

  43. Avatar
    PleaseAnswerMe February 6, 2020 at 10:23 pm #

    Let’s assume i have trained two classification models for the same dataset. How will i know which model is the best? how to choose which metric?

    • Avatar
      Jason Brownlee February 7, 2020 at 8:16 am #

      Evaluate on a hold out dataset and choose the one with the best skill and lowest complexity – whatever is most important on your specific project.

  44. Avatar
    ana February 7, 2020 at 1:44 am #

    How can we calculate classification report for different values of k-fold values?

  45. Avatar
    Akash Saha March 7, 2020 at 7:12 pm #

    hello sir, i hve been following your site and it is really informative .Thanks for the effort.

    My question here is we use log_loss for the True labels and the predicted labels as parameters right?
    Here you are using in the kfold method:

    kfold = model_selection.KFold(n_splits=10, random_state=seed)
    model = LogisticRegression()
    scoring = ‘neg_log_loss’
    results = model_selection.cross_val_score(model, X, Y, cv=kfold, scoring=scoring)

    Y is the true label or target and X are the data points.So where are we using the probability values predicted by the model to calculate log_loss values?

    Should not log_loss be calculated on predicted probability values???

    • Avatar
      Jason Brownlee March 8, 2020 at 6:09 am #

      The cross_val_score is fitting models for each cross validation folds, making predictions and scoring them for us.

      • Avatar
        Akash Saha March 9, 2020 at 12:56 am #

        ok Thank you sir!

  46. Avatar
    John April 22, 2020 at 10:23 am #

    Hello Jason,

    Below I have a sample output of a multi-class classification report in a spot check. I have a couple of questions for understanding classification evaluation metrics for the spot checked model.

    1. There is a harmonic balance between precision and recall for class 2 since its about 50%
    2. Take class 1 for example: it is only able to predict it 22% of it correctly out of the possible class 1s (.22 recall)?
    3. Overall the general sentiment is that this model is “bad”, but better than a random guess(33%)?

    Dataset count of each class: ({2: 11293, 0: 8466, 1: 8051})
    Accuracy: 0.41
    Classification report:
    precision recall f1-score support

    0 0.34 0.24 0.28 2110
    1 0.35 0.22 0.27 1996
    2 0.46 0.67 0.54 2846

    accuracy 0.41 6952
    macro avg 0.38 0.38 0.37 6952
    weighted avg 0.39 0.41 0.39 6952

  47. Avatar
    John April 24, 2020 at 1:27 pm #

    Wow, thank you! This not only helped me understand more the metrics that best apply to my classification problem but also I can answer question 3 now. 🙂

  48. Avatar
    Vivek August 12, 2020 at 12:16 am #

    Which is the best evaluation metric for non linear multi out regression?

    • Avatar
      Jason Brownlee August 12, 2020 at 6:10 am #

      The one that best captures the goals of your project.

  49. Avatar
    Mihai August 30, 2020 at 9:08 am #

    I think sklearn did some updates because I can’t run any code from this page

    /usr/local/lib/python3.6/dist-packages/sklearn/model_selection/_split.py:296: FutureWarning: Setting a random_state has no effect since shuffle is False. This will raise an error in 0.24. You should leave random_state to its default (None), or set shuffle=True.
    FutureWarning
    /usr/local/lib/python3.6/dist-packages/sklearn/linear_model/_logistic.py:940: ConvergenceWarning: lbfgs failed to converge (status=1):
    STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.

    Increase the number of iterations (max_iter) or scale the data as shown in:
    https://scikit-learn.org/stable/modules/preprocessing.html
    Please also refer to the documentation for alternative solver options:
    https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
    extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG)
    /usr/local/lib/python3.6/dist-packages/sklearn/linear_model/_logistic.py:940: ConvergenceWarning: lbfgs failed to converge (status=1):
    STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
    _____etc

    TypeError Traceback (most recent call last)
    in ()
    14 scoring = ‘accuracy’
    15 results = model_selection.cross_val_score(model, X, Y, cv=kfold, scoring=scoring)
    —> 16 print(“Accuracy: %.3f (%.3f)”) % (results.mean(), results.std())

    TypeError: unsupported operand type(s) for %: ‘NoneType’ and ‘tuple

    • Avatar
      Jason Brownlee August 31, 2020 at 6:02 am #

      Thanks, I have updated the code examples for changes in the API.

      • Avatar
        Mihai August 31, 2020 at 6:46 am #

        Thank you!. Btw, the cross_val_score link is borken (“A caveat in these recipes is the cross_val_score function”)

  50. Avatar
    Mihai August 31, 2020 at 6:55 am #

    FYI, I run the first piece of code, from 1. Classification Accuracy and i still get some errors:

    Accuracy: %.3f (%.3f)
    —————————————————————————
    TypeError Traceback (most recent call last)
    in ()
    14 scoring = ‘accuracy’
    15 results = model_selection.cross_val_score(model, X, Y, cv=kfold, scoring=scoring)
    —> 16 print(“Accuracy: %.3f (%.3f)”) % (results.mean(), results.std())

    TypeError: unsupported operand type(s) for %: ‘NoneType’ and ‘tuple’

    • Avatar
      Jason Brownlee August 31, 2020 at 7:37 am #

      Ouch, sorry about that! Fixed as well.

      • Avatar
        Mihai August 31, 2020 at 9:03 am #

        No worries, glad that I can help!

  51. Avatar
    PRANITA K September 2, 2020 at 2:41 pm #

    Hi how to get prediction accuracy of autoencoders???

    • Avatar
      Jason Brownlee September 3, 2020 at 5:59 am #

      Generally we don’t use accuracy for autoencoders. We would use reconstruction error.

  52. Avatar
    Anugrah April 3, 2021 at 5:17 pm #

    Dear Jason,

    I do have a multi class classification dataset. I made a simple dense network with few layers and trained on it with the given data set with softmax layer and categorical cross entropy loss.

    The model gave good results when printed the confusion matrix and Kappa score (0.92) for test data.

    But I am not sure if I have used the correct metric while training the model. I used metric =[“accuracy”] while compiling the model. Since it is a multi class data set with imbalanced class, should I not be using Kappa score insted of accuracy, so that I can see the performance of the model in terms of Kappa score insted of accuracy in each iteration. Is there any way for me to implement this in keras?

  53. Avatar
    Erastus Musyoka September 8, 2021 at 12:40 am #

    Hi Jason;
    I am working on a linear regression ANN model for prediction. I used MSE and MAE for metrics but my peer reviewer has recommended use of U-Factors in evaluation of the model performance…How can go about it?

    Thanks in advance

    • Avatar
      Adrian Tam September 8, 2021 at 2:04 am #

      What is U-factor?

  54. Avatar
    Erastus Musyoka September 8, 2021 at 11:38 am #

    U_quality and U_Accuracy

    • Avatar
      Adrian Tam September 9, 2021 at 4:28 am #

      Sorry, not heard of these.

  55. Avatar
    abdulkhalik June 13, 2022 at 12:14 am #

    Hi prof.Brownlee
    i am working on multiple linear regression how can i obtain r2 for each row.
    thank you.
