Once you choose a machine learning algorithm for your classification problem, you need to report the performance of the model to stakeholders.

This is important so that you can set the expectations for the model on new data.

A common mistake is to report the classification accuracy of the model alone.

In this post, you will discover how to calculate confidence intervals on the performance of your model to provide a calibrated and robust indication of your model’s skill.

Let’s get started.

## Classification Accuracy

The skill of a classification machine learning algorithm is often reported as classification accuracy.

This is the percentage of the correct predictions from all predictions made. It is calculated as follows:

```
classification accuracy = correct predictions / total predictions * 100.0
```

A classifier may have an accuracy such as 60% or 90%; how good this is has meaning only in the context of the problem domain.
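As a quick sketch, accuracy can be computed directly from a list of predictions (the labels below are hypothetical):

```python
# Hypothetical ground-truth labels and model predictions.
y_true = [0, 1, 1, 0, 1, 0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1, 1, 1, 1, 0, 1]

# Count predictions that match the ground truth.
correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
accuracy = correct / len(y_true) * 100.0
print('Classification Accuracy: %.1f%%' % accuracy)  # 80.0%
```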

## Classification Error

When talking about a model to stakeholders, it may be more relevant to talk about classification error or just error.

This is because stakeholders often assume models perform well; what they may really want to know is how prone a model is to making mistakes.

You can calculate classification error as the ratio of incorrect predictions to the number of predictions made, expressed as a value between 0 and 1.

```
classification error = incorrect predictions / total predictions
```

A classifier may have an error of 0.25 or 0.02.

This value too can be converted to a percentage by multiplying it by 100. For example, 0.02 would become (0.02 * 100.0) or 2% classification error.
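As a sketch, classification error can be computed directly from a list of predictions (the labels below are hypothetical):

```python
# Hypothetical ground-truth labels and model predictions.
y_true = [0, 1, 1, 0, 1, 0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1, 1, 1, 1, 0, 1]

# Count predictions that do not match the ground truth.
incorrect = sum(1 for t, p in zip(y_true, y_pred) if t != p)
error = incorrect / len(y_true)
print('Classification Error: %.2f (%.0f%%)' % (error, error * 100.0))  # 0.20 (20%)
```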

## Validation Dataset

What dataset do you use to calculate model skill?

It is a good practice to hold out a validation dataset from the modeling process.

This means a sample of the available data is randomly selected and removed from the available data, such that it is not used during model selection or configuration.

After the final model has been prepared on the training data, it can be used to make predictions on the validation dataset. These predictions are used to calculate a classification accuracy or classification error.
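A minimal sketch of holding out a validation set, using only the standard library and stand-in data (the 30% split size is just an illustrative choice):

```python
import random

random.seed(1)
data = list(range(100))  # stand-in for 100 labelled examples
random.shuffle(data)

# Hold back 30% as a validation set; it is not used during
# model selection or configuration.
n_val = int(len(data) * 0.30)
validation, training = data[:n_val], data[n_val:]
print(len(training), len(validation))  # 70 30
```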


## Confidence Interval

Rather than presenting just a single error score, a confidence interval can be calculated and presented as part of the model skill.

A confidence interval is comprised of two things:

- **Range**. This is the lower and upper limit on the skill that can be expected of the model.
- **Probability**. This is the probability that the skill of the model will fall within the range.

In general, the confidence interval for classification error can be calculated as follows:

```
error +/- const * sqrt( (error * (1 - error)) / n)
```

Where error is the classification error, const is a constant value that defines the chosen probability, sqrt is the square root function, and n is the number of observations (rows) used to evaluate the model. Technically, this is the normal approximation interval for a binomial proportion (not to be confused with the Wilson score interval, which uses a different formula).

The values for const are provided from statistics, and common values used are:

- 1.64 (90%)
- 1.96 (95%)
- 2.33 (98%)
- 2.58 (99%)
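These constants are two-sided critical values of the standard normal distribution; if you need a different probability, they can be computed with the standard library (a minimal sketch):

```python
from statistics import NormalDist

def z_for_confidence(confidence):
    # Two-sided critical value from the standard normal,
    # e.g. 0.95 -> ~1.96.
    return NormalDist().inv_cdf(1 - (1 - confidence) / 2)

for conf in (0.90, 0.95, 0.98, 0.99):
    print('%.0f%%: %.2f' % (conf * 100, z_for_confidence(conf)))
```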

Use of these confidence intervals makes some assumptions that you need to ensure you can meet. They are:

- Observations in the validation data set were drawn from the domain independently (e.g. they are independent and identically distributed).
- At least 30 observations were used to evaluate the model.

This is based on sampling theory: the process of calculating the error of a classifier can be modeled as a binomial distribution, with sufficient observations the binomial distribution can be approximated by a normal distribution, and by the central limit theorem, the more observations we use to evaluate the model, the closer our estimate will get to the true, but unknown, model skill.

## Confidence Interval Example

Consider a model with an error of 0.02 (error = 0.02) on a validation dataset with 50 examples (n = 50).

We can calculate the 95% confidence interval (const = 1.96) as follows:

```
error +/- const * sqrt( (error * (1 - error)) / n)
0.02 +/- 1.96 * sqrt( (0.02 * (1 - 0.02)) / 50)
0.02 +/- 1.96 * sqrt(0.0196 / 50)
0.02 +/- 1.96 * 0.0198
0.02 +/- 0.0388
```

Or, stated another way:

There is a 95% likelihood that the confidence interval [0.0, 0.0588] covers the true classification error of the model on unseen data.

Notice that the confidence intervals on the classification error must be clipped to the values 0.0 and 1.0. It is impossible to have a negative error (e.g. less than 0.0) or an error more than 1.0.
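The worked example above, including the clipping, can be reproduced in a few lines (a minimal sketch of the normal approximation interval; the function name is my own):

```python
from math import sqrt

def error_confidence_interval(error, n, const=1.96):
    # Normal approximation interval for a classification error,
    # clipped to the valid range [0.0, 1.0].
    interval = const * sqrt((error * (1 - error)) / n)
    lower = max(0.0, error - interval)
    upper = min(1.0, error + interval)
    return lower, upper

lower, upper = error_confidence_interval(0.02, 50)
print('95%% interval: [%.4f, %.4f]' % (lower, upper))  # [0.0000, 0.0588]
```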

## Further Reading

- Chapter 5, Machine Learning, 1997
- Binomial proportion confidence interval on Wikipedia
- Confidence Interval on Wikipedia

## Summary

In this post, you discovered how to calculate confidence intervals for your classifier.

Specifically, you learned:

- How to calculate classification accuracy and classification error when reporting results.
- What dataset to use when calculating model skill that is to be reported.
- How to calculate a lower and upper bound on classification error for a chosen level of likelihood.

Do you have any questions about classifier confidence intervals?

Ask your questions in the comments below.

How does this (confidence interval) differ from the F1 score, which is widely used and, IMHO, easier to comprehend, since its one score covers both precision and recall?

The F1 is a skill measure for the model. It could be accuracy or anything else.

In this post, we are talking about the confidence (uncertainty) on the calculated skill score.

Hi Jason,

Thank you for the nice post. This error confidence interval that you report corresponds to binary classification only. How about multi-class classification?

Regards

Really great question. I expect you would use logloss or AUC and report confidence on that.

I see,

But then the expression of the confidence interval (for AUC or any other metric) would be different I presume since the process wouldn’t be described using the binomial distribution.

For multi-class classification, wouldn’t the distribution be a multinomial distribution? And in this case the expression for the error confidence interval would change I presume.

Regards

Elie

I see, yes you are correct. I would recommend an empirical approach to summarizing the distribution using the bootstrap method (a post is scheduled).

Hi Jason,

Really good post. But I have a question. Does the classification error differ if we use a different skill – for instance F1-score – for our model?

Thanks

Hi Jonad,

Different measures will evaluate skill in different ways. They will provide different perspectives on the same underlying model error.

Does that make sense?

Yes, I was thinking that the classification error formula ( incorrect predictions / total predictions) might differ depending on the evaluation metrics. Now I understand it better.

Thanks

Great post!

How could I use confidence intervals and cross-validation together?

It’s a tough one, we are generally interested in the variance of model skill during model selection and during the presentation of the final model.

Often standard deviation of CV score is used to capture model skill variance, perhaps that is generally sufficient and we can leave confidence intervals for presenting the final model or specific predictions?

I’m open to better ideas.

Ok, Thanks!

The last question: when I’m using k-fold cv, the value of ‘n’ is equal to the number of all observations or all observations – k?

Hi Jason

Is there R code for calculating the CI and graphing them?

Thanks

I bet there is, I don’t have it on hand, sorry.

The error is just the reverse of the accuracy, wouldn’t that be a simpler statement to make?

This leads to the fundamental problem that accuracy or classification error itself is often a mediocre-to-useless metric, because datasets are usually imbalanced. And hence the confidence interval on that error is just as useless.

I found this post for a different reason, as I wanted to find out if anyone else does what I do, namely provide metrics grouped by class probability. What is the precision if the model has 0.9 class probability vs 0.6, for example? That can be very useful information for end users because the metric will often vary greatly based on class probability.

Yes, the classification error is the inverse of the classification accuracy.

You can use a different measure to overcome imbalance:

https://machinelearningmastery.com/classification-accuracy-is-not-enough-more-performance-measures-you-can-use/

Thomas, I think I’ve done what you described. I wrote a function to calculate a handful of different performance metrics at different probability cutoffs and stored them in a data frame. This helped me choose a probability cutoff that balanced the needs of the business. I can share the code if it’s what you’re looking for.

Hi Jason,

Nice post. When calculating the confidence interval for error, AUC or other metrics, the standard error of the metric is needed. How should I calculate the standard error?

Great question, here is the equation:

https://en.wikipedia.org/wiki/Standard_error

Thanks for replying. Does this mean I need to get multiple errors by running multiple times (bootstrap or cross-validation) to calculate the standard error?

Yes, if you are looking to calculate the standard error of the bootstrap result distribution.

Hi Jason,

I am trying to group my customers. Say GAP HK, GAP US should be under the group customer GAP.

Few of the customers are already grouped. Say GAP HK is grouped under GAP but GAP US is not.

I am using random forest classifier. I used already grouped customer name as training data. Group customer code is the label that I am trying to predict.

The classifier is assigning labels as expected. The problem I am facing is that the classifier is also assigning labels or group customer codes to customers even when the customer name does not match the training data closely. It is doing the best possible match. This is a problem for me because I need to manually ungroup these customers. Can you suggest how to overcome this problem? Is it possible to know the classifier’s probability of being correct for each predicted label? If yes, then I can ignore the ones with low probability.

Thank you in advance for advice.

Perhaps you can predict probabilities instead and only accept the high probability predictions?

No model is perfect, we must expect some error.

https://machinelearningmastery.com/faq/single-faq/why-cant-i-get-100-accuracy-or-zero-error-with-my-model

Nevertheless, these ideas may help you lift the skill of your model:

http://machinelearningmastery.com/machine-learning-performance-improvement-cheat-sheet/

Hi Jason,

I am not sure if anyone else brought this up, but I’ve found one issue here. The confidence interval measure you suggested is not the “Wilson score interval” according to the Wikipedia page (which is cited in that link). It’s actually the “normal approximation interval”, which is described above the Wilson score paragraph. Correct me if I am wrong.

Thanks

-Anish

Thanks Anish.

Hi Jason,

I’m interested on the relation of Cross Validation and this approach.

With 150 examples I decide to use a 100 repeated 5-fold Cross Validation to understand the behavior of my classifier. At this point I have 100×5 results and I can use the mean and std dev of the error rates to estimate the variance of the model skills:

mean(errorRate) +/- 1.96*(std(errorRate))

I could estimate the Confidence Interval of the True Error (that I would obtain on the unseen data) using the average Error rate:

mean(errorRate) +/- const * sqrt( (mean(errorRate) * (1 - mean(errorRate))) / n)

Two questions:

1. Do you think this approach is correct?

2. Is it correct to set n=150 in the second equation, or should I use the average number of observations used as the test set in each fold of CV?

You have 5 results from a 5-fold CV. The results are somewhat dependent, they are not iid.

You can use the Gaussian confidence interval, with a grain of salt. You could also use the bootstrap.

I explain more here:

https://machinelearningmastery.com/confidence-intervals-for-machine-learning/
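For example, a bootstrap percentile interval can be sketched as follows (the per-example outcomes are hypothetical, and the percentile method is just one of several bootstrap interval types):

```python
import random

random.seed(1)
# Hypothetical per-example outcomes on a validation set:
# 1 = misclassified, 0 = correct (10% error on 100 examples).
outcomes = [1] * 10 + [0] * 90

# Resample with replacement and record the error of each resample.
n_boot = 1000
errors = []
for _ in range(n_boot):
    sample = [random.choice(outcomes) for _ in range(len(outcomes))]
    errors.append(sum(sample) / len(sample))

errors.sort()
# 95% bootstrap percentile interval.
lower = errors[int(0.025 * n_boot)]
upper = errors[int(0.975 * n_boot)]
print('95%% bootstrap interval: [%.3f, %.3f]' % (lower, upper))
```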

Hi Jason, thanks for the great posts on confidence intervals/ bootstraps for machine learning.

Suppose you use

A) 5-fold CV

B) 30-fold CV

for model evaluation. You pick the final model and train it on all the data at hand.

What are the options one has for reporting on final model skill with a range for uncertainty in each case?

Should one have still held out a number of datapoints for validation+binomial confidence interval?

Is it too late to use the bootstrap confidence intervals as the final model was trained?

Thanks

Not sure I follow your question?

Pick a final model and use a preferred method to report expected performance. It is unrelated to how you chose that model.

Thanks Jason. I found your other post https://machinelearningmastery.com/difference-test-validation-datasets/ very helpful.

Can I confirm that the above procedure of reporting classifier performance with confidence intervals is relevant for the final trained model? If that is so, it seems that the validation dataset mentioned should be called test set to align with the definitions of the linked post?

Yes.

Hi Jason,

Thank you for the post!

In your example you use accuracy and error rate and calculate a confidence interval.

Can one replace “error rate” with, say, precision, recall or f1? Why and why not?

For example, say we have a sample size=50, f1=0.02

Does that mean …

there is a 95% likelihood that the confidence interval [0.0, 0.0588] covers the true F1 of the model on unseen data?

Thanks!

Perhaps for some scores. The interval in this example is specific to a ratio (a proportion of predictions). I believe you can use it for other ratios like precision, recall and F1.

Hi Jason

Thank you for your post.

How do you get the standard error of the AUC curve in Python?

Not sure I follow. Standard error refers to a statistical quantity on a distribution, not sure how you would calculate it for a curve.

Hello Jason,

I was wondering if I can compute confidence interval for Recall and Precision. If yes can you explain how can I do this?

Thank you so much,

best regards

Lorenzo

Yes, I expect the bootstrap would be a good place to start.