How to Evaluate Gradient Boosting Models with XGBoost in Python

The goal of developing a predictive model is to build a model that is accurate on unseen data.

This can be achieved using statistical techniques where the training dataset is carefully used to estimate the performance of the model on new and unseen data.

In this tutorial you will discover how you can evaluate the performance of your gradient boosting models with XGBoost in Python.

After completing this tutorial, you will know:

  • How to evaluate the performance of your XGBoost models using train and test datasets.
  • How to evaluate the performance of your XGBoost models using k-fold cross validation.

Let’s get started.

  • Update Jan/2017: Updated to reflect changes in scikit-learn API version 0.18.1.
How to Evaluate Gradient Boosting Models with XGBoost in Python. Photo by Timitrius, some rights reserved.

Evaluate XGBoost Models With Train and Test Sets

The simplest method that we can use to evaluate the performance of a machine learning algorithm is to use different training and testing datasets.

We can take our original dataset and split it into two parts. Train the algorithm on the first part, then make predictions on the second part and evaluate the predictions against the expected results.

The size of the split can depend on the size and specifics of your dataset, although it is common to use 67% of the data for training and the remaining 33% for testing.

This algorithm evaluation technique is fast. It is ideal for large datasets (millions of records) where there is strong evidence that both splits of the data are representative of the underlying problem. Because of this speed, the approach is also useful when the algorithm you are investigating is slow to train.

A downside of this technique is that it can have a high variance. This means that differences in the training and test dataset can result in meaningful differences in the estimate of model accuracy.

We can split the dataset into a train and test set using the train_test_split() function from the scikit-learn library. For example, we can split the dataset into a 67% and 33% split for training and test sets as follows:
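A minimal sketch of that call is shown below; it assumes X and Y already hold the input features and class labels as NumPy arrays, and the seed of 7 is just an arbitrary value for reproducibility.

from sklearn.model_selection import train_test_split

# hold back 33% of the rows for testing (X and Y are assumed to be defined)
seed = 7
test_size = 0.33
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=test_size, random_state=seed)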

The full code listing is provided below using the Pima Indians onset of diabetes dataset, assumed to be in the current working directory. An XGBoost model with default configuration is fit on the training dataset and evaluated on the test dataset.
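One way the complete listing could look is sketched below; the file name pima-indians-diabetes.csv is an assumption (adjust it to match your local copy), and the xgboost and scikit-learn packages are assumed to be installed.

# sketch: evaluate an XGBoost model using a train/test split
from numpy import loadtxt
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# load the data (file name assumed; 8 input columns and 1 class column)
dataset = loadtxt('pima-indians-diabetes.csv', delimiter=",")
X = dataset[:, 0:8]
Y = dataset[:, 8]
# split the data into 67% train and 33% test
seed = 7
test_size = 0.33
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=test_size, random_state=seed)
# fit a default-configuration XGBoost model on the training set
model = XGBClassifier()
model.fit(X_train, y_train)
# evaluate predictions on the held-back test set
predictions = model.predict(X_test)
accuracy = accuracy_score(y_test, predictions)
print("Accuracy: %.2f%%" % (accuracy * 100.0))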

Running this example summarizes the performance of the model on the test set.

Evaluate XGBoost Models With k-Fold Cross Validation

Cross validation is an approach that you can use to estimate the performance of a machine learning algorithm with less variance than a single train-test set split.

It works by splitting the dataset into k-parts (e.g. k=5 or k=10). Each split of the data is called a fold. The algorithm is trained on k-1 folds with one held back and tested on the held back fold. This is repeated so that each fold of the dataset is given a chance to be the held back test set.

After running cross validation you end up with k different performance scores that you can summarize using a mean and a standard deviation.

The result is a more reliable estimate of the performance of the algorithm on new data. It is more accurate because the algorithm is trained and evaluated multiple times on different subsets of the data.

The choice of k must allow the size of each test partition to be large enough to be a reasonable sample of the problem, whilst allowing enough repetitions of the train-test evaluation of the algorithm to provide a fair estimate of the algorithm's performance on unseen data. For modest sized datasets in the thousands or tens of thousands of observations, k values of 3, 5 and 10 are common.

We can use the k-fold cross validation support provided in scikit-learn. First we create the KFold object, specifying the number of folds. We can then apply this scheme to a specific dataset. The cross_val_score() function from scikit-learn evaluates a model using the cross validation scheme and returns an array of scores, one for each model trained on each fold.
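A sketch of this scheme is shown below; the choice of 10 folds and the seed of 7 are illustrative, and model, X and Y are assumed to be defined as in the earlier example.

from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score

# define a 10-fold cross validation scheme (shuffle so folds are randomised)
kfold = KFold(n_splits=10, shuffle=True, random_state=7)
# returns an array of 10 accuracy scores, one per held-back fold
results = cross_val_score(model, X, Y, cv=kfold)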

The full code listing for evaluating an XGBoost model with k-fold cross validation is provided below for completeness.
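The sketch below shows one way that listing could look, under the same assumptions about the data file as before.

# sketch: evaluate an XGBoost model using k-fold cross validation
from numpy import loadtxt
from xgboost import XGBClassifier
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score

# load the data (file name assumed)
dataset = loadtxt('pima-indians-diabetes.csv', delimiter=",")
X = dataset[:, 0:8]
Y = dataset[:, 8]
# evaluate a default-configuration model with 10-fold cross validation
model = XGBClassifier()
kfold = KFold(n_splits=10, shuffle=True, random_state=7)
results = cross_val_score(model, X, Y, cv=kfold)
print("Accuracy: %.2f%% (%.2f%%)" % (results.mean() * 100.0, results.std() * 100.0))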

Running this example summarizes the performance of the default model configuration on the dataset, including both the mean and standard deviation of the classification accuracy.

If you have many classes for a classification type predictive modeling problem or the classes are imbalanced (there are a lot more instances for one class than another), it can be a good idea to create stratified folds when performing cross validation.

This has the effect of enforcing the same distribution of classes in each fold as in the whole training dataset when performing the cross validation evaluation. The scikit-learn library provides this capability in the StratifiedKFold class.

Below is the same example modified to use stratified cross validation to evaluate an XGBoost model.
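A sketch of the stratified version is given below; only the cross validation class changes, and the same assumptions about the data file and seed apply.

# sketch: evaluate an XGBoost model using stratified k-fold cross validation
from numpy import loadtxt
from xgboost import XGBClassifier
from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import cross_val_score

# load the data (file name assumed)
dataset = loadtxt('pima-indians-diabetes.csv', delimiter=",")
X = dataset[:, 0:8]
Y = dataset[:, 8]
# stratified folds preserve the class ratio of Y in each fold
model = XGBClassifier()
kfold = StratifiedKFold(n_splits=10, shuffle=True, random_state=7)
results = cross_val_score(model, X, Y, cv=kfold)
print("Accuracy: %.2f%% (%.2f%%)" % (results.mean() * 100.0, results.std() * 100.0))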

Running this example reports the mean and standard deviation of the classification accuracy, as before.

What Techniques to Use When

  • Generally, k-fold cross validation is the gold standard for evaluating the performance of a machine learning algorithm on unseen data, with k set to 3, 5, or 10.
  • Use stratified cross validation to enforce class distributions when there are a large number of classes or an imbalance in instances for each class.
  • Using a train/test split is good for speed when using a slow algorithm and produces performance estimates with lower bias when using large datasets.

The best advice is to experiment and find a technique for your problem that is fast and produces reasonable estimates of performance that you can use to make decisions.

If in doubt, use 10-fold cross validation for regression problems and stratified 10-fold cross validation on classification problems.

Summary

In this tutorial, you discovered how you can evaluate your XGBoost models by estimating how well they are likely to perform on unseen data.

Specifically, you learned:

  • How to split your dataset into train and test subsets for training and evaluating the performance of your model.
  • How you can create k XGBoost models on different subsets of the dataset and average the scores to get a more robust estimate of model performance.
  • Heuristics to help choose between train-test split and k-fold cross validation for your problem.

Do you have any questions on how to evaluate the performance of XGBoost models or about this post? Ask your questions in the comments below and I will do my best to answer.

2 Responses to How to Evaluate Gradient Boosting Models with XGBoost in Python

  1. Agnes January 27, 2017 at 9:07 pm #

    Hi Jason,
    thank you for this article. You didn’t mention the Leave-One-Out cross-validator method.
    Is it the same logic as k-fold cross validation (except that the size of the test set is 1)?
    Would you recommend using the Leave-One-Out cross-validator or k-fold cross validation for a small dataset (approximately 2000 rows)?
    Regards,
    Agnes

    • Jason Brownlee January 28, 2017 at 7:36 am #

      Hi Agnes,

      Yes, it is like k-fold cross validation where each fold contains a single pattern, repeated for every pattern in the dataset.

      From my reading, you are better off using k-fold cross validation.
