
A Gentle Introduction to XGBoost Loss Functions

XGBoost is a powerful and popular implementation of the gradient boosting ensemble algorithm.

An important aspect in configuring XGBoost models is the choice of loss function that is minimized during the training of the model.

The loss function must be matched to the predictive modeling problem type, in the same way that we must choose an appropriate loss function based on the problem type when working with deep learning neural networks.

In this tutorial, you will discover how to configure loss functions for XGBoost ensemble models.

After completing this tutorial, you will know:

  • Specifying the loss function used when training XGBoost ensembles is a critical step, much like with neural networks.
  • How to configure XGBoost loss functions for binary and multi-class classification tasks.
  • How to configure XGBoost loss functions for regression predictive modeling tasks.

Let’s get started.

Photo by Kevin Rheese, some rights reserved.

Tutorial Overview

This tutorial is divided into three parts; they are:

  1. XGBoost and Loss Functions
  2. XGBoost Loss for Classification
  3. XGBoost Loss for Regression

XGBoost and Loss Functions

Extreme Gradient Boosting, or XGBoost for short, is an efficient open-source implementation of the gradient boosting algorithm. As such, XGBoost is an algorithm, an open-source project, and a Python library.

It was initially developed by Tianqi Chen and was described by Chen and Carlos Guestrin in their 2016 paper titled “XGBoost: A Scalable Tree Boosting System.”

It is designed to be both computationally efficient (e.g. fast to execute) and highly effective, perhaps more effective than other open-source implementations.

XGBoost supports a range of different predictive modeling problems, most notably classification and regression.

XGBoost is trained by minimizing the loss of an objective function against a dataset. As such, the choice of loss function is a critical hyperparameter and is tied directly to the type of problem being solved, much like with deep learning neural networks.

The implementation allows the objective function to be specified via the “objective” hyperparameter, and sensible defaults are used that work for most cases.

Nevertheless, there remains some confusion among beginners as to which loss function to use when training XGBoost models.

We will take a closer look at how to configure the loss function for XGBoost in this tutorial.

Before we get started, let’s get set up.

XGBoost can be installed as a standalone library and an XGBoost model can be developed using the scikit-learn API.

The first step is to install the XGBoost library if it is not already installed. This can be achieved using the pip Python package manager on most platforms; for example:
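
    pip install xgboost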

You can then confirm that the XGBoost library was installed correctly and can be used by running the following script.
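
    # check xgboost version
    import xgboost
    # display the version string of the installed library
    print(xgboost.__version__)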

Running the script will print the version of the XGBoost library you have installed.

Your version should be the same as or higher than the version used when this tutorial was written. If not, you must upgrade your version of the XGBoost library.

It is possible that you may have problems with the latest version of the library. It is not your fault.

Sometimes, the most recent version of the library imposes additional requirements or may be less stable.

If you do have errors when trying to run the above script, I recommend downgrading to version 1.0.1 (or lower). This can be achieved by specifying the version to install to the pip command, as follows:
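
    pip install xgboost==1.0.1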

If you see a warning message when installing or importing the library, you can safely ignore it for now.

If you require specific instructions for your development environment, see the tutorial:

The XGBoost library has its own custom API, although we will use it via the scikit-learn wrapper classes: XGBRegressor and XGBClassifier. This will allow us to use the full suite of tools from the scikit-learn machine learning library to prepare data and evaluate models.

Both models operate the same way and take the same arguments that influence how the decision trees are created and added to the ensemble.

For more on how to use the XGBoost API with scikit-learn, see the tutorial:

Next, let’s take a closer look at how to configure the loss function for XGBoost on classification problems.

XGBoost Loss for Classification

Classification tasks involve predicting a label or probability for each possible class, given an input sample.

There are two main types of classification tasks with mutually exclusive labels: binary classification, which has two class labels, and multi-class classification, which has more than two class labels.

  • Binary Classification: Classification task with two class labels.
  • Multi-Class Classification: Classification task with more than two class labels.

For more on the different types of classification tasks, see the tutorial:

XGBoost provides loss functions for each of these problem types.

It is typical in machine learning to train a model to predict the probability of class membership for classification tasks and, if the task requires crisp class labels, to post-process the predicted probabilities (e.g. using argmax).

This approach is used when training deep learning neural networks for classification, and is also recommended when using XGBoost for classification.

The loss function used for predicting probabilities for binary classification problems is “binary:logistic” and the loss function for predicting class probabilities for multi-class problems is “multi:softprob”.

  • “binary:logistic”: XGBoost loss function for binary classification.
  • “multi:softprob”: XGBoost loss function for multi-class classification.

These string values can be specified via the “objective” hyperparameter when configuring your XGBClassifier model.

For example, for binary classification:
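
    # define the model for binary classification
    model = XGBClassifier(objective='binary:logistic')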

And, for multi-class classification:
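
    # define the model for multi-class classification
    model = XGBClassifier(objective='multi:softprob')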

Importantly, if you do not specify the “objective” hyperparameter, the XGBClassifier will automatically choose one of these loss functions based on the data provided during training.

We can make this concrete with a worked example.

The example below creates a synthetic binary classification dataset, fits an XGBClassifier on the dataset with default hyperparameters, then prints the model objective configuration.
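
A minimal sketch of such an example is listed below; the make_classification() arguments (1,000 rows, 20 input features) are illustrative choices.

    # example of fitting an xgboost model on a binary classification dataset with default hyperparameters
    from sklearn.datasets import make_classification
    from xgboost import XGBClassifier
    # define a synthetic binary classification dataset (the sizes here are illustrative)
    X, y = make_classification(n_samples=1000, n_features=20, n_classes=2, random_state=1)
    # define the model with default hyperparameters
    model = XGBClassifier()
    # fit the model on the dataset
    model.fit(X, y)
    # report the objective (loss function) configured on the model
    print(model.objective)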

Running the example fits the model on the dataset and prints the loss function configuration.

We can see that the model automatically chose a loss function for binary classification.

Alternately, we can specify the objective and fit the model, confirming the loss function was used.
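
A sketch of this variation is listed below; only the objective argument changes from the previous example.

    # example of fitting an xgboost model with an explicit binary classification objective
    from sklearn.datasets import make_classification
    from xgboost import XGBClassifier
    # define a synthetic binary classification dataset (the sizes here are illustrative)
    X, y = make_classification(n_samples=1000, n_features=20, n_classes=2, random_state=1)
    # define the model and specify the binary logistic loss
    model = XGBClassifier(objective='binary:logistic')
    # fit the model on the dataset
    model.fit(X, y)
    # report the objective (loss function) configured on the model
    print(model.objective)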

Running the example fits the model on the dataset and prints the loss function configuration.

We can see that the model used the specified loss function for binary classification.

Let’s repeat this example on a dataset with more than two classes. In this case, three classes.

The complete example is listed below.
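
A minimal sketch is listed below; the make_classification() arguments that create three classes are illustrative choices, and reading the objective from the trained booster’s saved configuration is one way to see the loss function the model actually used when the objective is chosen automatically.

    # example of fitting an xgboost model on a multi-class classification dataset with default hyperparameters
    import json
    from sklearn.datasets import make_classification
    from xgboost import XGBClassifier
    # define a synthetic dataset with three classes (the sizes here are illustrative)
    X, y = make_classification(n_samples=1000, n_features=20, n_informative=5, n_redundant=5, n_classes=3, random_state=1)
    # define the model with default hyperparameters
    model = XGBClassifier()
    # fit the model on the dataset
    model.fit(X, y)
    # report the objective (loss function) used by the trained booster
    config = json.loads(model.get_booster().save_config())
    print(config['learner']['objective']['name'])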

Running the example fits the model on the dataset and prints the loss function configuration.

We can see the model automatically chose a loss function for multi-class classification.

Alternately, we can manually specify the loss function and confirm it was used to train the model.
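
Again, a minimal sketch, this time specifying the objective explicitly:

    # example of fitting an xgboost model with an explicit multi-class classification objective
    from sklearn.datasets import make_classification
    from xgboost import XGBClassifier
    # define a synthetic dataset with three classes (the sizes here are illustrative)
    X, y = make_classification(n_samples=1000, n_features=20, n_informative=5, n_redundant=5, n_classes=3, random_state=1)
    # define the model and specify the multi-class softprob loss
    model = XGBClassifier(objective='multi:softprob')
    # fit the model on the dataset
    model.fit(X, y)
    # report the objective (loss function) configured on the model
    print(model.objective)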

Running the example fits the model on the dataset and prints the loss function configuration.

We can see that the model used the specified loss function for multi-class classification.

Finally, there are other loss functions you can use for classification, including: “binary:logitraw” and “binary:hinge” for binary classification and “multi:softmax” for multi-class classification.

You can see a full list here:

Next, let’s take a look at XGBoost loss functions for regression.

XGBoost Loss for Regression

Regression refers to predictive modeling problems where a numerical value is predicted given an input sample.

Although predicting a probability sounds like a regression problem (i.e. a probability is a numerical value), it is generally not considered a regression type predictive modeling problem.

The XGBoost objective function used when predicting numerical values is the “reg:squarederror” loss function.

  • “reg:squarederror”: Loss function for regression predictive modeling problems.

This string value can be specified via the “objective” hyperparameter when configuring your XGBRegressor model.

For example:
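
    # define the model for regression
    model = XGBRegressor(objective='reg:squarederror')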

Importantly, if you do not specify the “objective” hyperparameter, the XGBRegressor will automatically choose this objective function for you.

We can make this concrete with a worked example.

The example below creates a synthetic regression dataset, fits an XGBRegressor on the dataset, then prints the model objective configuration.
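
A minimal sketch of such an example is listed below; the make_regression() arguments are illustrative choices.

    # example of fitting an xgboost model on a regression dataset with default hyperparameters
    from sklearn.datasets import make_regression
    from xgboost import XGBRegressor
    # define a synthetic regression dataset (the sizes here are illustrative)
    X, y = make_regression(n_samples=1000, n_features=20, noise=0.1, random_state=1)
    # define the model with default hyperparameters
    model = XGBRegressor()
    # fit the model on the dataset
    model.fit(X, y)
    # report the objective (loss function) configured on the model
    print(model.objective)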

Running the example fits the model on the dataset and prints the loss function configuration.

We can see that the model automatically chose a loss function for regression.

Alternately, we can specify the objective and fit the model, confirming the loss function was used.
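
A sketch with the objective specified explicitly:

    # example of fitting an xgboost model with an explicit regression objective
    from sklearn.datasets import make_regression
    from xgboost import XGBRegressor
    # define a synthetic regression dataset (the sizes here are illustrative)
    X, y = make_regression(n_samples=1000, n_features=20, noise=0.1, random_state=1)
    # define the model and specify the squared error loss
    model = XGBRegressor(objective='reg:squarederror')
    # fit the model on the dataset
    model.fit(X, y)
    # report the objective (loss function) configured on the model
    print(model.objective)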

Running the example fits the model on the dataset and prints the loss function configuration.

We can see that the model used the specified loss function for regression.

Finally, there are other loss functions you can use for regression, including: “reg:squaredlogerror”, “reg:logistic”, “reg:pseudohubererror”, “reg:gamma”, and “reg:tweedie”.

You can see a full list here:

Further Reading

This section provides more resources on the topic if you are looking to go deeper.

Tutorials

APIs

Summary

In this tutorial, you discovered how to configure loss functions for XGBoost ensemble models.

Specifically, you learned:

  • Specifying the loss function used when training XGBoost ensembles is a critical step, much like with neural networks.
  • How to configure XGBoost loss functions for binary and multi-class classification tasks.
  • How to configure XGBoost loss functions for regression predictive modeling tasks.

Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.
