Data Preparation for Gradient Boosting with XGBoost in Python

XGBoost is a popular implementation of Gradient Boosting because of its speed and performance.

Internally, XGBoost represents every problem as a regression predictive modeling problem that only takes numerical values as input. If your data is in a different form, it must first be prepared into this expected format.

In this post, you will discover how to prepare your data for use with gradient boosting and the XGBoost library in Python.

After reading this post you will know:

  • How to encode string output variables for classification.
  • How to prepare categorical input variables using one hot encoding.
  • How to automatically handle missing data with XGBoost.

Let’s get started.

  • Update Sept/2016: I updated a few small typos in the impute example.
  • Update Jan/2017: Updated to reflect changes in scikit-learn API version 0.18.1.
  • Update Jan/2017: Updated breast cancer example to convert input data to strings.
Data Preparation for Gradient Boosting with XGBoost in Python. Photo by Ed Dunens, some rights reserved.


Label Encode String Class Values

The iris flowers classification problem is an example of a problem that has a string class value.

This is a prediction problem where given measurements of iris flowers in centimeters, the task is to predict to which species a given flower belongs.

Below is a sample of the raw dataset. You can learn more about this dataset and download the raw data in CSV format from the UCI Machine Learning Repository.
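
The first five rows of the raw CSV look like this (all of class Iris-setosa; the full file has 150 rows covering three species):

5.1,3.5,1.4,0.2,Iris-setosa
4.9,3.0,1.4,0.2,Iris-setosa
4.7,3.2,1.3,0.2,Iris-setosa
4.6,3.1,1.5,0.2,Iris-setosa
5.0,3.6,1.4,0.2,Iris-setosa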

XGBoost cannot model this problem as-is because it requires that the output variable be numeric.

We can easily convert the string values to integer values using the LabelEncoder. The three class values (Iris-setosa, Iris-versicolor, Iris-virginica) are mapped to the integer values (0, 1, 2).

We save the label encoder as a separate object so that we can transform both the training and later the test and validation datasets using the same encoding scheme.

Below is a complete example demonstrating how to load the iris dataset. Notice that Pandas is used to load the data in order to handle the string class values.
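
A minimal sketch of such an example is given below. It assumes the raw UCI data has been saved locally as iris.csv (the filename is an assumption) with no header row, and uses XGBClassifier from the xgboost package together with scikit-learn's LabelEncoder and train_test_split.

# sketch: multiclass classification of the iris dataset with XGBoost
from pandas import read_csv
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.preprocessing import LabelEncoder

# load data with Pandas so the string class values are handled (assumed filename)
data = read_csv('iris.csv', header=None)
dataset = data.values
# split data into input (X) and output (Y) columns
X = dataset[:, 0:4]
X = X.astype('float32')  # ensure numeric dtype; the array loads as object because of the string class column
Y = dataset[:, 4]
# encode the string class values as integers, keeping the encoder for later reuse
label_encoder = LabelEncoder()
label_encoder = label_encoder.fit(Y)
label_encoded_y = label_encoder.transform(Y)
# split data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, label_encoded_y, test_size=0.33, random_state=7)
# fit model on training data
model = XGBClassifier()
model.fit(X_train, y_train)
print(model)
# make predictions for test data and evaluate accuracy
predictions = model.predict(X_test)
accuracy = accuracy_score(y_test, predictions)
print("Accuracy: %.2f%%" % (accuracy * 100.0))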

Running the example prints the fitted model configuration and the classification accuracy achieved on the test set.

Notice how the XGBoost model is configured to automatically model the multiclass classification problem using the multi:softprob objective, a variation on the softmax loss function that models class probabilities. This suggests that, internally, the output class is automatically converted into a one hot type encoding.

One Hot Encode Categorical Data

Some datasets only contain categorical data, for example the breast cancer dataset.

This dataset describes the technical details of breast cancer biopsies and the prediction task is to predict whether or not the patient has a recurrence of cancer.

Below is a sample of the raw dataset. You can learn more about this dataset at the UCI Machine Learning Repository and download it in CSV format from mldata.org.

We can see that all 9 input variables are categorical and described in string format. The problem is a binary classification prediction problem and the output class values are also described in string format.

We can reuse the same approach from the previous section and convert the string class values to integer values to model the prediction using the LabelEncoder. For example:
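
For example, a sketch of encoding the output variable, assuming the string class values have been split out into Y:

# encode string class values as integers
label_encoder = LabelEncoder()
label_encoder = label_encoder.fit(Y)
label_encoded_y = label_encoder.transform(Y)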

We can use this same approach on each input feature in X, but this is only a starting point.

XGBoost may assume that the encoded integer values for each input variable have an ordinal relationship. For example, 'left-up' encoded as 0 and 'left-low' encoded as 1 for the breast-quad variable would have a meaningful relationship as integers. In this case, this assumption is untrue.

Instead, we must map these integer values onto new binary variables, one new variable for each categorical value.

For example, the breast-quad variable has the values:
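
According to the dataset description at the UCI repository, these are:

left-up
left-low
right-up
right-low
central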

We can model this as 5 binary variables as follows:
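
For example, with one new binary variable per category value, each category maps to a row like the following:

left-up, left-low, right-up, right-low, central
1, 0, 0, 0, 0
0, 1, 0, 0, 0
0, 0, 1, 0, 0
0, 0, 0, 1, 0
0, 0, 0, 0, 1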

This is called one hot encoding. We can one hot encode all of the categorical input variables using the OneHotEncoder class in scikit-learn.

We can one hot encode each feature after we have label encoded it. First we must transform the feature array into a 2-dimensional NumPy array where each integer value is a feature vector of length 1.
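
For example, assuming feature holds the label encoded values for one column of X:

# reshape the 1D label encoded column into a 2D array of shape (n_rows, 1)
feature = feature.reshape(X.shape[0], 1)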

We can then create the OneHotEncoder and encode the feature array.
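
Continuing the sketch above (note that the argument controlling dense output is sparse in older scikit-learn versions and sparse_output in 1.2 and later):

onehot_encoder = OneHotEncoder(sparse=False)  # use sparse_output=False on scikit-learn 1.2+
feature = onehot_encoder.fit_transform(feature)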

Finally, we can build up the input dataset by concatenating the one hot encoded features, one by one, adding them on as new columns (axis=1). We end up with an input vector composed of 43 binary input variables.
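
For example, a sketch of accumulating the encoded columns inside a loop over the features, where encoded_x starts as None before the loop:

if encoded_x is None:
    encoded_x = feature
else:
    encoded_x = numpy.concatenate((encoded_x, feature), axis=1)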

Ideally, we might experiment with not one hot encoding some of the input attributes, as we could instead encode them with an explicit ordinal relationship, for example the first column, age, with values like '40-49' and '50-59'. This is left as an exercise, if you are interested in extending this example.

Below is the complete example with label and one hot encoded input variables and label encoded output variable.
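
A minimal sketch of such an example is given below, assuming the raw data has been saved locally as breast-cancer.csv (the filename is an assumption) with the nine input attributes followed by the class value in the last column.

# sketch: breast cancer dataset with label encoded and one hot encoded inputs
import numpy
from pandas import read_csv
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.preprocessing import LabelEncoder, OneHotEncoder

# load data (assumed filename, no header row)
data = read_csv('breast-cancer.csv', header=None)
dataset = data.values
# split data into input (X) and output (Y) columns, treating all input values as strings
X = dataset[:, 0:9].astype(str)
Y = dataset[:, 9]
# label encode then one hot encode each input feature, column by column
encoded_x = None
for i in range(X.shape[1]):
    label_encoder = LabelEncoder()
    feature = label_encoder.fit_transform(X[:, i])
    feature = feature.reshape(X.shape[0], 1)
    onehot_encoder = OneHotEncoder(sparse=False)  # use sparse_output=False on scikit-learn 1.2+
    feature = onehot_encoder.fit_transform(feature)
    if encoded_x is None:
        encoded_x = feature
    else:
        encoded_x = numpy.concatenate((encoded_x, feature), axis=1)
print("X shape: ", encoded_x.shape)
# encode string class values as integers
label_encoder = LabelEncoder()
label_encoded_y = label_encoder.fit_transform(Y)
# split data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(encoded_x, label_encoded_y, test_size=0.33, random_state=7)
# fit model on training data
model = XGBClassifier()
model.fit(X_train, y_train)
print(model)
# make predictions for test data and evaluate accuracy
predictions = model.predict(X_test)
accuracy = accuracy_score(y_test, predictions)
print("Accuracy: %.2f%%" % (accuracy * 100.0))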

Running this example prints the fitted model and the classification accuracy on the test set.

Again we can see that the XGBoost framework chose the ‘binary:logistic‘ objective automatically, the right objective for this binary classification problem.

Support for Missing Data

XGBoost can automatically learn how to best handle missing data.

In fact, XGBoost was designed to work with sparse data, like the one hot encoded data from the previous section, and missing data is handled the same way that sparse or zero values are handled, by minimizing the loss function.

For more information on the technical details for how missing values are handled in XGBoost, see Section 3.4 “Sparsity-aware Split Finding” in the paper XGBoost: A Scalable Tree Boosting System.

The Horse Colic dataset is a good example to demonstrate this capability as it contains a large percentage of missing data, approximately 30%.

You can learn more about the Horse Colic dataset and download the raw data file from the UCI Machine Learning repository.

The values are separated by whitespace and we can easily load it using the Pandas function read_csv.
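
For example (the filename horse-colic.csv is an assumption about where you saved the raw data file):

# load the whitespace-separated data file (assumed filename)
dataframe = read_csv('horse-colic.csv', delim_whitespace=True, header=None)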

Once loaded, we can see that the missing data is marked with a question mark character (‘?’). We can change these missing values to the sparse value expected by XGBoost which is the value zero (0).
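
For example, assuming the input values have been split out into a NumPy array X:

# set missing values (marked with '?') to 0
X[X == '?'] = 0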

Because the missing data was marked as strings, those columns with missing data were all loaded as string data types. We can now convert the entire set of input data to numerical values.
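
For example:

# convert all input values from strings to numeric
X = X.astype('float32')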

Finally, this is a binary classification problem although the class values are marked with the integers 1 and 2. We model binary classification problems in XGBoost as logistic 0 and 1 values. We can easily convert the Y dataset to 0 and 1 integers using the LabelEncoder, as we did in the iris flowers example.

The full code listing is provided below for completeness.
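
A minimal sketch is given below. It again assumes the raw data file is saved locally as horse-colic.csv and that the class value being predicted is in the last column (index 27); both are assumptions about how you saved and arranged the data.

# sketch: horse colic dataset with missing values marked as 0
from pandas import read_csv
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.preprocessing import LabelEncoder

# load the whitespace-separated data file (assumed filename)
dataframe = read_csv('horse-colic.csv', delim_whitespace=True, header=None)
dataset = dataframe.values
# split data into input (X) and output (Y) columns (output assumed to be the last column)
X = dataset[:, 0:27]
Y = dataset[:, 27]
# set missing values (marked with '?') to 0
X[X == '?'] = 0
# convert all input values from strings to numeric
X = X.astype('float32')
# encode the 1/2 class values as 0/1 integers
label_encoder = LabelEncoder()
label_encoded_y = label_encoder.fit_transform(Y)
# split data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, label_encoded_y, test_size=0.33, random_state=7)
# fit model on training data
model = XGBClassifier()
model.fit(X_train, y_train)
print(model)
# make predictions for test data and evaluate accuracy
predictions = model.predict(X_test)
accuracy = accuracy_score(y_test, predictions)
print("Accuracy: %.2f%%" % (accuracy * 100.0))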

Running this example prints the fitted model and the accuracy achieved on the test set.

We can tease out the effect of XGBoost’s automatic handling of missing values, by marking the missing values with a non-zero value, such as 1.
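
For example, changing the single marking line from the listing above:

# mark missing values with 1 instead of 0
X[X == '?'] = 1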

Re-running the example demonstrates a drop in accuracy for the model.

We can also impute the missing data with a specific value.

It is common to use the mean or the median of the column. We can easily impute the missing data using the scikit-learn Imputer class (renamed SimpleImputer in scikit-learn 0.20 and later).

Below is the full example with missing data imputed with the mean value from each column.
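
A minimal sketch is given below. It uses SimpleImputer, the replacement for the Imputer class in scikit-learn 0.20 and later; the filename and the choice of output column are the same assumptions as above. Note that the model is trained on the imputed data (imputed_x), not the original X.

# sketch: horse colic dataset with missing values imputed with the column mean
import numpy
from pandas import read_csv
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.preprocessing import LabelEncoder
from sklearn.impute import SimpleImputer  # use sklearn.preprocessing.Imputer on scikit-learn < 0.20

# load the whitespace-separated data file (assumed filename)
dataframe = read_csv('horse-colic.csv', delim_whitespace=True, header=None)
dataset = dataframe.values
# split data into input (X) and output (Y) columns (output assumed to be the last column)
X = dataset[:, 0:27]
Y = dataset[:, 27]
# set missing values (marked with '?') to NaN so the imputer can find them
X[X == '?'] = numpy.nan
X = X.astype('float32')
# impute each missing value with the mean of its column
imputer = SimpleImputer(missing_values=numpy.nan, strategy='mean')
imputed_x = imputer.fit_transform(X)
# encode the 1/2 class values as 0/1 integers
label_encoder = LabelEncoder()
label_encoded_y = label_encoder.fit_transform(Y)
# split the imputed data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(imputed_x, label_encoded_y, test_size=0.33, random_state=7)
# fit model on training data
model = XGBClassifier()
model.fit(X_train, y_train)
print(model)
# make predictions for test data and evaluate accuracy
predictions = model.predict(X_test)
accuracy = accuracy_score(y_test, predictions)
print("Accuracy: %.2f%%" % (accuracy * 100.0))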

Running this example, we see results equivalent to fixing the missing values to one (1). This suggests that, at least in this case, we are better off marking the missing values with a distinct value of zero (0) rather than a valid value (1) or an imputed value.

It is a good lesson to try both approaches (automatic handling and imputing) on your data when you have missing values.

Summary

In this post you discovered how you can prepare your machine learning data for gradient boosting with XGBoost in Python.

Specifically, you learned:

  • How to prepare string class values for classification using label encoding.
  • How to prepare categorical input variables using a one hot encoding to model them as binary variables.
  • How XGBoost automatically handles missing data and how you can mark and impute missing values.

Do you have any questions about how to prepare your data for XGBoost or about this post? Ask your questions in the comments and I will do my best to answer.


12 Responses to Data Preparation for Gradient Boosting with XGBoost in Python

  1. Ralph_adu August 28, 2016 at 1:24 am #

    Hi Jason, the train data for the last example should be imputed_x, but you use the original X which has missing data. I tried with imputed_x and got an accuracy of 79.8%.

  2. Qichang September 19, 2016 at 9:29 am #

    Hi Jason,

    Thanks for the tutorial with such useful information! I have one question regarding the label encoding and the one hot encoding you applied on the breast cancer dataset.

    You perform label encoding and one hot encoding for the whole dataset and then split into train and test set. This way it can be ensured that all the data are transformed with the same encoding configuration.

    However, if we have new unseen data with the raw dataset type, how can we ensure that label encoding and one hot encoding is still transforming the unseen data in the same way? Do we need to save the encoders for the sake of processing unseen data?

    Thanks in advance!

    • Jason Brownlee September 20, 2016 at 8:28 am #

      Great question Qichang.

      I would prepare the encodings on the training data, store the mappings (or pickle the objects), then reuse the encodings on the test data.

      It means we must be confident that the training data is representative of the data we may need to predict in the future.

  3. Qichang September 20, 2016 at 7:47 pm #

    Thanks Jason for the prompt reply.

    Besides these kinds of data transformation, do we need to consider scaling or normalisation of the input variables before passing them to XGBoost? We know that it generally yields better results for SVMs, especially with kernel functions.

    • Jason Brownlee September 21, 2016 at 8:28 am #

      Generally no scaling. You may see some benefit by spreading out a univariate distribution to highlight specific features (e.g. with a square, log, square root, etc.)

  4. JChen October 12, 2016 at 5:26 am #

    Hi Jason, your post is very helpful. Thanks a lot!

    I had a question around how to treat “default” values of continuous predictors for XGBoost. For example, let’s say attributer X may take continuous values as such (say in range of 1 -100). But certain records may have some default value (say 9999) which denotes certain segment of customers for whom that predictor X cannot be calculated or is unavailable. Can we directly use predictor X as input variable for an XGBoost model? Or, should we do some data treatment for X? If so, what would that be?

    TIA

    • Jason Brownlee October 12, 2016 at 9:14 am #

      Great question JChen.

      I would try modeling the data as-is to start with. XGBoost will figure it out.

      You could then try some feature engineering (maybe add a new flag variable for such cases) and see if you can further lift performance.

      • JChen October 14, 2016 at 2:37 am #

        Thanks for your reply! This is helpful

  5. Sargam Modak February 9, 2017 at 11:54 pm #

    For your missing data part you replaced '?' with 0. But you have not mentioned while defining the XGBClassifier model that in your dataset 0 should be treated as the missing value. And by default the 'missing' parameter value is None, which is equivalent to treating NaN as the missing value. So I don't think your model is handling missing values.
