# Cost-Sensitive Logistic Regression for Imbalanced Classification


Logistic regression does not support imbalanced classification directly.

Instead, the training algorithm used to fit the logistic regression model must be modified to take the skewed distribution into account. This can be achieved by specifying a class weighting configuration that is used to influence the amount that logistic regression coefficients are updated during training.

The weighting can penalize the model less for errors made on examples from the majority class and penalize the model more for errors made on examples from the minority class. The result is a version of logistic regression that performs better on imbalanced classification tasks, generally referred to as cost-sensitive or weighted logistic regression.

In this tutorial, you will discover cost-sensitive logistic regression for imbalanced classification.

After completing this tutorial, you will know:

• How standard logistic regression does not support imbalanced classification.
• How logistic regression can be modified to weight model error by class weight when fitting the coefficients.
• How to configure class weight for logistic regression and how to grid search different class weight configurations.

Discover SMOTE, one-class classification, cost-sensitive learning, threshold moving, and much more in my new book, with 30 step-by-step tutorials and full Python source code.

Let’s get started.

• Update Feb/2020: Fixed typo in weight calculation.

Cost-Sensitive Logistic Regression for Imbalanced Classification
Photo by Naval S, some rights reserved.

## Tutorial Overview

This tutorial is divided into four parts; they are:

1. Imbalanced Classification Dataset
2. Logistic Regression for Imbalanced Classification
3. Weighted Logistic Regression With Scikit-Learn
4. Grid Search Weighted Logistic Regression

## Imbalanced Classification Dataset

Before we dive into the modification of logistic regression for imbalanced classification, let’s first define an imbalanced classification dataset.

We can use the make_classification() function to define a synthetic imbalanced two-class classification dataset. We will generate 10,000 examples with an approximate 1:100 minority to majority class ratio.

Once generated, we can summarize the class distribution to confirm that the dataset was created as we expected.

Finally, we can create a scatter plot of the examples and color them by class label to help understand the challenge of classifying examples from this dataset.

Tying this together, the complete example of generating the synthetic dataset and plotting the examples is listed below.
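A sketch of that complete example is below. The exact `make_classification()` arguments (e.g. `weights=[0.99]`, `random_state=4`) are assumptions chosen to produce the approximate 1:100 ratio described above.

```python
# generate and plot a synthetic imbalanced classification dataset
from collections import Counter
from numpy import where
from sklearn.datasets import make_classification
from matplotlib import pyplot

# define dataset with an approximate 1:100 minority to majority class ratio
X, y = make_classification(n_samples=10000, n_features=2, n_redundant=0,
    n_clusters_per_class=1, weights=[0.99], flip_y=0, random_state=4)
# summarize the class distribution
counter = Counter(y)
print(counter)
# scatter plot of examples, colored by class label
for label, _ in counter.items():
    row_ix = where(y == label)[0]
    pyplot.scatter(X[row_ix, 0], X[row_ix, 1], label=str(label))
pyplot.legend()
pyplot.show()
```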

Running the example first creates the dataset and summarizes the class distribution.

We can see that the dataset has an approximate 1:100 class distribution with a little less than 10,000 examples in the majority class and 100 in the minority class.

Next, a scatter plot of the dataset is created showing the large mass of examples for the majority class (blue) and a small number of examples for the minority class (orange), with some modest class overlap.

Scatter Plot of Binary Classification Dataset With 1 to 100 Class Imbalance

Next, we can fit a standard logistic regression model on the dataset.

We will use repeated cross-validation to evaluate the model, with three repeats of 10-fold cross-validation. Model performance will be reported using the mean ROC area under curve (ROC AUC) averaged over all repeats and folds.

Tying this together, the complete example of evaluating standard logistic regression on the imbalanced classification problem is listed below.
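A sketch of this baseline evaluation follows; the dataset generator arguments are the same assumptions used above.

```python
# evaluate a standard (unweighted) logistic regression with repeated stratified 10-fold CV
from numpy import mean
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.linear_model import LogisticRegression

# generate the synthetic imbalanced dataset
X, y = make_classification(n_samples=10000, n_features=2, n_redundant=0,
    n_clusters_per_class=1, weights=[0.99], flip_y=0, random_state=4)
# define an unweighted logistic regression model
model = LogisticRegression(solver='lbfgs')
# evaluate with three repeats of stratified 10-fold cross-validation
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
scores = cross_val_score(model, X, y, scoring='roc_auc', cv=cv, n_jobs=-1)
print('Mean ROC AUC: %.3f' % mean(scores))
```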

Running the example evaluates the standard logistic regression model on the imbalanced dataset and reports the mean ROC AUC.

We can see that the model has skill, achieving a ROC AUC above 0.5, in this case achieving a mean score of 0.985.

This provides a baseline for comparison for any modifications performed to the standard logistic regression algorithm.

### Want to Get Started With Imbalanced Classification?

Take my free 7-day email crash course now (with sample code).

Click to sign-up and also get a free PDF Ebook version of the course.

## Logistic Regression for Imbalanced Classification

Logistic regression is an effective model for binary classification tasks, although by default, it is not effective at imbalanced classification.

Logistic regression can be modified to be better suited to imbalanced classification.

The coefficients of the logistic regression algorithm are fit using an optimization algorithm that minimizes the negative log likelihood (loss) for the model on the training dataset.

• minimize sum i to n -(log(yhat_i) * y_i + log(1 – yhat_i) * (1 – y_i))

This involves the repeated use of the model to make predictions followed by an adaptation of the coefficients in a direction that reduces the loss of the model.

The calculation of the loss for a given set of coefficients can be modified to take the class balance into account.

By default, the errors for each class may be considered to have the same weighting, say 1.0. These weightings can be adjusted based on the importance of each class.

• minimize sum i to n -(w1 * log(yhat_i) * y_i + w0 * log(1 – yhat_i) * (1 – y_i))

The weighting is applied to the loss so that smaller weight values result in a smaller error value, and in turn, less update to the model coefficients. A larger weight value results in a larger error calculation, and in turn, more update to the model coefficients.

• Small Weight: Less importance, less update to the model coefficients.
• Large Weight: More importance, more update to the model coefficients.

As such, the modified version of logistic regression is referred to as Weighted Logistic Regression, Class-Weighted Logistic Regression or Cost-Sensitive Logistic Regression.

The weightings are sometimes referred to as importance weightings.

Although straightforward to implement, the challenge of weighted logistic regression is the choice of the weighting to use for each class.

## Weighted Logistic Regression with Scikit-Learn

The scikit-learn Python machine learning library provides an implementation of logistic regression that supports class weighting.

The LogisticRegression class provides the class_weight argument that can be specified as a model hyperparameter. The class_weight is a dictionary that defines each class label (e.g. 0 and 1) and the weighting to apply in the calculation of the negative log likelihood when fitting the model.

For example, a 1 to 1 weighting for classes 0 and 1 can be defined as follows:
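As a sketch, the dictionary keys are the class labels and the values are the weightings:

```python
from sklearn.linear_model import LogisticRegression

# equal (1:1) weighting for class 0 and class 1
weights = {0: 1.0, 1: 1.0}
model = LogisticRegression(solver='lbfgs', class_weight=weights)
```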

The class weighting can be defined in multiple ways; for example:

• Domain expertise, determined by talking to subject matter experts.
• Tuning, determined by a hyperparameter search such as a grid search.
• Heuristic, specified using a general best practice.

A best practice for using the class weighting is to use the inverse of the class distribution present in the training dataset.

For example, the class distribution of the training dataset is a 1:100 ratio for the minority class to the majority class. The inverse of this ratio could be used, with 1 for the majority class and 100 for the minority class; for example:
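A sketch of this inverse-ratio weighting:

```python
from sklearn.linear_model import LogisticRegression

# inverse of the 1:100 class distribution: majority class 0 gets 1, minority class 1 gets 100
weights = {0: 1.0, 1: 100.0}
model = LogisticRegression(solver='lbfgs', class_weight=weights)
```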

We might also define the same ratio using fractions and achieve the same result; for example:
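For instance, the same 1:100 ratio expressed as fractions:

```python
from sklearn.linear_model import LogisticRegression

# the same 1:100 ratio expressed as fractions
weights = {0: 0.01, 1: 1.0}
model = LogisticRegression(solver='lbfgs', class_weight=weights)
```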

We can evaluate the logistic regression algorithm with a class weighting using the same evaluation procedure defined in the previous section.

We would expect the class-weighted version of logistic regression to perform better than the standard version without any class weighting.

The complete example is listed below.
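A sketch of the complete weighted evaluation follows; the dataset generator arguments are the same assumptions used earlier.

```python
# evaluate class-weighted logistic regression with repeated stratified 10-fold CV
from numpy import mean
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.linear_model import LogisticRegression

# generate the synthetic imbalanced dataset
X, y = make_classification(n_samples=10000, n_features=2, n_redundant=0,
    n_clusters_per_class=1, weights=[0.99], flip_y=0, random_state=4)
# weight errors on the minority class 100 times more than the majority class
weights = {0: 0.01, 1: 1.0}
model = LogisticRegression(solver='lbfgs', class_weight=weights)
# evaluate with three repeats of stratified 10-fold cross-validation
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
scores = cross_val_score(model, X, y, scoring='roc_auc', cv=cv, n_jobs=-1)
print('Mean ROC AUC: %.3f' % mean(scores))
```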

Running the example prepares the synthetic imbalanced classification dataset, then evaluates the class-weighted version of logistic regression using repeated cross-validation.

The mean ROC AUC score is reported, in this case showing a better score than the unweighted version of logistic regression, 0.989 as compared to 0.985.

The scikit-learn library provides an implementation of the best practice heuristic for the class weighting.

It is implemented via the compute_class_weight() function and is calculated as:

• n_samples / (n_classes * n_samples_with_class)

We can test this calculation manually on our dataset. For example, we have 10,000 examples in the dataset, 9900 in class 0, and 100 in class 1.

The weighting for class 0 is calculated as:

• weighting = n_samples / (n_classes * n_samples_with_class)
• weighting = 10000 / (2 * 9900)
• weighting = 10000 / 19800
• weighting = 0.505

The weighting for class 1 is calculated as:

• weighting = n_samples / (n_classes * n_samples_with_class)
• weighting = 10000 / (2 * 100)
• weighting = 10000 / 200
• weighting = 50

We can confirm these calculations by calling the compute_class_weight() function and specifying the class_weight as “balanced.” For example:
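A sketch of that confirmation on the same assumed dataset:

```python
# calculate the 'balanced' heuristic class weighting for the dataset
from numpy import unique
from sklearn.datasets import make_classification
from sklearn.utils.class_weight import compute_class_weight

# generate the synthetic imbalanced dataset
X, y = make_classification(n_samples=10000, n_features=2, n_redundant=0,
    n_clusters_per_class=1, weights=[0.99], flip_y=0, random_state=4)
# n_samples / (n_classes * n_samples_with_class) for each class
weighting = compute_class_weight(class_weight='balanced', classes=unique(y), y=y)
print(weighting)
```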

Running the example, we can see that we can achieve a weighting of about 0.5 for class 0 and a weighting of 50 for class 1.

These values match our manual calculation.

The values also match our heuristic calculation above for inverting the ratio of the class distribution in the training dataset; for example:

• 0.5:50 == 1:100

We can use the default class balance directly with the LogisticRegression class by setting the class_weight argument to ‘balanced.’ For example:
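As a sketch:

```python
from sklearn.linear_model import LogisticRegression

# let scikit-learn compute the heuristic weighting from the training data
model = LogisticRegression(solver='lbfgs', class_weight='balanced')
```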

The complete example is listed below.
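A sketch of the complete example using the 'balanced' heuristic follows; the dataset generator arguments are the same assumptions used earlier.

```python
# evaluate 'balanced' class-weighted logistic regression with repeated stratified 10-fold CV
from numpy import mean
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.linear_model import LogisticRegression

# generate the synthetic imbalanced dataset
X, y = make_classification(n_samples=10000, n_features=2, n_redundant=0,
    n_clusters_per_class=1, weights=[0.99], flip_y=0, random_state=4)
# use the built-in heuristic class weighting
model = LogisticRegression(solver='lbfgs', class_weight='balanced')
# evaluate with three repeats of stratified 10-fold cross-validation
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
scores = cross_val_score(model, X, y, scoring='roc_auc', cv=cv, n_jobs=-1)
print('Mean ROC AUC: %.3f' % mean(scores))
```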

Running the example gives the same mean ROC AUC as we achieved by specifying the inverse class ratio manually.

## Grid Search Weighted Logistic Regression

Using a class weighting that is the inverse ratio of the training data is just a heuristic.

It is possible that better performance can be achieved with a different class weighting, and this too will depend on the choice of performance metric used to evaluate the model.

In this section, we will grid search a range of different class weightings for weighted logistic regression and discover which results in the best ROC AUC score.

We will try the following weightings for class 0 and 1:

• {0:100,1:1}
• {0:10,1:1}
• {0:1,1:1}
• {0:1,1:10}
• {0:1,1:100}

These can be defined as grid search parameters for the GridSearchCV class as follows:
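As a sketch, the candidate weightings become the values of the `class_weight` hyperparameter in the search grid:

```python
# candidate class weightings for the grid search
balance = [{0: 100, 1: 1}, {0: 10, 1: 1}, {0: 1, 1: 1}, {0: 1, 1: 10}, {0: 1, 1: 100}]
param_grid = dict(class_weight=balance)
```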

We can perform the grid search on these parameters using repeated cross-validation and estimate model performance using ROC AUC:

Once executed, we can summarize the best configuration as well as all of the results as follows:

Tying this together, the example below grid searches five different class weights for logistic regression on the imbalanced dataset.
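A sketch of the complete grid search follows; as before, the dataset generator arguments are assumptions.

```python
# grid search class weights for logistic regression on the imbalanced dataset
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.linear_model import LogisticRegression

# generate the synthetic imbalanced dataset
X, y = make_classification(n_samples=10000, n_features=2, n_redundant=0,
    n_clusters_per_class=1, weights=[0.99], flip_y=0, random_state=4)
model = LogisticRegression(solver='lbfgs')
# the five candidate weightings for classes 0 and 1
balance = [{0: 100, 1: 1}, {0: 10, 1: 1}, {0: 1, 1: 1}, {0: 1, 1: 10}, {0: 1, 1: 100}]
param_grid = dict(class_weight=balance)
# grid search with repeated stratified 10-fold cross-validation and ROC AUC
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=-1, cv=cv, scoring='roc_auc')
grid_result = grid.fit(X, y)
# report the best configuration and all results
print('Best: %f using %s' % (grid_result.best_score_, grid_result.best_params_))
means = grid_result.cv_results_['mean_test_score']
params = grid_result.cv_results_['params']
for mean_score, param in zip(means, params):
    print('%f with: %r' % (mean_score, param))
```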

We might expect that the heuristic class weighting is the best performing configuration.

Running the example evaluates each class weighting using repeated k-fold cross-validation and reports the best configuration and the associated mean ROC AUC score.

In this case, we can see that the 1:100 majority to minority class weighting achieved the best mean ROC AUC score. This matches the configuration of the general heuristic.

It might be interesting to explore even more severe class weightings to see their effect on the mean ROC AUC score.


## Summary

In this tutorial, you discovered cost-sensitive logistic regression for imbalanced classification.

Specifically, you learned:

• How standard logistic regression does not support imbalanced classification.
• How logistic regression can be modified to weight model error by class weight when fitting the coefficients.
• How to configure class weight for logistic regression and how to grid search different class weight configurations.

Do you have any questions?

## Get a Handle on Imbalanced Classification!

#### Develop Imbalanced Learning Models in Minutes

...with just a few lines of Python code

Discover how in my new Ebook:
Imbalanced Classification with Python

It provides self-study tutorials and end-to-end projects on:
Performance Metrics, Undersampling Methods, SMOTE, Threshold Moving, Probability Calibration, Cost-Sensitive Algorithms
and much more...

### 20 Responses to Cost-Sensitive Logistic Regression for Imbalanced Classification

1. Elie January 27, 2020 at 7:11 am #

Jason, almost done from reading the book.

Really great piece of work!

One minor recommendation: I’d like to see more explanatory infographics of the algos.

• Jason Brownlee January 27, 2020 at 7:40 am #

Great suggestion, thanks.

2. Rodney Silva January 27, 2020 at 10:18 am #

I would like to see the improvement on the precision and recall for the minor class.

• Jason Brownlee January 27, 2020 at 2:33 pm #

Thanks, great suggestion.

3. marco January 29, 2020 at 2:42 am #

Hello Jason,
a question about SVC and linearSVC.
What is the difference?
I’m trying a sentiment analysis with 1000 observations (750 training + 250 test).
Is it better to use SVC or linearSVC for the analysis?
What is the meaning of C hyperparameter in linearSVC (in simple words)?
I’ve found the C parameter is common also in other algorithms. The meaning is the same?
Thanks

• Jason Brownlee January 29, 2020 at 6:45 am #

SVC can do a linear SVM via a linear kernel. LinearSVC uses only the linear kernel and is optimized for this use case, so it is faster/more efficient.

4. Diane Halliwell January 30, 2020 at 6:58 am #

Hi

Does you book cover example-dependent cost-sensitive classification?

• Jason Brownlee January 30, 2020 at 6:59 am #

No, just classes-based costs.

5. Temitope Mamukuyomi January 31, 2020 at 8:22 am #

Thanks for the tutorial

• Jason Brownlee January 31, 2020 at 2:04 pm #

You’re very welcome, I hope it helps you with your project!

6. macilane manjate January 31, 2020 at 3:43 pm #

Dear Jason Browniee,
Good morning.
When will you have the book in R?
Kind regards,
Macilane

• Jason Brownlee February 1, 2020 at 5:46 am #

No plans at this stage. My focus is Python given that it is the most popular language for machine learning at the moment.

7. Sergio Garcia Garcia January 31, 2020 at 7:53 pm #

“For example, we have 10,000 examples in the dataset, 9990 in class 0, and 100 in class 1.”

Wouldn’t be 9990-10 or 9900-100?

Good article, clear explanation

• Jason Brownlee February 1, 2020 at 5:53 am #

Thanks, fixed!

8. Carlos February 7, 2020 at 3:20 am #

Could you please provide the code for “Counter”
NameError: name ‘Counter’ is not defined
Thank you

• Jason Brownlee February 7, 2020 at 8:24 am #

You must copy the full code example that includes the import statement.

9. Atefeh April 12, 2020 at 4:19 am #

Thank you for the great work. It was very helpful. I was looking to see how we can approach a specific problem both with classification and regression models? Let’s say I want to study the air temperature-soil temperature relation both with classification and regression models to get both the best fit and the decision boundary. Is there any article that can illustrate how steps will be different for these two approaches?

• Jason Brownlee April 12, 2020 at 6:25 am #

You’re welcome.

I don’t have an example, sorry.

10. John Sammut May 9, 2020 at 7:03 am #

Hello Jason,

Thanks for another great article.

I am assuming that the alternative option of combining over-sampling and under-sampling the dataset (instead of using the class weight) also applies to Logistic Regression.

Am I correct?

If I’m correct, would you recommend taking both approaches and compare results when faced with an imbalanced dataset?

Thank you.

• Jason Brownlee May 9, 2020 at 1:46 pm #

Yes, you can resample the data instead of using a cost-sensitive classifier.

No, combining both approaches would not be helpful as the classes would be balanced after resampling. Nevertheless, experiment!!! Perhaps you will discover something unintuitive?