Multi-Class Imbalanced Classification

Imbalanced classification refers to those prediction tasks where the distribution of examples across class labels is not equal.

Most imbalanced classification examples focus on binary classification tasks, yet many of the tools and techniques for imbalanced classification also directly support multi-class classification problems.

In this tutorial, you will discover how to use the tools of imbalanced classification with a multi-class dataset.

After completing this tutorial, you will know:

  • About the glass identification standard imbalanced multi-class prediction problem.
  • How to use SMOTE oversampling for imbalanced multi-class classification.
  • How to use cost-sensitive learning for imbalanced multi-class classification.

Kick-start your project with my new book Imbalanced Classification with Python, including step-by-step tutorials and the Python source code files for all examples.

Let’s get started.

  • Updated Jan/2021: Updated links for API documentation.
Multi-Class Imbalanced Classification
Photo by istolethetv, some rights reserved.

Tutorial Overview

This tutorial is divided into three parts; they are:

  1. Glass Multi-Class Classification Dataset
  2. SMOTE Oversampling for Multi-Class Classification
  3. Cost-Sensitive Learning for Multi-Class Classification

Glass Multi-Class Classification Dataset

In this tutorial, we will focus on the standard imbalanced multi-class classification problem referred to as “Glass Identification” or simply “glass.”

The dataset describes the chemical properties of glass and involves classifying samples of glass as one of six classes using those chemical properties. The dataset was credited to Vina Spiehler in 1987.

Ignoring the sample identification number, there are nine input variables that summarize the properties of the glass dataset; they are:

  • RI: Refractive Index
  • Na: Sodium
  • Mg: Magnesium
  • Al: Aluminum
  • Si: Silicon
  • K: Potassium
  • Ca: Calcium
  • Ba: Barium
  • Fe: Iron

The chemical compositions are measured as the weight percent in the corresponding oxide.

There are seven types of glass listed; they are:

  • Class 1: building windows (float processed)
  • Class 2: building windows (non-float processed)
  • Class 3: vehicle windows (float processed)
  • Class 4: vehicle windows (non-float processed)
  • Class 5: containers
  • Class 6: tableware
  • Class 7: headlamps

Float glass refers to the process used to make the glass.

There are 214 observations in the dataset and the number of observations in each class is imbalanced. Note that there are no examples for class 4 (non-float processed vehicle windows) in the dataset.

  • Class 1: 70 examples
  • Class 2: 76 examples
  • Class 3: 17 examples
  • Class 4: 0 examples
  • Class 5: 13 examples
  • Class 6: 9 examples
  • Class 7: 29 examples

Although there are minority classes, all classes are equally important in this prediction problem.

The dataset can be divided into window glass (classes 1-4) and non-window glass (classes 5-7). There are 163 examples of window glass and 51 examples of non-window glass.

  • Window Glass: 163 examples
  • Non-Window Glass: 51 examples

Another division of the observations, considering window glass only, is between float-processed and non-float-processed glass. This division is more balanced.

  • Float Glass: 87 examples
  • Non-Float Glass: 76 examples

You can learn more about the dataset here:

No need to download the dataset; we will download it automatically as part of the worked examples.

Below is a sample of the first few rows of the data.
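As a rough sketch, we can peek at the first few rows with pandas. Note that the dataset URL below (a CSV copy of the glass dataset in the jbrownlee/Datasets GitHub repository) is an assumption; point it at your own copy of the file if needed.

# load the glass dataset and print the first few rows
from pandas import read_csv
# location of a CSV copy of the dataset (assumed; adjust if needed)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/glass.csv'
df = read_csv(url, header=None)
print(df.head())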

We can see that all inputs are numeric and the target variable in the final column is the integer encoded class label.

You can learn more about how to work through this dataset as part of a project in the tutorial:

Now that we are familiar with the glass multi-class classification dataset, let’s explore how we can use standard imbalanced classification tools with it.

Want to Get Started With Imbalanced Classification?

Take my free 7-day email crash course now (with sample code).

Click to sign-up and also get a free PDF Ebook version of the course.

SMOTE Oversampling for Multi-Class Classification

Oversampling refers to copying or synthesizing new examples of the minority classes so that the number of examples in those classes better resembles or matches the number of examples in the majority classes.

Perhaps the most widely used approach to synthesizing new examples is called the Synthetic Minority Oversampling TEchnique, or SMOTE for short. This technique was described by Nitesh Chawla, et al. in their 2002 paper named for the technique titled “SMOTE: Synthetic Minority Over-sampling Technique.”

You can learn more about SMOTE in the tutorial:

The imbalanced-learn library provides an implementation of SMOTE that is compatible with the popular scikit-learn library.

First, the library must be installed. We can install it using pip as follows:

sudo pip install imbalanced-learn

We can confirm that the installation was successful by printing the version of the installed library:
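For example, a minimal check script:

# check the installed version of the imbalanced-learn library
import imblearn
print(imblearn.__version__)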

Running the example will print the version number of the installed library; for example:

Before we apply SMOTE, let’s first load the dataset and confirm the number of examples in each class.
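A minimal sketch of such an example is below. The dataset URL is the same assumed location as above, and the target is label encoded so that the six classes are numbered 0 to 5.

# load the glass dataset and summarize the class distribution
from collections import Counter
from pandas import read_csv
from matplotlib import pyplot
from sklearn.preprocessing import LabelEncoder
# location of a CSV copy of the dataset (assumed; adjust if needed)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/glass.csv'
# load the csv file as a data frame
df = read_csv(url, header=None)
data = df.values
# split into input and output elements
X, y = data[:, :-1], data[:, -1]
# label encode the target variable so the classes are 0 to 5
y = LabelEncoder().fit_transform(y)
# summarize the class distribution
counter = Counter(y)
for k, v in sorted(counter.items()):
    print('Class=%d, n=%d (%.3f%%)' % (k, v, v / len(y) * 100))
# plot the class distribution as a bar chart
pyplot.bar(list(counter.keys()), list(counter.values()))
pyplot.show()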

Running the example first downloads the dataset and splits it into input and output elements.

The number of rows in each class is then reported, confirming that some classes, such as 0 and 1, have many more examples (more than 70) than other classes, such as 3 and 4 (fewer than 15).

A bar chart is created providing a visualization of the class breakdown of the dataset.

This gives a clearer idea that classes 0 and 1 have many more examples than classes 2, 3, 4 and 5.

Histogram of Examples in Each Class in the Glass Multi-Class Classification Dataset

Next, we can apply SMOTE to oversample the dataset.

By default, SMOTE will oversample all classes to have the same number of examples as the class with the most examples.

In this case, class 1 has the most examples with 76; therefore, SMOTE will oversample all classes to have 76 examples.

The complete example of oversampling the glass dataset with SMOTE is listed below.
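A sketch of what this example might look like is below, again assuming the dataset URL used above and leaving the SMOTE defaults unchanged.

# example of oversampling the glass dataset with default SMOTE
from collections import Counter
from pandas import read_csv
from matplotlib import pyplot
from sklearn.preprocessing import LabelEncoder
from imblearn.over_sampling import SMOTE
# location of a CSV copy of the dataset (assumed; adjust if needed)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/glass.csv'
# load the csv file as a data frame
df = read_csv(url, header=None)
data = df.values
# split into input and output elements
X, y = data[:, :-1], data[:, -1]
# label encode the target variable
y = LabelEncoder().fit_transform(y)
# transform the dataset; by default all classes are oversampled to the majority size
X, y = SMOTE().fit_resample(X, y)
# summarize the new class distribution
counter = Counter(y)
for k, v in sorted(counter.items()):
    print('Class=%d, n=%d (%.3f%%)' % (k, v, v / len(y) * 100))
# plot the new class distribution as a bar chart
pyplot.bar(list(counter.keys()), list(counter.values()))
pyplot.show()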

Running the example first loads the dataset and applies SMOTE to it.

The distribution of examples in each class is then reported, confirming that each class now has 76 examples, as we expected.

A bar chart of the class distribution is also created, providing a strong visual indication that all classes now have the same number of examples.

Histogram of Examples in Each Class in the Glass Multi-Class Classification Dataset After Default SMOTE Oversampling

Instead of using the default SMOTE strategy of oversampling all classes to the number of examples in the majority class, we could specify the desired number of examples in each class.

For example, we could oversample to 100 examples in classes 0 and 1 and 200 examples in the remaining classes. This can be achieved by creating a dictionary that maps class labels to the desired number of examples in each class, then specifying it via the “sampling_strategy” argument to the SMOTE class.

Tying this together, the complete example of using a custom oversampling strategy for SMOTE is listed below.
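A sketch of such an example is below; the dictionary of target counts follows the 100/200 scheme described above, and the dataset URL remains an assumption.

# example of oversampling the glass dataset with a custom SMOTE strategy
from collections import Counter
from pandas import read_csv
from matplotlib import pyplot
from sklearn.preprocessing import LabelEncoder
from imblearn.over_sampling import SMOTE
# location of a CSV copy of the dataset (assumed; adjust if needed)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/glass.csv'
# load the csv file as a data frame
df = read_csv(url, header=None)
data = df.values
# split into input and output elements
X, y = data[:, :-1], data[:, -1]
# label encode the target variable
y = LabelEncoder().fit_transform(y)
# desired number of examples in each class after oversampling
strategy = {0: 100, 1: 100, 2: 200, 3: 200, 4: 200, 5: 200}
# transform the dataset
X, y = SMOTE(sampling_strategy=strategy).fit_resample(X, y)
# summarize the new class distribution
counter = Counter(y)
for k, v in sorted(counter.items()):
    print('Class=%d, n=%d (%.3f%%)' % (k, v, v / len(y) * 100))
# plot the new class distribution as a bar chart
pyplot.bar(list(counter.keys()), list(counter.values()))
pyplot.show()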

Running the example creates the desired sampling and summarizes the effect on the dataset, confirming the intended result.

Note: you may see warnings that can be safely ignored for the purposes of this example, such as:

A bar chart of the class distribution is also created confirming the specified class distribution after data sampling.

Histogram of Examples in Each Class in the Glass Multi-Class Classification Dataset After Custom SMOTE Oversampling

Note: when using data sampling methods like SMOTE, they must only be applied to the training dataset, not the entire dataset. I recommend using a Pipeline to ensure that the SMOTE method is correctly used when evaluating models and making predictions with models.
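As a minimal sketch of that recommendation, SMOTE can be placed in an imbalanced-learn Pipeline so that oversampling is applied to the training folds only. The model and the cross-validation configuration here are assumptions, chosen to match the random forest evaluation later in this tutorial.

# minimal sketch: SMOTE applied correctly inside a pipeline during cross-validation
from numpy import mean, std
from pandas import read_csv
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import cross_val_score, RepeatedStratifiedKFold
from sklearn.ensemble import RandomForestClassifier
from imblearn.pipeline import Pipeline
from imblearn.over_sampling import SMOTE
# load the dataset (assumed URL, as above)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/glass.csv'
data = read_csv(url, header=None).values
X, y = data[:, :-1], LabelEncoder().fit_transform(data[:, -1])
# chain oversampling and the model so SMOTE only ever sees the training folds
pipeline = Pipeline(steps=[('over', SMOTE()), ('model', RandomForestClassifier(n_estimators=1000))])
# evaluate with repeated stratified k-fold cross-validation
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=3, random_state=1)
scores = cross_val_score(pipeline, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
print('Mean Accuracy: %.3f (%.3f)' % (mean(scores), std(scores)))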

You can see an example of the correct usage of SMOTE in a Pipeline in this tutorial:

Cost-Sensitive Learning for Multi-Class Classification

Most machine learning algorithms assume that all classes have an equal number of examples.

This is not the case in multi-class imbalanced classification. Algorithms can be modified to change the way learning is performed to bias towards those classes that have fewer examples in the training dataset. This is generally called cost-sensitive learning.

For more on cost-sensitive learning, see the tutorial:

The RandomForestClassifier class in scikit-learn supports cost-sensitive learning via the “class_weight” argument.

By default, the random forest class assigns equal weight to each class.

We can evaluate the classification accuracy of the default random forest class weighting on the glass imbalanced multi-class classification dataset.

The complete example is listed below.
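A sketch of such an example is below. The text specifies 1,000 trees and repeated stratified k-fold cross-validation; the choice of 5 folds and 3 repeats is an assumption.

# evaluate a default random forest on the imbalanced glass dataset
from numpy import mean, std
from pandas import read_csv
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import cross_val_score, RepeatedStratifiedKFold
from sklearn.ensemble import RandomForestClassifier
# load the dataset (assumed URL, as above)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/glass.csv'
data = read_csv(url, header=None).values
X, y = data[:, :-1], LabelEncoder().fit_transform(data[:, -1])
# define the model with the default (equal) class weighting
model = RandomForestClassifier(n_estimators=1000)
# evaluate with repeated stratified k-fold cross-validation
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=3, random_state=1)
scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
# report the mean and standard deviation of classification accuracy
print('Mean Accuracy: %.3f (%.3f)' % (mean(scores), std(scores)))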

Running the example evaluates the default random forest algorithm with 1,000 trees on the glass dataset using repeated stratified k-fold cross-validation.

The mean and standard deviation classification accuracy are reported at the end of the run.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

In this case, we can see that the default model achieved a classification accuracy of about 79.6 percent.

We can set the “class_weight” argument to the value “balanced”, which will automatically calculate a class weighting that ensures each class gets an equal weighting during the training of the model.

Tying this together, the complete example is listed below.
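A sketch of such an example is below, differing from the previous one only in the “class_weight” argument.

# evaluate a cost-sensitive random forest with balanced class weighting
from numpy import mean, std
from pandas import read_csv
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import cross_val_score, RepeatedStratifiedKFold
from sklearn.ensemble import RandomForestClassifier
# load the dataset (assumed URL, as above)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/glass.csv'
data = read_csv(url, header=None).values
X, y = data[:, :-1], LabelEncoder().fit_transform(data[:, -1])
# 'balanced' weights each class inversely proportional to its frequency
model = RandomForestClassifier(n_estimators=1000, class_weight='balanced')
# evaluate with repeated stratified k-fold cross-validation
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=3, random_state=1)
scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
print('Mean Accuracy: %.3f (%.3f)' % (mean(scores), std(scores)))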

Running the example reports the mean and standard deviation classification accuracy of the cost-sensitive version of random forest on the glass dataset.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

In this case, we can see that the cost-sensitive model achieved a lift in classification accuracy over the cost-insensitive version of the algorithm, with 80.2 percent classification accuracy vs. 79.6 percent.

The “class_weight” argument takes a dictionary of class labels mapped to a class weighting value.

We can use this to specify a custom weighting, such as a default weighting of 1.0 for classes 0 and 1 that have many examples and a double class weighting of 2.0 for the other classes.

Tying this together, the complete example of using a custom class weighting for cost-sensitive learning on the glass multi-class imbalanced classification problem is listed below.
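A sketch of such an example is below; the custom weights follow the 1.0/2.0 scheme described above.

# evaluate a cost-sensitive random forest with custom class weights
from numpy import mean, std
from pandas import read_csv
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import cross_val_score, RepeatedStratifiedKFold
from sklearn.ensemble import RandomForestClassifier
# load the dataset (assumed URL, as above)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/glass.csv'
data = read_csv(url, header=None).values
X, y = data[:, :-1], LabelEncoder().fit_transform(data[:, -1])
# default weight of 1.0 for the large classes, double weight of 2.0 for the rest
weights = {0: 1.0, 1: 1.0, 2: 2.0, 3: 2.0, 4: 2.0, 5: 2.0}
model = RandomForestClassifier(n_estimators=1000, class_weight=weights)
# evaluate with repeated stratified k-fold cross-validation
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=3, random_state=1)
scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
print('Mean Accuracy: %.3f (%.3f)' % (mean(scores), std(scores)))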

Running the example reports the mean and standard deviation classification accuracy of the cost-sensitive version of random forest on the glass dataset with custom weights.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

In this case, we can see that we achieved a further lift in accuracy from about 80.2 percent with balanced class weighting to 80.8 percent with a more biased class weighting.

Further Reading

This section provides more resources on the topic if you are looking to go deeper.

Related Tutorials

APIs

Summary

In this tutorial, you discovered how to use the tools of imbalanced classification with a multi-class dataset.

Specifically, you learned:

  • About the glass identification standard imbalanced multi-class prediction problem.
  • How to use SMOTE oversampling for imbalanced multi-class classification.
  • How to use cost-sensitive learning for imbalanced multi-class classification.

Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.

Get a Handle on Imbalanced Classification!

Imbalanced Classification with Python

Develop Imbalanced Learning Models in Minutes

...with just a few lines of python code

Discover how in my new Ebook:
Imbalanced Classification with Python

It provides self-study tutorials and end-to-end projects on:
Performance Metrics, Undersampling Methods, SMOTE, Threshold Moving, Probability Calibration, Cost-Sensitive Algorithms
and much more...

Bring Imbalanced Classification Methods to Your Machine Learning Projects

See What's Inside

63 Responses to Multi-Class Imbalanced Classification

  1. Arka August 7, 2020 at 11:11 pm #

    Hello Jason, thanks for the excellent article. My question is to what extent should oversampling be done as a rule of thumb? I have a dataset of 6 classes with the number of examples as following (approx.): [10000, 1000, 12000, 8000, 400, 6000]. So is it okay to oversample the classes with 400, 1000 examples to 12000 level?

    • Jason Brownlee August 8, 2020 at 6:01 am #

      I would suggest you try it – just like any method, then use what works best for your specific dataset.

  2. marco August 8, 2020 at 4:53 am #

    Hello Jason,
    I’ve found a useful map at https://scikit-learn.org/stable/tutorial/machine_learning_map/index.html (but it is only for scikit learn).
    Did you make a map (or mind map) that helps to chose the right classifier/ regressor? (or do you advice where I can find it?)
    Thanks,
    Marco

  3. marco August 9, 2020 at 5:46 am #

    Hello Jasonn,
    I’ve seen it is possible to use XGBoost for Time Series.
    Where also is possibile to apply XGBoost (i.e what are typical applications of XGBoost)?
    Is time serie a kind of regression?
    Thanks,
    Marco

    • Jason Brownlee August 9, 2020 at 5:47 am #

      Yes, time series can be framed as a regression supervised learning problem.

      XGBoost can be used for regression and classification and many prediction tasks can be reduced this problems.

  4. armin September 16, 2020 at 5:42 am #

    Hi.Thank you very much due to your excellent tutorial.I wish the best for you

  5. Saeed Ullah October 1, 2020 at 4:36 am #

    Good morning Sir!
    Hopefully you will be fine with good health. Sir i face a problem in machine learning classifier training and testing and this problem is due to dataset. i use UNSW-NB15 dataset which is attack dataset. there is two feature one is attack category and the other is Label feature. Now i am confuse that how to use this two feature in machine learning classifier as a Label class. Kindly quide me thanks in advance.

  6. jo October 12, 2020 at 8:33 pm #

    Hi Jason,

    Is it possible to apply SMOTE on Multitargets problem?

    • Jason Brownlee October 13, 2020 at 6:34 am #

      Perhaps. I don’t know offhand. Try it and see.

  7. Jan December 9, 2020 at 3:39 am #

    Hi Jason, interesting article. I didn’t find any mentioning about text data. I assume this would work as with any form of vectorised data, right?
    Do you have made any experiences with that so far or would you suggest another lib or approach for imbalanced textual data?

  8. Andreas January 13, 2021 at 1:08 am #

    Hi Jason,

    Very good article!
    Out of your experience, what is better, oversampling or cost-sensitive learning?
    Certainly oversampling gives you control over the sampling method. But beside that?

    Moreover,

    • Jason Brownlee January 13, 2021 at 6:17 am #

      It depends on the dataset, you must discover what works well for your dataset with controlled experiments.

  9. Dipanjan February 3, 2021 at 5:01 pm #

    Great article as always!
    Can you please advise if it is necessary to take care of imbalance if I am modeling intents of text (say 50 intent labels from 16000 texts) where intents have imbalanced frequencies? What happens if I dont treat imbalance?

    • Jason Brownlee February 4, 2021 at 6:15 am #

      It really depends on the specifics of your project and project goals.

      Perhaps try some of the techniques and see if it makes a difference to your model performance for your chosen metric.

  10. Jessy February 13, 2021 at 4:30 am #

    Great article as usual!
    Can we apply the same to ADAYSN


  11. Naresh May 24, 2021 at 12:53 am #

    It was really helpful!

  12. ismael hassane May 27, 2021 at 2:48 pm #

    Hello Jason ,

    I hope your are fine ?
    Thanks you for this article , it is very helpful and clearly simple.
    Actually , i work in ML porject, particularly in NLP project with mutiple class imbalanced but also a small dataset. I have 465 class for only 1050 examples and 3 columns which are a list of few word (not structured sentence or paragraph ). This is very small as is said. Do you know how i can resolve this problem please? . How i can apply smote method for text data ? Do you know an api or library to add external corpus to my corpus ?
    Do you please have some articles or ideas you suggeste me.

    Thanks you a lot in advance

    • Jason Brownlee May 28, 2021 at 6:44 am #

      No, SMOTE is not appropriate for text.

      Perhaps you can explore using a generative model to create more text data, e.g. language models.

  13. ismael hassane June 1, 2021 at 5:51 am #

    Thanks you very lot Jason. I try to regroup all small categories and take high categories frequencie, this give me good accuracy ffor the moment.

  14. Nour July 1, 2021 at 7:17 am #

    smote applied at train set only, but in this article applied on x and y , is this true?

  15. Lu July 6, 2021 at 1:35 pm #

    Hi,Jason, I want to know how to caculate the cost matrix or class_weight in my project?
    And what do you think can I need use the cost_sensitive learning with feature selection before XGboost? I was going to use dataset balanced and feature selection before XGboost.

    Look forward to your answer. Thanks you a lot in advance

  16. Nazim Uddin Niaz August 10, 2021 at 4:33 am #

    is it possible to use any single option like smote to use for both binary and multi-class problems in a single code base?
    like we will give the dataset and algorithm will realize is it binary or multi class problem and then predict according to them.

    • Jason Brownlee August 10, 2021 at 5:32 am #

      Not sure I understand, sorry. Each dataset is a different “project”.

  17. Michael September 20, 2021 at 10:23 pm #

    Hi, Jason, would confusion matrix help to see the comparison of balanced data with SMOTE and imbalanced data? If yes how should I apply the confusion matrix before SMOTE oversampling?

    • Adrian Tam September 21, 2021 at 9:27 am #

      Note that confusion matrix tells how your model output related to the truth in the training data. Therefore, you cannot. You need to apply the data to train one model with SMOTE and one without, and then you get two confusion matrices to compare.

      • Michael September 21, 2021 at 9:56 am #

        Thank you for the reply! how can I get the two confusion matrices to compare? Would there be any tutorials from your post using confusion matrices? : “You need to apply the data to train one model with SMOTE and one without” I would love to try this out on my own and compare two confusion matrices.

      • Michael September 21, 2021 at 2:38 pm #

        Hi Adrian, Sorry for keep asking questions. Do I need to split the data first to use a confusion matrix for this dataset?

        As I’m a beginner with coding I’m not sure how I could split this:

        “# split into input and output elements
        X, y = data[:, :-1], data[:, -1]”

        this part to “confusion_matrix(y_true=y_test, y_pred=y_pred_single)”

        • Adrian Tam September 23, 2021 at 3:01 am #

          If your “y” is entire data, you will need to split it into training and test sets, look for “train_test_split” function in scikit-learn, for example.

          • Michael September 23, 2021 at 12:58 pm #

            Hi Adrian! thanks for the reply again! Would the dataset that Jason used for this article which is the (glass dataset )be able to split as well to train_test_split?

            Regards

          • Adrian Tam September 24, 2021 at 3:12 am #

            Yes, that train_test_split function is quite flexible and powerful.

  18. Ashok September 24, 2021 at 1:03 am #

    Hi @Jason Brownlee – I have a dataset something like below.

    FeatureA Target
    a,b,c,d xyz

    Multiple categorical values in FeatureA with comma separated and I need to predict the Target categorical value.

    I have few questions:
    1. How to plot the graphs on this data?
    2. What is best way to resolve imbalance issues.
    3. How to plot accuracy graphs like CM, F1, Precision and Recall.

    Thanks in Advance.

    • Adrian Tam September 24, 2021 at 4:07 am #

      (1) If you use categorial value to plot categorical value, I think you can simply do the count of a=value1 to xyz=value1, then you get a matrix of count.
      (2) did you tried SMOTE?
      (3) If target is binary, I believe the most common graph is ROC

      • Ashok September 24, 2021 at 6:04 am #

        Thank you so much Adrian for your quick response.

        1. a,b,c,d are in a single feature, not in a different feature. In that case, do I need to separate it as different features?
        2. Yes, I have tried SMOTENC which is for categorical and numerical, but I do not have numerical features in this dataset. Any other suggestions for oversampling the data?
        3. My target is categorical again, not binary.

        Appreciate your advice.

        • Adrian Tam September 24, 2021 at 9:43 am #

          (1) Same. You have 4 different possible values in the feature, and N different class in the target, then you have a table of 4xN
          (2) If you have no numerical values in input, SMOTE is not suitable. Maybe you can just do a bootstrap resampling: https://machinelearningmastery.com/a-gentle-introduction-to-the-bootstrap-method/
          (3) You can consider mean-F1 as the score function

          • Ashok October 2, 2021 at 7:58 am #

            @Adrian,
            Thank you for your suggestion.

            I have used SMOTEN and resampled the imbalanced classes and now train, test, and cross-validation accuracies are high(around 99%), but when I tested with the unseen data most of the times model is unable to predict. Could you pls help me what is the issue with my model training?

            Regrads,
            Ashok.

          • Adrian Tam October 6, 2021 at 7:24 am #

            Unable to predict means predicted the opposite class? I would go back to check your model and how you handled the data in the training. The always high 99% accuracy maybe not a realistic result in most cases.

  19. Tim September 29, 2021 at 5:26 pm #

    Is there a reason for not using any classifier before balancing the data with SMOTE oversampling?

    • Adrian Tam September 30, 2021 at 1:29 am #

      Can you elaborate on how do you think the classifier can be used?

  20. Ebraheem Farea November 17, 2021 at 12:17 am #

    Hi Jason,

    It’s a great article
    Is it possible to apply SMOTE to computer vision systems?

    because when I tried to use SMOTE with images and pass my images dataset to fit_resample(X, y) function it’s not working, an error appeared to me which is ” the fit_resample function Found array with dim 3. Estimator expected <= 2)

    if the fit_resample(X, y) accepts a 3-dimensional array, please tell me how to do it?

    • Adrian Tam November 17, 2021 at 6:58 am #

      SMOTE is about a vector space model, which a data point is represented by a coordinate (x1,x2,…,xn). If your computer vision system can recognize input in this format, I believe SMOTE can work too. But generally converting a 2D pixels (i.e., images) into 1D vector does not work well.

  21. Ebraheem Farea November 27, 2021 at 4:40 am #

    well, that’s clear now.
    thank you so much, sir.

  22. Goe January 28, 2022 at 9:32 pm #

    Hello sir! How do I apply SMOTE to my dataset with 4 classes (none(2552),ischemia(227),both(621),infection(2555))?

    • James Carmichael January 31, 2022 at 11:06 am #

      Hi Goe…Please specify what you are attempting to accomplish as it relates to the code examples provided so that I may better assist you.

      • Goe March 17, 2022 at 4:09 pm #

        Actually i want to apply SMOTE to my image dataset which contains 5955 images with 4 classes (2552, 227, 621, 2555). Could anyone please help me. It would be greatly appreciated!
        I appreciate your help in advance

        • James Carmichael March 20, 2022 at 7:35 am #

          Hi Goe…Have you tried an a model implementation? If so, please any error messages or results that were not expected so that we may better assist you.

  23. Sruthi June 14, 2022 at 4:07 pm #

    Why not use Randon Stratified Sampling instead?

    • James Carmichael June 15, 2022 at 7:21 am #

      Hi Sruthi…That is certainly an option. Feel free to implement it and let us know your findings.

  24. Michele August 1, 2022 at 9:27 am #

    Hi,

    very nice work. Thank you, as usual, for covering so many useful topics in your articles
    I would ask you the following questions: in my dataset the 3-class multiclass target has the class proportions = [ 1, 16, 83] If I try the Cost-Sensitive Learning strategy with class weights end the metrics are good, should I also try SMOTE due to the very low proportion ( 1% ) of the most minority class?

    Thank you

  25. Jess April 7, 2023 at 11:47 am #

    Hi! Thank you so much for the explanation, it helps a lot.

    I’d ask some following questions:

    I am trying to apply cost-sensitive learning to the Random Forest classifier for my multi-class imbalanced dataset. Is the weight on the code is the same with the cost matrix? I can see that you give 1.0 to majority classes and 2.0 to others. Is there any base or references to decide what value of weight should I assign to each class?

    Hope this question finds you well. Thank you so much.

  26. sagar N.R May 27, 2023 at 3:59 pm #

    hi jason your training and testing data’s are same so accuracy could be good . can u please split the data as train and test after prediction show some classification report so that we can find how the models are predicting the other classes

    • James Carmichael May 28, 2023 at 6:06 am #

      Hi Sagar N.R….Thank you for the recommendation!
