
Multi-Label Classification with Deep Learning

Last Updated on August 31, 2020

Multi-label classification involves predicting zero or more class labels.

Unlike normal classification tasks where class labels are mutually exclusive, multi-label classification requires specialized machine learning algorithms that support predicting multiple mutually non-exclusive classes or “labels.”

Deep learning neural networks are an example of an algorithm that natively supports multi-label classification problems. Neural network models for multi-label classification tasks can be easily defined and evaluated using the Keras deep learning library.

In this tutorial, you will discover how to develop deep learning models for multi-label classification.

After completing this tutorial, you will know:

  • Multi-label classification is a predictive modeling task that involves predicting zero or more mutually non-exclusive class labels.
  • Neural network models can be configured for multi-label classification tasks.
  • How to evaluate a neural network for multi-label classification and make a prediction for new data.

Let’s get started.

Multi-Label Classification with Deep Learning
Photo by Trevor Marron, some rights reserved.

Tutorial Overview

This tutorial is divided into three parts; they are:

  • Multi-Label Classification
  • Neural Networks for Multiple Labels
  • Neural Network for Multi-Label Classification

Multi-Label Classification

Classification is a predictive modeling problem that involves outputting a class label given some input.

It is different from regression tasks that involve predicting a numeric value.

Typically, a classification task involves predicting a single label. Alternately, it might involve predicting the likelihood across two or more class labels. In these cases, the classes are mutually exclusive, meaning the classification task assumes that the input belongs to one class only.

Some classification tasks require predicting more than one class label. This means that class membership is not mutually exclusive. These tasks are referred to as multiple label classification, or multi-label classification for short.

In multi-label classification, zero or more labels are required as output for each input sample, and the outputs are required simultaneously. The assumption is that the output labels are a function of the inputs.

We can create a synthetic multi-label classification dataset using the make_multilabel_classification() function in the scikit-learn library.

Our dataset will have 1,000 samples with 10 input features. The dataset will have three class label outputs for each sample, and each class will have one of two values (0 or 1, e.g. present or not present).

The complete example of creating and summarizing the synthetic multi-label classification dataset is listed below.
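
A minimal sketch of that example is below, assuming n_labels=2 (the average number of labels per sample) and a fixed random_state for reproducibility:

# example of creating and summarizing a multi-label classification dataset
from sklearn.datasets import make_multilabel_classification
# define dataset: 1,000 samples, 10 input features, 3 class labels
X, y = make_multilabel_classification(n_samples=1000, n_features=10, n_classes=3, n_labels=2, random_state=1)
# summarize the shape of the input and output elements
print(X.shape, y.shape)
# summarize the first 10 examples
for i in range(10):
    print(X[i], y[i])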

Running the example creates the dataset and summarizes the shape of the input and output elements.

We can see that, as expected, there are 1,000 samples, each with 10 input features and three output features.

The first 10 rows of inputs and outputs are summarized and we can see that all inputs for this dataset are numeric and that output class labels have 0 or 1 values for each of the three class labels.

Next, let’s look at how we can develop neural network models for multi-label classification tasks.

Neural Networks for Multiple Labels

Some machine learning algorithms support multi-label classification natively.

Neural network models can be configured to support multi-label classification and can perform well, depending on the specifics of the classification task.

Multi-label classification can be supported directly by neural networks simply by specifying the number of target labels in the problem as the number of nodes in the output layer. For example, a task with three output labels (classes) will require an output layer with three nodes.

Each node in the output layer must use the sigmoid activation. This will predict a probability of class membership for the label, a value between 0 and 1. Finally, the model must be fit with the binary cross-entropy loss function.

In summary, to configure a neural network model for multi-label classification, the specifics are:

  • Number of nodes in the output layer matches the number of labels.
  • Sigmoid activation for each node in the output layer.
  • Binary cross-entropy loss function.

We can demonstrate this using the Keras deep learning library.

We will define a Multilayer Perceptron (MLP) model for the multi-label classification task defined in the previous section.

Each sample has 10 inputs and three outputs; therefore, the network requires an input layer that expects 10 inputs, specified via the “input_dim” argument on the first hidden layer, and three nodes in the output layer.

We will use the popular ReLU activation function in the hidden layer. The hidden layer has 20 nodes that were chosen after some trial and error. We will fit the model using binary cross-entropy loss and the Adam version of stochastic gradient descent.

The definition of the network for the multi-label classification task is listed below.
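
A sketch of that definition is below, assuming n_inputs and n_outputs hold the number of input and output variables; the 'he_uniform' weight initialization is an assumed choice that pairs well with ReLU:

from keras.models import Sequential
from keras.layers import Dense

# define the model: ReLU hidden layer, sigmoid output per label
model = Sequential()
model.add(Dense(20, input_dim=n_inputs, kernel_initializer='he_uniform', activation='relu'))
model.add(Dense(n_outputs, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam')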

You may want to adapt this model for your own multi-label classification task; therefore, we can create a function to define and return the model where the number of input and output variables is provided as arguments.
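
One way to write such a function, here named get_model() for illustration:

# get a compiled model for a given number of inputs and outputs
def get_model(n_inputs, n_outputs):
    model = Sequential()
    model.add(Dense(20, input_dim=n_inputs, kernel_initializer='he_uniform', activation='relu'))
    model.add(Dense(n_outputs, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam')
    return model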

Now that we are familiar with how to define an MLP for multi-label classification, let’s explore how this model can be evaluated.

Neural Network for Multi-Label Classification

If the dataset is small, it is good practice to evaluate neural network models repeatedly on the same dataset and report the mean performance across the repeats.

This is because of the stochastic nature of the learning algorithm.

Additionally, it is good practice to use k-fold cross-validation instead of train/test splits of a dataset to get an unbiased estimate of model performance when making predictions on new data. Again, this is practical only if there is not too much data, so that the process can be completed in a reasonable time.

Taking this into account, we will evaluate the MLP model on the multi-label classification task using repeated k-fold cross-validation with 10 folds and three repeats.

The MLP model will predict the probability for each class label by default. This means it will predict three probabilities for each sample. These can be converted to crisp class labels by rounding the values to either 0 or 1. We can then calculate the classification accuracy for the crisp class labels.

The scores are collected and can be summarized by reporting the mean and standard deviation across all repeats and cross-validation folds.

The evaluate_model() function below takes the dataset, evaluates the model, and returns a list of evaluation scores, in this case, accuracy scores.
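
A sketch of that function is below. It assumes the get_model() helper above, a fixed random_state for the cross-validation splits, and 100 training epochs (an assumed value):

from sklearn.model_selection import RepeatedKFold
from sklearn.metrics import accuracy_score

# evaluate a model using repeated k-fold cross-validation
def evaluate_model(X, y):
    results = list()
    n_inputs, n_outputs = X.shape[1], y.shape[1]
    # define the evaluation procedure: 10 folds, 3 repeats
    cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=1)
    for train_ix, test_ix in cv.split(X):
        # prepare the data for this fold
        X_train, X_test = X[train_ix], X[test_ix]
        y_train, y_test = y[train_ix], y[test_ix]
        # define and fit the model
        model = get_model(n_inputs, n_outputs)
        model.fit(X_train, y_train, verbose=0, epochs=100)
        # predict probabilities and round to crisp class labels
        yhat = model.predict(X_test)
        yhat = yhat.round()
        # calculate accuracy for this fold and store it
        acc = accuracy_score(y_test, yhat)
        results.append(acc)
    return results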

We can then load our dataset and evaluate the model and report the mean performance.

Tying this together, the complete example is listed below.
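
A complete sketch combining the pieces above, under the same assumptions (get_dataset(), get_model(), and evaluate_model() as illustrative helper names, 100 training epochs, fixed seeds):

# mlp for multi-label classification, evaluated with repeated k-fold cross-validation
from numpy import mean, std
from sklearn.datasets import make_multilabel_classification
from sklearn.model_selection import RepeatedKFold
from sklearn.metrics import accuracy_score
from keras.models import Sequential
from keras.layers import Dense

# get the synthetic multi-label dataset
def get_dataset():
    X, y = make_multilabel_classification(n_samples=1000, n_features=10, n_classes=3, n_labels=2, random_state=1)
    return X, y

# get a compiled model for a given number of inputs and outputs
def get_model(n_inputs, n_outputs):
    model = Sequential()
    model.add(Dense(20, input_dim=n_inputs, kernel_initializer='he_uniform', activation='relu'))
    model.add(Dense(n_outputs, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam')
    return model

# evaluate a model using repeated k-fold cross-validation
def evaluate_model(X, y):
    results = list()
    n_inputs, n_outputs = X.shape[1], y.shape[1]
    cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=1)
    for train_ix, test_ix in cv.split(X):
        X_train, X_test = X[train_ix], X[test_ix]
        y_train, y_test = y[train_ix], y[test_ix]
        model = get_model(n_inputs, n_outputs)
        model.fit(X_train, y_train, verbose=0, epochs=100)
        # round predicted probabilities to crisp class labels
        yhat = model.predict(X_test).round()
        results.append(accuracy_score(y_test, yhat))
    return results

# load dataset, evaluate the model, and report mean and standard deviation of accuracy
X, y = get_dataset()
results = evaluate_model(X, y)
print('Accuracy: %.3f (%.3f)' % (mean(results), std(results)))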

Running the example reports the classification accuracy for each fold and each repeat, to give an idea of the evaluation progress.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

At the end, the mean and standard deviation accuracy is reported. In this case, the model is shown to achieve an accuracy of about 81.2 percent.

You can use this code as a template for evaluating MLP models on your own multi-label classification tasks. The number of nodes and layers in the model can easily be adapted and tailored to the complexity of your dataset.

Once a model configuration is chosen, we can use it to fit a final model on all available data and make a prediction for new data.

The example below demonstrates this by first fitting the MLP model on the entire multi-label classification dataset, then calling the predict() function on the fit model in order to make a prediction for a new row of data.
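
A sketch of that final step is below; the new row of 10 input values passed to predict() is made up for illustration:

# fit the mlp on all data and make a prediction for a new row
from numpy import asarray
from sklearn.datasets import make_multilabel_classification
from keras.models import Sequential
from keras.layers import Dense

# generate the dataset described above
X, y = make_multilabel_classification(n_samples=1000, n_features=10, n_classes=3, n_labels=2, random_state=1)
n_inputs, n_outputs = X.shape[1], y.shape[1]
# define the model
model = Sequential()
model.add(Dense(20, input_dim=n_inputs, kernel_initializer='he_uniform', activation='relu'))
model.add(Dense(n_outputs, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam')
# fit the model on all available data
model.fit(X, y, verbose=0, epochs=100)
# make a prediction for a new (made-up) row of data
row = [3, 3, 6, 7, 8, 2, 11, 11, 1, 3]
yhat = model.predict(asarray([row]))
print('Predicted: %s' % yhat[0])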

Running the example fits the model and makes a prediction for a new row. As expected, the prediction contains three output variables required for the multi-label classification task: the probabilities of each class label.

Further Reading

This section provides more resources on the topic if you are looking to go deeper.

Summary

In this tutorial, you discovered how to develop deep learning models for multi-label classification.

Specifically, you learned:

  • Multi-label classification is a predictive modeling task that involves predicting zero or more mutually non-exclusive class labels.
  • Neural network models can be configured for multi-label classification tasks.
  • How to evaluate a neural network for multi-label classification and make a prediction for new data.

Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.


67 Responses to Multi-Label Classification with Deep Learning

  1. pratyush August 31, 2020 at 11:32 am #

    Hi Jason,

    How do we deal with it if one or more labels are heavily imbalanced?
    We might have high accuracy, but the F1 score would be low for those labels.

  2. Pathak Machine August 31, 2020 at 10:24 pm #

    Hello,

    This is a really informative article; thanks for sharing this useful post.

  3. SWM September 1, 2020 at 1:01 am #

    Sir you are a gem! Great article!

  4. David Rodriguez September 2, 2020 at 10:57 am #

    For larger datasets where we would use train/test/validate, would we still use average accuracy to evaluate?

    • Jason Brownlee September 2, 2020 at 1:29 pm #

      No, you would estimate directly from your test set.

  5. MEN September 3, 2020 at 12:37 am #

    Hi Jason,
    Very well explained.
    Thank you

  6. Ryan Wyngard September 4, 2020 at 3:25 pm #

    Jason I appreciate your tutorials

  7. Berns Buenaobra September 4, 2020 at 3:44 pm #

    Had some trouble installing them on Anaconda; my solution is the instruction on the Autokeras website. To install the package, please use the pip installation as follows:

    pip3 install git+https://github.com/keras-team/keras-tuner.git@1.0.2rc1
    pip3 install autokeras

    Thanks for this, Doc Jason. Will buy the book over the weekend on a student discount.

  8. Saad September 4, 2020 at 9:17 pm #

    Hi Jason,
    Can Random Forest or XGBoost be used for a similar problem of multi-label classification? How viable would that approach be? And are you planning to do any such article in the near future?

    • Jason Brownlee September 5, 2020 at 6:46 am #

      I’m not sure off the cuff, sorry. Perhaps try it and see.

  9. pradeep September 9, 2020 at 5:28 am #

    This is really an awesome tutorial.
    The hidden layer has 20 nodes. Is there any particular logic to choosing the number of nodes based on the number of input dimensions, which is 10 in this case?

    • Jason Brownlee September 9, 2020 at 6:53 am #

      Thanks!

      No, it was configured via trial and error.

  10. George September 16, 2020 at 10:56 am #

    Hi Jason,
    In multi-label classification, if the prediction gives us

    array([[0.4774732 , 0.04919493, 0.47333184]], dtype=float32)

    and the data has 3 classes, say ‘0’, ‘1’, ‘2’,

    how do we know which probability belongs to which class?
    Thank You

  11. George September 16, 2020 at 5:14 pm #

    Because model.predict_classes() uses argmax, it returns the max-probability class

    • Jason Brownlee September 17, 2020 at 6:41 am #

      Yes.

      predict_classes() does an argmax for you.

  12. Anthony The Koala September 18, 2020 at 6:23 pm #

    Dear Dr Jason,
    The predicted yhat[0].round() returns

    * Is the above example predicting multi-variate output?

    Thank you,
    Anthony of Sydney

    • Jason Brownlee September 19, 2020 at 6:50 am #

      A multi-label output. I guess you could call it multivariate, but I would not use that phrase.

      • Anthony The Koala September 19, 2020 at 2:36 pm #

        Dear Dr Jason,
        Thank you,
        Anthony of Sydney

      • Anthony The Koala September 19, 2020 at 8:05 pm #

        Dear Dr Jason,
        Given that there are multi-label outputs consisting of only 0 or 1, are there multi-label regression models whose outputs belong to the set of integers?

        By the set of integers I mean numbers like 0, 1, 2, 3, 4, NOT 0, 1.39, 2.141, 3.142, 4.23. I mean multi-label integer output categories.

        Thank you,
        Anthony of Sydney

        • Jason Brownlee September 20, 2020 at 6:44 am #

          If you require a model to predict integers, you can round/scale the output of a model to meet your needs.

          • Anthony The Koala September 20, 2020 at 11:53 am #

            Dear Dr Jason,
            Thank you for your reply.
            Could you elaborate on what kind of multivariate Y models I could generate, please?
            Thank you,
            Anthony of Sydney

          • Jason Brownlee September 20, 2020 at 1:35 pm #

            An MLP for regression can output real values in 0-1 that can be scaled by the desired range and rounded.

            Target values in the training data must be prepared in the same way.

  13. Anthony The Koala September 18, 2020 at 10:53 pm #

    Dear Dr Jason,
    In the #define model

    * We know that ’20’ means 20 neurons in the first hidden layer.
    * Is there a rule of thumb to determine how many neurons are in the hidden layer?

    Thank you,
    Anthony of Sydney

      • Anthony The Koala September 19, 2020 at 2:41 pm #

        Dear Dr Jason,
        Thank you for the reply directing me to the FAQ on adding layers and nodes.

        I have a further question not addressed by the FAQ.

        Is it possible that even if you add layers and nodes, an evaluation or accuracy score could get worse?

        Put another way, could an evaluation or accuracy score peak as you add more layers and/or nodes, and then drop as you add even more?

        Thank you,
        Anthony of Sydney

        • Jason Brownlee September 20, 2020 at 6:39 am #

          Yes, it is common that adding layers and nodes results in worse performance at some point, particularly if you do not also tune the hyperparameters for the learning algorithm.

  14. Ahmed September 21, 2020 at 1:46 pm #

    Hi Dr Jason,
    Very well explained, thank you.
    Please, I have a multi-label classification dataset with a large number of labels, equal to 1,385.
    When I use this model on my dataset, the accuracy on the training data is 15 and the accuracy on the testing data is zero.
    What can I do to deal with my multi-label dataset with this number of labels?
    Thanks a lot.

  15. PANKAJ PATIL October 4, 2020 at 9:18 pm #

    hi Jason,
    very well explained!
    Just wanted to understand how to achieve following –
    my data looks like below

    Order No | Item ID | Item Type | Box Size
    X        | A       | APP       | C1
             | B       | APP       | C2
             | C       | FTW       | C3
             | D       | FTW       |
    Y        | B       | HAZ       | C1
             | C       | FTW       | C2
             | E       | APP       | C3
    Basically, I have orders which can contain multiple products. The products in one order can be grouped into one or multiple boxes, based on certain parameters. My algorithm should be able to predict which products can go into what size of box based on historical data. Is there any way we can achieve this?

  16. MS October 17, 2020 at 12:05 am #

    Jason,
    Take the example of a binary classification problem of whether an image has a human in it or not. Here the outputs are obviously mutually exclusive. Then why do we use sigmoid functions for such a problem?

    • MS October 17, 2020 at 12:09 am #

      Aren’t sigmoid and softmax the same for binary classification problems?

      • Jason Brownlee October 17, 2020 at 6:05 am #

        Sigmoid is for binary classification.
        Softmax is for multi-class classification.

        Here we are doing “multi-label” classification, which is like multiple binary classification problems. So we use sigmoid.

        • MS October 17, 2020 at 10:30 pm #

          The problem that I stated here has outputs that are mutually exclusive, i.e. in an image we either have a human or not. Then why in this binary problem am I supposed to use sigmoid instead of softmax?

          • Jason Brownlee October 18, 2020 at 6:08 am #

            Sorry, I don’t understand your question. Could you please rephrase or elaborate?

    • Jason Brownlee October 17, 2020 at 6:04 am #

      Classification labels are mutually exclusive.

      In multi-label classification, they are not mutually exclusive.

      See this:
      https://machinelearningmastery.com/types-of-classification-in-machine-learning/

      • MS October 19, 2020 at 9:17 pm #

        What I’m trying to ask is this: for binary classification problems, like whether there is a human in an image or not, the outputs are mutually exclusive. One can either have a human or not. Why are we using sigmoid here? Sigmoid outputs don’t sum to one, hence P(human in the image) + P(no human in the image) does not equal 1. Perhaps use softmax instead for binary classification problems?

        • Jason Brownlee October 20, 2020 at 6:25 am #

          Your problem may be a mutually exclusive classification problem.

          We use sigmoid above because it is a different kind of classification problem where labels are NOT mutually exclusive.

          Is that clearer?

          • MS October 27, 2020 at 11:24 pm #

            Then for binary classification problems where labels are mutually exclusive we should use softmax instead of sigmoid?

          • Jason Brownlee October 28, 2020 at 6:45 am #

            No, mutually exclusive binary classification has a single output and should use sigmoid.

            Softmax is only used for more than two mutually exclusive classes.

  17. MS October 28, 2020 at 11:33 pm #

    thanks Jason

  18. Bob Bee November 6, 2020 at 10:30 am #

    Very nice tutorial. Thank you. I ran it and did some experimenting with modifications; weighted F1 was better than accuracy on my mods (double the features and classes; I’m looking to go to 10x the classes). Any suggestions for when the classes are not all independent?

    • Jason Brownlee November 6, 2020 at 1:13 pm #

      Nice work!

      It might be worth reviewing the literature to see how vast numbers of labels are supported.

      If they are not independent, then perhaps a multi-pass/hierarchical approach can be used.

  19. SS November 6, 2020 at 11:51 am #

    Hi Jason,
    In multi-label classification, if the prediction gives us

    array([[0.4774732 , 0.04919493, 0.47333184]], dtype=float32)

    and the data has 3 classes, say ‘credit’, ‘debit’, ‘loan’,

    how do we know which probability belongs to which class?
    Thank You

    • Jason Brownlee November 6, 2020 at 1:15 pm #

      Multi-label classes are not mutually exclusive – meaning we can apply multiple labels to a given example.

      The cut-off is 0.5. You can call model.predict_classes() to get labels instead of probabilities or call round().

  20. Paula G November 16, 2020 at 5:10 am #

    Hi Jason, in this tutorial is it possible to use time series as a data set to do a classification for future time steps? I mean, is there any other consideration to take into account?

  21. Vlad November 17, 2020 at 8:21 pm #

    Dear Jason,
    Thank you for this tutorial. Could you please give advice on how to deal with partial/missing labels of training samples?

    • Jason Brownlee November 18, 2020 at 6:39 am #

      Examples with missing labels/targets cannot be used for training.

  22. DAni November 23, 2020 at 5:18 pm #

    Hi, can you help me with my research problem? I’m doing multi-label classification on research papers’ (RP) textual content. The RP content is very large and there are many classes in which an RP can lie. Can you suggest a model for a large-scale multi-label textual content classification problem?

    • Jason Brownlee November 24, 2020 at 6:18 am #

      I would expect a deep learning language model would play an important part in this application.
