How to Develop Super Learner Ensembles in Python


Selecting a machine learning algorithm for a predictive modeling problem involves evaluating many different models and model configurations using k-fold cross-validation.

The super learner is an ensemble machine learning algorithm that combines all of the models and model configurations that you might investigate for a predictive modeling problem and uses them to make a prediction as-good-as or better than any single model that you may have investigated.

The super learner algorithm is an application of stacked generalization, called stacking or blending, to k-fold cross-validation where all models use the same k-fold splits of the data and a meta-model is fit on the out-of-fold predictions from each model.

In this tutorial, you will discover the super learner ensemble machine learning algorithm.

After completing this tutorial, you will know:

  • Super learner is the application of stacked generalization using out-of-fold predictions during k-fold cross-validation.
  • The super learner ensemble algorithm is straightforward to implement in Python using scikit-learn models.
  • The ML-Ensemble (mlens) library provides a convenient implementation that allows the super learner to be fit and used in just a few lines of code.

Let’s get started.

  • Update Jan/2020: Updated for changes in scikit-learn v0.22 API.
How to Develop Super Learner Ensembles in Python. Photo by Mark Gunn, some rights reserved.

Tutorial Overview

This tutorial is divided into three parts; they are:

  1. What Is the Super Learner?
  2. Manually Develop a Super Learner With scikit-learn
  3. Super Learner With ML-Ensemble Library

What Is the Super Learner?

There are many hundreds of models to choose from for a predictive modeling problem; which one is best?

Then, after a model is chosen, how do you best configure it for your specific dataset?

These are open questions in applied machine learning. The best answer we have at the moment is to use empirical experimentation to test and discover what works best for your dataset.

In practice, it is generally impossible to know a priori which learner will perform best for a given prediction problem and data set.

Super Learner, 2007.

This involves selecting many different algorithms that may be appropriate for your regression or classification problem and evaluating their performance on your dataset using a resampling technique, such as k-fold cross-validation.

The algorithm that performs the best on your dataset according to k-fold cross-validation is then selected, fit on all available data, and you can then start using it to make predictions.

There is an alternative approach.

Consider that you have already fit many different algorithms on your dataset, and some algorithms have been evaluated many times with different configurations. You may have many tens or hundreds of different models of your problem. Why not use all those models instead of the best model from the group?

This is the intuition behind the so-called “super learner” ensemble algorithm.

The super learner algorithm involves first pre-defining the k-fold split of your data, then evaluating all of the different algorithms and algorithm configurations on the same splits of the data. All out-of-fold predictions are then kept and used to train a meta-model that learns how to best combine the predictions.

The algorithms may differ in the subset of the covariates used, the basis functions, the loss functions, the searching algorithm, and the range of tuning parameters, among others.

Super Learner In Prediction, 2010.

The results of this model should be no worse than the best-performing model evaluated during k-fold cross-validation, and it has a good chance of performing better than any single model.

The super learner algorithm was proposed by Mark van der Laan, Eric Polley, and Alan Hubbard from Berkeley in their 2007 paper titled “Super Learner.” It was published in a biology-focused journal, which may have kept it from reaching the broader machine learning community.

The super learner technique is an example of the general method called “stacked generalization,” or “stacking” for short, and is known in applied machine learning as blending, as often a linear model is used as the meta-model.

The super learner is related to the stacking algorithm introduced in neural networks context …

Super Learner In Prediction, 2010.

For more on the topic of stacking, see the posts:

We can think of the “super learner” as the application of stacking specifically to k-fold cross-validation.

I have sometimes seen this type of blending ensemble referred to as a cross-validation ensemble.

The procedure can be summarized as follows:

  • 1. Select a k-fold split of the training dataset.
  • 2. Select m base-models or model configurations.
  • 3. For each base-model:
    • a. Evaluate using k-fold cross-validation.
    • b. Store all out-of-fold predictions.
    • c. Fit the model on the full training dataset and store.
  • 4. Fit a meta-model on the out-of-fold predictions.
  • 5. Evaluate the model on a holdout dataset, or use the model to make predictions.

The image below, taken from the original paper, summarizes this data flow.

Diagram Showing the Data Flow of the Super Learner Algorithm
Taken from “Super Learner.”

Let’s take a closer look at some common sticking points you may have with this procedure.

Q. What are the inputs and outputs for the meta-model?

The meta-model takes in predictions from base-models as input and predicts the target for the training dataset as output:

  • Input: Predictions from base-models.
  • Output: Prediction for training dataset.

For example, if we had 50 base-models, then one input sample would be a vector with 50 values, each value in the vector representing a prediction from one of the base-models for one sample of the training dataset.

If we had 1,000 examples (rows) in the training dataset and 50 models, then the input data for the meta-model would be 1,000 rows and 50 columns.
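As a toy illustration of these shapes (the 50-model, 1,000-row figures are just the example numbers from above, and the random values stand in for real out-of-fold predictions):

```python
import numpy as np

n_rows, n_models = 1000, 50

# one column of out-of-fold predictions per base-model
columns = [np.random.rand(n_rows, 1) for _ in range(n_models)]

# horizontally stack the columns into the meta-model input
meta_X = np.hstack(columns)
print(meta_X.shape)  # (1000, 50)
```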

Q. Won’t the meta-model overfit the training data?

Probably not.

This is the trick of the super learner, and the stacked generalization procedure in general.

The input to the meta-model is the out-of-fold (out-of-sample) predictions. In aggregate, the out-of-fold predictions for a model represent the model’s skill or capability in making predictions on data not seen during training.

By training a meta-model on out-of-sample predictions of other models, the meta-model learns how to both correct the out-of-sample predictions for each model and to best combine the out-of-sample predictions from multiple models; actually, it does both tasks at the same time.

Importantly, to get an idea of the true capability of the meta-model, it must be evaluated on new out-of-sample data. That is, data not used to train the base models.

Q. Can this work for regression and classification?

Yes, it was described in the papers for regression (predicting a numerical value).

It can work just as well for classification (predicting a class label), although it is probably best to predict probabilities to give the meta-model more granularity when combining predictions.

Q. Why do we fit each base-model on the entire training dataset?

Each base-model is fit on the entire training dataset so that the model can be used later to make predictions on new examples not seen during training.

Strictly speaking, this step is not required until the super learner is used to make predictions.

Q. How do we make a prediction?

To make a prediction on a new sample (row of data), first, the row of data is provided as input to each base model to generate a prediction from each model.

The predictions from the base-models are then concatenated into a vector and provided as input to the meta-model. The meta-model then makes a final prediction for the row of data.

We can summarize this procedure as follows:

  • 1. Take a sample not seen by the models during training.
  • 2. For each base-model:
    • a. Make a prediction given the sample.
    • b. Store prediction.
  • 3. Concatenate predictions from the base-models into a single vector.
  • 4. Provide vector as input to the meta-model to make a final prediction.

Now that we are familiar with the super learner algorithm, let’s look at a worked example.

Manually Develop a Super Learner With scikit-learn

The Super Learner algorithm is relatively straightforward to implement on top of the scikit-learn Python machine learning library.

In this section, we will develop an example of super learning for both regression and classification that you can adapt to your own problems.

Super Learner for Regression

We will use the make_regression() test problem and generate 1,000 examples (rows) with 100 features (columns). This is a simple regression problem with a linear relationship between input and output, with added noise.

We will split the data so that 50 percent is used for training the model and 50 percent is held back to evaluate the final super model and base-models.
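A minimal sketch of this setup (the noise level and use of train_test_split here are assumptions, not requirements):

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split

# create the inputs and outputs: 1,000 rows, 100 features
X, y = make_regression(n_samples=1000, n_features=100, noise=0.5)

# split into 50 percent train and 50 percent holdout sets
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.50)
print('Train', X_train.shape, y_train.shape, 'Test', X_val.shape, y_val.shape)
```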

Next, we will define a bunch of different regression models.

In this case, we will use nine different algorithms with modest configuration. You can use any models or model configurations you like.

The get_models() function below defines all of the models and returns them as a list.
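One way get_models() might look, using the nine algorithms whose names appear in the reported results later in the post; the specific hyperparameters are assumptions:

```python
from sklearn.linear_model import LinearRegression, ElasticNet
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.ensemble import (AdaBoostRegressor, BaggingRegressor,
                              RandomForestRegressor, ExtraTreesRegressor)

# create a list of base-models with modest configuration
def get_models():
    models = list()
    models.append(LinearRegression())
    models.append(ElasticNet())
    models.append(SVR(gamma='scale'))
    models.append(DecisionTreeRegressor())
    models.append(KNeighborsRegressor())
    models.append(AdaBoostRegressor())
    models.append(BaggingRegressor(n_estimators=10))
    models.append(RandomForestRegressor(n_estimators=10))
    models.append(ExtraTreesRegressor(n_estimators=10))
    return models
```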

Next, we will use k-fold cross-validation to make out-of-fold predictions that will be used as the dataset to train the meta-model or “super learner.”

This involves first splitting the data into k folds; we will use 10. For each fold, we will fit the model on the training part of the split and make out-of-fold predictions on the test part of the split. This is repeated for each model and all out-of-fold predictions are stored.

Each model's out-of-fold predictions will form one column of the meta-model input. For each fold of the data, we will horizontally stack the prediction columns from each algorithm; then, across all folds, we will vertically stack these blocks into one long dataset with 500 rows and nine columns.

The get_out_of_fold_predictions() function below does this for a given test dataset and list of models; it will return the input and output dataset required to train the meta-model.
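A sketch of this function, assuming the 10-fold split described above (demonstrated on a tiny dataset with two stand-in models so the shapes are easy to check):

```python
from numpy import hstack, vstack, asarray
from sklearn.model_selection import KFold
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

# collect out-of-fold predictions to form the meta-model training set
def get_out_of_fold_predictions(X, y, models):
    meta_X, meta_y = list(), list()
    # define the k-fold split
    kfold = KFold(n_splits=10, shuffle=True)
    # enumerate the splits
    for train_ix, test_ix in kfold.split(X):
        fold_yhats = list()
        train_X, test_X = X[train_ix], X[test_ix]
        train_y, test_y = y[train_ix], y[test_ix]
        meta_y.extend(test_y)
        # fit and make out-of-fold predictions with each model
        for model in models:
            model.fit(train_X, train_y)
            yhat = model.predict(test_X)
            # one column per model
            fold_yhats.append(yhat.reshape(len(yhat), 1))
        # stack the columns for this fold side by side
        meta_X.append(hstack(fold_yhats))
    # stack the per-fold blocks into one long dataset
    return vstack(meta_X), asarray(meta_y)

# quick demonstration with two simple models
X, y = make_regression(n_samples=100, n_features=5, noise=0.5)
meta_X, meta_y = get_out_of_fold_predictions(X, y, [LinearRegression(), DecisionTreeRegressor()])
print(meta_X.shape, meta_y.shape)  # (100, 2) (100,)
```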

We can then call the function to get the models and the function to prepare the meta-model dataset.

Next, we can fit all of the base-models on the entire training dataset.

Then, we can fit the meta-model on the prepared dataset.

In this case, we will use a linear regression model as the meta-model, as was used in the original paper.
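The steps above might look as follows; to keep the sketch short, two stand-in models replace the full list of nine, and the dataset parameters are the assumptions noted earlier:

```python
from numpy import hstack, vstack, asarray
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split, KFold
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

def get_models():
    # two models stand in for the full suite of nine
    return [LinearRegression(), DecisionTreeRegressor()]

def get_out_of_fold_predictions(X, y, models):
    meta_X, meta_y = list(), list()
    kfold = KFold(n_splits=10, shuffle=True)
    for train_ix, test_ix in kfold.split(X):
        fold_yhats = list()
        train_X, test_X = X[train_ix], X[test_ix]
        train_y, test_y = y[train_ix], y[test_ix]
        meta_y.extend(test_y)
        for model in models:
            model.fit(train_X, train_y)
            yhat = model.predict(test_X)
            fold_yhats.append(yhat.reshape(len(yhat), 1))
        meta_X.append(hstack(fold_yhats))
    return vstack(meta_X), asarray(meta_y)

# fit all base-models on the full training dataset
def fit_base_models(X, y, models):
    for model in models:
        model.fit(X, y)

# fit a linear regression meta-model on the out-of-fold predictions
def fit_meta_model(X, y):
    model = LinearRegression()
    model.fit(X, y)
    return model

X, y = make_regression(n_samples=1000, n_features=100, noise=0.5)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.50)
models = get_models()
meta_X, meta_y = get_out_of_fold_predictions(X_train, y_train, models)
fit_base_models(X_train, y_train, models)
meta_model = fit_meta_model(meta_X, meta_y)
```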

Next, we can evaluate the base-models on the holdout dataset.

And, finally, use the super learner (base and meta-model) to make predictions on the holdout dataset and evaluate the performance of the approach.

The super_learner_predictions() function below will use the meta-model to make predictions for new data.
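One possible sketch of this function, with a quick usage check appended (the fitted models in the check are placeholders):

```python
from numpy import hstack
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression

# make predictions with the super learner: base-models first, then the meta-model
def super_learner_predictions(X, models, meta_model):
    meta_X = list()
    for model in models:
        yhat = model.predict(X)
        meta_X.append(yhat.reshape(len(yhat), 1))
    meta_X = hstack(meta_X)
    # the meta-model makes the final prediction
    return meta_model.predict(meta_X)

# quick check with two fitted placeholder models
X, y = make_regression(n_samples=50, n_features=5, noise=0.1)
m1 = LinearRegression().fit(X, y)
m2 = LinearRegression().fit(X, y)
meta_input = hstack([m1.predict(X).reshape(-1, 1), m2.predict(X).reshape(-1, 1)])
meta = LinearRegression().fit(meta_input, y)
yhat = super_learner_predictions(X, [m1, m2], meta)
print(yhat.shape)  # (50,)
```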

We can call this function and evaluate the results.

Tying this all together, the complete example of a super learner algorithm for regression using scikit-learn models is listed below.
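A complete end-to-end sketch under the assumptions noted above (noise level, hyperparameters, and use of train_test_split are all assumptions rather than requirements):

```python
# example of a super learner for regression using scikit-learn models
from math import sqrt
from numpy import hstack, vstack, asarray
from sklearn.datasets import make_regression
from sklearn.model_selection import KFold, train_test_split
from sklearn.metrics import mean_squared_error
from sklearn.linear_model import LinearRegression, ElasticNet
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.ensemble import (AdaBoostRegressor, BaggingRegressor,
                              RandomForestRegressor, ExtraTreesRegressor)

# create a list of base-models
def get_models():
    return [LinearRegression(), ElasticNet(), SVR(gamma='scale'),
            DecisionTreeRegressor(), KNeighborsRegressor(),
            AdaBoostRegressor(), BaggingRegressor(n_estimators=10),
            RandomForestRegressor(n_estimators=10),
            ExtraTreesRegressor(n_estimators=10)]

# collect out-of-fold predictions to form the meta-model training set
def get_out_of_fold_predictions(X, y, models):
    meta_X, meta_y = list(), list()
    kfold = KFold(n_splits=10, shuffle=True)
    for train_ix, test_ix in kfold.split(X):
        fold_yhats = list()
        train_X, test_X = X[train_ix], X[test_ix]
        train_y, test_y = y[train_ix], y[test_ix]
        meta_y.extend(test_y)
        for model in models:
            model.fit(train_X, train_y)
            yhat = model.predict(test_X)
            fold_yhats.append(yhat.reshape(len(yhat), 1))
        meta_X.append(hstack(fold_yhats))
    return vstack(meta_X), asarray(meta_y)

# fit all base-models on the training dataset
def fit_base_models(X, y, models):
    for model in models:
        model.fit(X, y)

# fit a linear regression meta-model on the out-of-fold predictions
def fit_meta_model(X, y):
    model = LinearRegression()
    model.fit(X, y)
    return model

# evaluate each base-model on the holdout dataset
def evaluate_models(X, y, models):
    for model in models:
        yhat = model.predict(X)
        mse = mean_squared_error(y, yhat)
        print('%s: RMSE %.3f' % (model.__class__.__name__, sqrt(mse)))

# make predictions with the super learner
def super_learner_predictions(X, models, meta_model):
    meta_X = hstack([m.predict(X).reshape(-1, 1) for m in models])
    return meta_model.predict(meta_X)

# create the dataset and split into train and holdout sets
X, y = make_regression(n_samples=1000, n_features=100, noise=0.5)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.50)
print('Train', X_train.shape, y_train.shape, 'Test', X_val.shape, y_val.shape)
# build the super learner
models = get_models()
meta_X, meta_y = get_out_of_fold_predictions(X_train, y_train, models)
print('Meta', meta_X.shape, meta_y.shape)
fit_base_models(X_train, y_train, models)
meta_model = fit_meta_model(meta_X, meta_y)
# evaluate the base-models and the super learner on the holdout set
evaluate_models(X_val, y_val, models)
yhat = super_learner_predictions(X_val, models, meta_model)
print('Super Learner: RMSE %.3f' % sqrt(mean_squared_error(y_val, yhat)))
```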

Running the example first reports the shape of the prepared dataset, then the shape of the dataset for the meta-model.

Next, the performance of each base-model is reported on the holdout dataset, and finally, the performance of the super learner on the holdout dataset.

Your specific results will differ given the stochastic nature of the dataset and learning algorithms. Try running the example a few times.

In this case, we can see that the linear models perform well on the dataset and the nonlinear algorithms not so well.

We can also see that the super learner out-performed all of the base-models.

You can imagine plugging in all kinds of different models into this example, including XGBoost and Keras deep learning models.

Now that we have seen how to develop a super learner for regression, let’s look at an example for classification.

Super Learner for Classification

The super learner algorithm for classification is much the same.

The inputs to the meta learner can be class labels or class probabilities, with the latter more likely to be useful given the increased granularity or uncertainty captured in the predictions.

In this problem, we will use the make_blobs() test classification problem and use 1,000 examples with 100 input variables and two class labels.
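This could be set up as follows (the cluster_std value controlling class overlap is an assumption):

```python
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split

# binary classification dataset: 1,000 rows, 100 features, 2 centers
X, y = make_blobs(n_samples=1000, centers=2, n_features=100, cluster_std=20)

# split into 50 percent train and 50 percent holdout sets
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.50)
print('Train', X_train.shape, y_train.shape, 'Test', X_val.shape, y_val.shape)
```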

Next, we can change the get_models() function to define a suite of linear and nonlinear classification algorithms.
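One possible suite of linear and nonlinear classifiers (the choice of algorithms and their configurations here are assumptions):

```python
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              RandomForestClassifier, ExtraTreesClassifier)

# create a list of linear and nonlinear base-models for classification
def get_models():
    models = list()
    models.append(LogisticRegression(solver='liblinear'))
    models.append(DecisionTreeClassifier())
    # probability=True is required for predict_proba() with SVC
    models.append(SVC(gamma='scale', probability=True))
    models.append(GaussianNB())
    models.append(KNeighborsClassifier())
    models.append(AdaBoostClassifier())
    models.append(BaggingClassifier(n_estimators=10))
    models.append(RandomForestClassifier(n_estimators=10))
    models.append(ExtraTreesClassifier(n_estimators=10))
    return models
```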

Next, we can change the get_out_of_fold_predictions() function to predict probabilities by a call to the predict_proba() function.
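The change is a one-line swap from predict() to predict_proba(), which means each model now contributes one column per class rather than a single column. A sketch, demonstrated on a small dataset with two stand-in models:

```python
from numpy import hstack, vstack, asarray
from sklearn.model_selection import KFold
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

# out-of-fold *probability* predictions as the meta-model training set
def get_out_of_fold_predictions(X, y, models):
    meta_X, meta_y = list(), list()
    kfold = KFold(n_splits=10, shuffle=True)
    for train_ix, test_ix in kfold.split(X):
        fold_yhats = list()
        train_X, test_X = X[train_ix], X[test_ix]
        train_y, test_y = y[train_ix], y[test_ix]
        meta_y.extend(test_y)
        for model in models:
            model.fit(train_X, train_y)
            # class probabilities instead of crisp labels
            yhat = model.predict_proba(test_X)
            fold_yhats.append(yhat)
        meta_X.append(hstack(fold_yhats))
    return vstack(meta_X), asarray(meta_y)

# each of the two models contributes two columns (one probability per class)
X, y = make_blobs(n_samples=100, centers=2, n_features=5, cluster_std=5)
models = [LogisticRegression(solver='liblinear'), DecisionTreeClassifier()]
meta_X, meta_y = get_out_of_fold_predictions(X, y, models)
print(meta_X.shape)  # (100, 4)
```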

A Logistic Regression algorithm instead of a Linear Regression algorithm will be used as the meta-algorithm in the fit_meta_model() function.

And classification accuracy will be used to report model performance.

The complete example of the super learner algorithm for classification using scikit-learn models is listed below.
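A condensed end-to-end sketch; four classifiers stand in for the full suite, and the dataset parameters are the assumptions noted above:

```python
# example of a super learner for classification using scikit-learn models
from numpy import hstack, vstack, asarray
from sklearn.datasets import make_blobs
from sklearn.model_selection import KFold, train_test_split
from sklearn.metrics import accuracy_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

# create a list of base-models (a subset stands in for a larger suite)
def get_models():
    return [LogisticRegression(solver='liblinear'), DecisionTreeClassifier(),
            GaussianNB(), KNeighborsClassifier()]

# out-of-fold probability predictions as the meta-model training set
def get_out_of_fold_predictions(X, y, models):
    meta_X, meta_y = list(), list()
    kfold = KFold(n_splits=10, shuffle=True)
    for train_ix, test_ix in kfold.split(X):
        fold_yhats = list()
        train_X, test_X = X[train_ix], X[test_ix]
        train_y, test_y = y[train_ix], y[test_ix]
        meta_y.extend(test_y)
        for model in models:
            model.fit(train_X, train_y)
            fold_yhats.append(model.predict_proba(test_X))
        meta_X.append(hstack(fold_yhats))
    return vstack(meta_X), asarray(meta_y)

# fit all base-models on the training dataset
def fit_base_models(X, y, models):
    for model in models:
        model.fit(X, y)

# logistic regression as the meta-model
def fit_meta_model(X, y):
    model = LogisticRegression(solver='liblinear')
    model.fit(X, y)
    return model

# report classification accuracy for each base-model
def evaluate_models(X, y, models):
    for model in models:
        yhat = model.predict(X)
        print('%s: %.3f' % (model.__class__.__name__, accuracy_score(y, yhat) * 100))

# make predictions with the super learner
def super_learner_predictions(X, models, meta_model):
    meta_X = hstack([model.predict_proba(X) for model in models])
    return meta_model.predict(meta_X)

X, y = make_blobs(n_samples=1000, centers=2, n_features=100, cluster_std=20)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.50)
print('Train', X_train.shape, y_train.shape, 'Test', X_val.shape, y_val.shape)
models = get_models()
meta_X, meta_y = get_out_of_fold_predictions(X_train, y_train, models)
print('Meta', meta_X.shape, meta_y.shape)
fit_base_models(X_train, y_train, models)
meta_model = fit_meta_model(meta_X, meta_y)
evaluate_models(X_val, y_val, models)
yhat = super_learner_predictions(X_val, models, meta_model)
print('Super Learner: %.3f' % (accuracy_score(y_val, yhat) * 100))
```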

As before, the shape of the dataset and the prepared meta dataset is reported, followed by the performance of the base-models on the holdout dataset and finally the super model itself on the holdout dataset.

Your specific results will differ given the stochastic nature of the dataset and learning algorithms. Try running the example a few times.

In this case, we can see that the super learner has slightly better performance than the base learner algorithms.

Super Learner With ML-Ensemble Library

Implementing the super learner manually is a good exercise but is not ideal.

We may introduce bugs in the implementation and the example as listed does not make use of multiple cores to speed up the execution.

Thankfully, Sebastian Flennerhag provides an efficient and tested implementation of the Super Learner algorithm and other ensemble algorithms in his ML-Ensemble (mlens) Python library. It is specifically designed to work with scikit-learn models.

First, the library must be installed, which can be achieved via pip, as follows:
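For example:

```shell
pip install mlens
```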

Next, a SuperLearner class can be defined, models added via a call to the add() function, the meta learner added via a call to the add_meta() function, then the model used like any other scikit-learn model.

We can use this class on the regression and classification problems from the previous section.

Super Learner for Regression With the ML-Ensemble Library

First, we can define a function to calculate RMSE for our problem that the super learner can use to evaluate base-models.
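For example:

```python
from math import sqrt
from sklearn.metrics import mean_squared_error

# cost function the super learner will use to evaluate base-models
def rmse(yreal, yhat):
    return sqrt(mean_squared_error(yreal, yhat))

print(rmse([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 0.0
```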

Next, we can configure the SuperLearner with 10-fold cross-validation, our evaluation function, and the use of the entire training dataset when preparing out-of-fold predictions to use as input for the meta-model.

The get_super_learner() function below implements this.

We can then fit the model on the training dataset.

Once fit, we can get a nice report of the performance of each of the base-models on the training dataset using k-fold cross-validation by accessing the “data” attribute on the model.

And that’s all there is to it.

Tying this together, the complete example of evaluating a super learner using the mlens library for regression is listed below.

Running the example first reports the RMSE (the score-m column) for each base-model, then reports the RMSE for the super learner itself.

Fitting and evaluating is very fast given the use of multi-threading in the backend, allowing all of the cores of your machine to be used.

Your specific results will differ given the stochastic nature of the dataset and learning algorithms. Try running the example a few times.

In this case, we can see that the super learner performs well.

Note that we cannot compare the base learner scores in the table to the super learner as the base learners were evaluated on the training dataset only, not the holdout dataset.

Super Learner for Classification With the ML-Ensemble Library

The ML-Ensemble library is also very easy to use for classification problems, following the same general pattern.

In this case, we will use our list of classifier models and a logistic regression model as the meta-model.

The complete example of fitting and evaluating a super learner model for a test classification problem with the mlens library is listed below.

Running the example summarizes the shape of the dataset, the performance of the base-models, and finally the performance of the super learner on the holdout dataset.

Your specific results will differ given the stochastic nature of the dataset and learning algorithms. Try running the example a few times.

Again, we can see that the super learner performs well on this test problem, and more importantly, is fit and evaluated very quickly as compared to the manual example in the previous section.

Further Reading

This section provides more resources on the topic if you are looking to go deeper.

Tutorials

Books

Papers

R Software

Python Software

Summary

In this tutorial, you discovered the super learner ensemble machine learning algorithm.

Specifically, you learned:

  • Super learner is the application of stacked generalization using out-of-fold predictions during k-fold cross-validation.
  • The super learner ensemble algorithm is straightforward to implement in Python using scikit-learn models.
  • The ML-Ensemble (mlens) library provides a convenient implementation that allows the super learner to be fit and used in just a few lines of code.

Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.


27 Responses to How to Develop Super Learner Ensembles in Python

  1. Mark Littlewood December 11, 2019 at 10:46 pm #

    Great article Jason, clear and very useful for me. Does the SuperLearner import split 50% as in your example ?

    • Jason Brownlee December 12, 2019 at 6:22 am #

      Thanks.

      Yes, I split the data 50/50 in the final example.

  2. Pawel December 12, 2019 at 12:49 am #

    Thanks a lot Jason!

    Could you explain how could I make a prediction with already trained super learner model on single row?

    All the best!

    • Jason Brownlee December 12, 2019 at 6:28 am #

      If you are using the custom code, then see the super_learner_predictions() function.

      If you are using the lib, call ensemble.predict(…)

  3. Mark Littlewood December 12, 2019 at 4:11 am #

    Apologies I can see that 0.5 is set, ignore last question

  4. Markus December 12, 2019 at 6:43 am #

    Hi

    Interestingly no matter how many times I run the first complete example, RMSE of LinearRegression is less than Super Learner!

    Following one sample output:

    Train (500, 100) (500,) Test (500, 100) (500,)
    Meta (500, 9) (500,)
    LinearRegression: RMSE 0.562
    ElasticNet: RMSE 67.114
    SVR: RMSE 176.879
    DecisionTreeRegressor: RMSE 162.378
    KNeighborsRegressor: RMSE 156.142
    AdaBoostRegressor: RMSE 103.183
    BaggingRegressor: RMSE 118.581
    RandomForestRegressor: RMSE 121.637
    ExtraTreesRegressor: RMSE 109.636
    Super Learner: RMSE 0.571

    Here another one:
    Train (500, 100) (500,) Test (500, 100) (500,)
    Meta (500, 9) (500,)
    LinearRegression: RMSE 0.509
    ElasticNet: RMSE 64.889
    SVR: RMSE 173.591
    DecisionTreeRegressor: RMSE 169.789
    KNeighborsRegressor: RMSE 155.547
    AdaBoostRegressor: RMSE 96.808
    BaggingRegressor: RMSE 119.754
    RandomForestRegressor: RMSE 112.420
    ExtraTreesRegressor: RMSE 110.969
    Super Learner: RMSE 0.519

    And here the third one:
    Train (500, 100) (500,) Test (500, 100) (500,)
    Meta (500, 9) (500,)
    LinearRegression: RMSE 0.540
    ElasticNet: RMSE 51.105
    SVR: RMSE 137.585
    DecisionTreeRegressor: RMSE 126.397
    KNeighborsRegressor: RMSE 122.300
    AdaBoostRegressor: RMSE 73.785
    BaggingRegressor: RMSE 79.778
    RandomForestRegressor: RMSE 81.047
    ExtraTreesRegressor: RMSE 74.907
    Super Learner: RMSE 0.545

    Thanks!

    • Pawel December 12, 2019 at 6:56 am #

      Had the same issue with classification.
      XGB almost always was better than Super learner 🤔

    • Jason Brownlee December 12, 2019 at 1:40 pm #

      Nice work!

      Yes, the chosen task might be too trivial and the ensemble messes it up.

  5. Mark Littlewood December 12, 2019 at 10:01 am #

    What impact would an unbalanced dataset have on choice of individual learner algorithms and the ability for the super learner to improve

    • Jason Brownlee December 12, 2019 at 1:42 pm #

      Probably predict calibrated probabilities and use a model to best combine those probabilities.

      That is my first off-the-cuff thought.

      Also, use metrics that focus on what’s important, e.g. f-measure or g-mean, etc.

  6. Bartosz December 13, 2019 at 6:49 am #

    Hello,

    I’m a total beginner in coding the ML. However, I know the concepts of modelling.
    Could you throw out the libraries you import in the code because I’m very interested in Super Learner and I would like to use a Spyder IDE to better understand what is happening here.

  7. Carlos December 13, 2019 at 9:26 am #

    Thanks Jason,

    Two questions:

    + Have you tried what could be the performance with H2O’s approach for the same classification example: http://docs.h2o.ai/h2o/latest-stable/h2o-docs/data-science/stacked-ensembles.html

    + And is this issue already covered in any of your books?.

    Also, it could be worth to consider what is the performance of this new Python library AutoViML: https://towardsdatascience.com/why-automl-is-an-essential-new-tool-for-data-scientists-2d9ab4e25e46

    Thanks a lot for everything you share!.
    Carlos.

    • Jason Brownlee December 13, 2019 at 1:41 pm #

      I have not used H20, sorry.

I am not familiar with AutoViML. Thanks for sharing.

    • Ram January 9, 2020 at 8:58 am #

      Carlos
      You might want to take this same data set and try with H2O AutoML and Auto_ViML and compare their performance against the Super Learner. You might want to play back results here.

  8. Justin Mackie December 13, 2019 at 1:26 pm #

    Very lucid explanations, as always! Thank you!

    Readers should note Stacking Classifier and Regressor are new features in scikit-learn 0.22, released December 3, 2019. The link to the Classifier is below.

    https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.StackingClassifier.html

    mlens looks like it has been around for years and is speed and memory optimized. It has some advanced model selection features.

    mlens: https://github.com/flennerhag/mlens

    • Jason Brownlee December 13, 2019 at 1:45 pm #

      Thanks Justin!

      Yes, I hope to write a tutorial on the new stacking api.

  9. Nel December 14, 2019 at 6:17 am #

    Another great article, thanks Jason!
    Could you explain the main difference between this model and the stacking ensemble model (https://machinelearningmastery.com/stacking-ensemble-for-deep-learning-neural-networks/). I mean rather than their different sub-models and meta-models architecture.

    Thanks

    • Jason Brownlee December 14, 2019 at 6:28 am #

      Thanks!

      Not much difference at all. In fact, the super learner is a type of stacking.

      I mention this in the post.

  10. venkatesh siddamsetti December 14, 2019 at 6:47 pm #

    Nice Article Jason.

    I am just a beginner in ensemble learning. In the above example(Classification) did you considered probability predictions of base models as training data to the 2nd level logistic regression? or have you taken direct predictions of base model as training set ?

    • Jason Brownlee December 15, 2019 at 6:04 am #

      Thanks!

      Yes predictions from first level models are fed as input to second level models.

  11. Branda January 11, 2020 at 9:13 am #

    Thank you so much for your contribution,

    I just would like to ask how can I incorporate another model which would not be a machine learning model. This model is very powerful for my problem and I would like to incorporate it into the get_model() part.

    Thank you in advance.

    • Jason Brownlee January 12, 2020 at 7:57 am #

      One approach would be to use a stacking ensemble directly and only take the predictions from your good model and combine them with the meta model.

      Another approach might be to wrap your powerful model in a Classifier class from the scikit-learn library.

  12. Suraj January 16, 2020 at 4:40 am #

    Dear Jason
    thank you so much for your generous work (Blog).

    I am looking forward to understanding the math behind SL. I mean the methodology behind Super Learning. how it starts with the regression dataset and goes with CV and ends with the best-predicted value. If possible, could you enlight me, please?

    regards!
    Suraj

    • Jason Brownlee January 16, 2020 at 6:24 am #

      You’re welcome.

      See the “further reading” section for papers and books.
