How to Develop a Weighted Average Ensemble for Deep Learning Neural Networks

A model averaging ensemble combines the predictions from each model equally and often results in better performance on average than a given single model.

Sometimes there are very good models that we would like to contribute more to an ensemble prediction, and perhaps less skillful models that may be useful but should contribute less. A weighted average ensemble is an approach that allows multiple models to contribute to a prediction in proportion to their trust or estimated performance.

In this tutorial, you will discover how to develop a weighted average ensemble of deep learning neural network models in Python with Keras.

After completing this tutorial, you will know:

  • Model averaging ensembles are limited because they require that each ensemble member contribute equally to predictions.
  • Weighted average ensembles allow the contribution of each ensemble member to a prediction to be weighted proportionally to the trust or performance of the member on a holdout dataset.
  • How to implement a weighted average ensemble in Keras and compare results to a model averaging ensemble and standalone models.

Discover how to train faster, reduce overfitting, and make better predictions with deep learning models in my new book, with 26 step-by-step tutorials and full source code.

Let’s get started.

How to Develop a Weighted Average Ensemble for Deep Learning Neural Networks
Photo by Simon Matzinger, some rights reserved.

Tutorial Overview

This tutorial is divided into six parts; they are:

  1. Weighted Average Ensemble
  2. Multi-Class Classification Problem
  3. Multilayer Perceptron Model
  4. Model Averaging Ensemble
  5. Grid Search Weighted Average Ensemble
  6. Optimized Weighted Average Ensemble

Weighted Average Ensemble

Model averaging is an approach to ensemble learning where each ensemble member contributes an equal amount to the final prediction.

In the case of regression, the ensemble prediction is calculated as the average of the member predictions. In the case of predicting a class label, the prediction is calculated as the mode of the member predictions. In the case of predicting a class probability, the prediction can be calculated as the argmax of the summed probabilities for each class label.

A limitation of this approach is that each model has an equal contribution to the final prediction made by the ensemble. There is a requirement that all ensemble members have skill as compared to random chance, although some models are known to perform much better or much worse than other models.

A weighted ensemble is an extension of a model averaging ensemble where the contribution of each member to the final prediction is weighted by the performance of the model.

The model weights are small positive values and the sum of all weights equals one, allowing the weights to indicate the percentage of trust or expected performance from each model.

One can think of the weight Wk as the belief in predictor k and we therefore constrain the weights to be positive and sum to one.

Learning with ensembles: How over-fitting can be useful, 1996.

Uniform values for the weights (e.g. 1/k, where k is the number of ensemble members) mean that the weighted ensemble acts as a simple averaging ensemble. There is no analytical solution for finding the weights (we cannot calculate them); instead, the value for the weights can be estimated using either the training dataset or a holdout validation dataset.

Finding the weights using the same training set used to fit the ensemble members will likely result in an overfit model. A more robust approach is to use a holdout validation dataset unseen by the ensemble members during training.

The simplest, perhaps most exhaustive, approach would be to grid search weight values between 0 and 1 for each ensemble member. Alternatively, an optimization procedure such as a linear solver or gradient descent optimization can be used to estimate the weights using a unit norm weight constraint to ensure that the vector of weights sums to one.

Unless the holdout validation dataset is large and representative, a weighted ensemble has an opportunity to overfit as compared to a simple averaging ensemble.

A simple alternative that gives more weight to a given model, without calculating explicit weight coefficients, is to add that model to the ensemble more than once. Although less flexible, this allows a given well-performing model to contribute more than once to a given prediction made by the ensemble.

Multi-Class Classification Problem

We will use a small multi-class classification problem as the basis to demonstrate the weighted averaging ensemble.

The scikit-learn library provides the make_blobs() function that can be used to create a multi-class classification problem with the prescribed number of samples, input variables, classes, and variance of samples within a class.

The problem has two input variables (representing the x and y coordinates of the points), three classes, and a standard deviation of 2.0 for points within each group. We will use the same random state (seed for the pseudorandom number generator) to ensure that we always get the same data points.

The results are the input and output elements of a dataset that we can model.

In order to get a feeling for the complexity of the problem, we can plot each point on a two-dimensional scatter plot and color each point by class value.

The complete example is listed below.
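A minimal sketch of such a script is shown below; the sample count and random_state value here are assumptions (any fixed seed gives repeatable points).

# scatter plot of the blobs dataset, points colored by class value (sketch)
from sklearn.datasets import make_blobs
from numpy import where
from matplotlib import pyplot

# generate a 2d classification dataset with three classes
X, y = make_blobs(n_samples=1000, centers=3, n_features=2, cluster_std=2, random_state=2)
# plot the points for each class in a different color
for class_value in range(3):
    # indices of points with this class label
    row_ix = where(y == class_value)[0]
    pyplot.scatter(X[row_ix, 0], X[row_ix, 1])
pyplot.show()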

Running the example creates a scatter plot of the entire dataset. We can see that the standard deviation of 2.0 means that the classes are not linearly separable (separable by a line) causing many ambiguous points.

This is desirable as it means that the problem is non-trivial and will allow a neural network model to find many different “good enough” candidate solutions resulting in a high variance.

Scatter Plot of Blobs Dataset With Three Classes and Points Colored by Class Value

Multilayer Perceptron Model

Before we define a model, we need to contrive a problem that is appropriate for the weighted average ensemble.

In our problem, the training dataset is relatively small. Specifically, there is a 10:1 ratio of examples in the holdout dataset to the training dataset. This mimics a situation where we may have a vast number of unlabeled examples and a small number of labeled examples with which to train a model.

We will create 1,100 data points from the blobs problem. The model will be trained on the first 100 points and the remaining 1,000 will be held back in a test dataset, unavailable to the model.

The problem is a multi-class classification problem, and we will model it using a softmax activation function on the output layer. This means that the model will predict a vector with three elements with the probability that the sample belongs to each of the three classes. Therefore, we must one hot encode the class values before we split the rows into the train and test datasets. We can do this using the Keras to_categorical() function.
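For example, the dataset preparation might look like the following sketch; the random_state value is an assumption.

# generate and prepare the blobs dataset (sketch)
from sklearn.datasets import make_blobs
from keras.utils import to_categorical

# generate 1,100 samples with 2 inputs and 3 classes
X, y = make_blobs(n_samples=1100, centers=3, n_features=2, cluster_std=2, random_state=2)
# one hot encode the class values
y = to_categorical(y)
# first 100 examples for training, the remaining 1,000 held back for testing
n_train = 100
trainX, testX = X[:n_train, :], X[n_train:, :]
trainy, testy = y[:n_train], y[n_train:]
print(trainX.shape, testX.shape)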

Next, we can define and compile the model.

The model will expect samples with two input variables. The model then has a single hidden layer with 25 nodes and a rectified linear activation function, then an output layer with three nodes to predict the probability of each of the three classes and a softmax activation function.

Because the problem is multi-class, we will use the categorical cross entropy loss function to optimize the model and the efficient Adam flavor of stochastic gradient descent.
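For example, a sketch of the model definition matching the description above might be:

# define and compile the MLP (sketch)
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
# single hidden layer with 25 nodes and a rectified linear activation
model.add(Dense(25, input_dim=2, activation='relu'))
# output layer with one node per class and a softmax activation
model.add(Dense(3, activation='softmax'))
# categorical cross entropy loss optimized with Adam
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])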

The model is fit for 500 training epochs and we will evaluate the model each epoch on the test set, using the test set as a validation set.

At the end of the run, we will evaluate the performance of the model on the train and test sets.

Then finally, we will plot learning curves of the model accuracy over each training epoch on both the training and validation datasets.

Tying all of this together, the complete example is listed below.
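A sketch of such a complete script is shown below; the random_state value and the history dictionary keys ('acc'/'val_acc', which are named 'accuracy'/'val_accuracy' in newer Keras versions) are assumptions.

# develop an mlp for the blobs dataset (sketch of the complete example)
from sklearn.datasets import make_blobs
from keras.utils import to_categorical
from keras.models import Sequential
from keras.layers import Dense
from matplotlib import pyplot

# generate 2d classification dataset
X, y = make_blobs(n_samples=1100, centers=3, n_features=2, cluster_std=2, random_state=2)
# one hot encode output variable
y = to_categorical(y)
# split into a small train set and a larger test set
n_train = 100
trainX, testX = X[:n_train, :], X[n_train:, :]
trainy, testy = y[:n_train], y[n_train:]
print(trainX.shape, testX.shape)
# define the model
model = Sequential()
model.add(Dense(25, input_dim=2, activation='relu'))
model.add(Dense(3, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# fit the model, evaluating on the test set each epoch
history = model.fit(trainX, trainy, validation_data=(testX, testy), epochs=500, verbose=0)
# evaluate the final model on the train and test sets
_, train_acc = model.evaluate(trainX, trainy, verbose=0)
_, test_acc = model.evaluate(testX, testy, verbose=0)
print('Train: %.3f, Test: %.3f' % (train_acc, test_acc))
# plot learning curves of model accuracy
# note: newer Keras versions use 'accuracy'/'val_accuracy' keys
pyplot.plot(history.history['acc'], label='train')
pyplot.plot(history.history['val_acc'], label='test')
pyplot.legend()
pyplot.show()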

Running the example first prints the shape of each dataset for confirmation, then the performance of the final model on the train and test datasets.

Your specific results will vary (by design!) given the high variance nature of the model.

In this case, we can see that the model achieved about 87% accuracy on the training dataset, which we know is optimistic, and about 81% on the test dataset, which we would expect to be more realistic.

A line plot is also created showing the learning curves for the model accuracy on the train and test sets over each training epoch.

We can see that training accuracy is more optimistic over most of the run as we also noted with the final scores.

Line Plot Learning Curves of Model Accuracy on Train and Test Dataset over Each Training Epoch

Now that we have identified that the model is a good candidate for developing an ensemble, we can next look at developing a simple model averaging ensemble.

Model Averaging Ensemble

We can develop a simple model averaging ensemble before we look at developing a weighted average ensemble.

The results of the model averaging ensemble can be used as a point of comparison as we would expect a well configured weighted average ensemble to perform better.

First, we need to fit multiple models from which to develop an ensemble. We will define a function named fit_model() to create and fit a single model on the training dataset that we can call repeatedly to create as many models as we wish.

We can call this function to create a pool of 10 models.
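For example, a sketch of fit_model() and the model pool (assuming trainX and trainy from the dataset preparation described above) might be:

# fit a pool of MLP models on the training dataset (sketch)
from keras.models import Sequential
from keras.layers import Dense

def fit_model(trainX, trainy):
    # define the same architecture used for the standalone model
    model = Sequential()
    model.add(Dense(25, input_dim=2, activation='relu'))
    model.add(Dense(3, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    # fit the model on the small training set
    model.fit(trainX, trainy, epochs=500, verbose=0)
    return model

# create a pool of 10 models
members = [fit_model(trainX, trainy) for _ in range(10)]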

Next, we can develop a model averaging ensemble.

We don’t know how many members would be appropriate for this problem, so we can create ensembles with different sizes from one to 10 members and evaluate the performance of each on the test set.

We can also evaluate the performance of each standalone model on the test set. This provides a useful point of comparison for the model averaging ensemble, as we expect that the ensemble will out-perform a randomly selected single model on average.

Each model predicts the probabilities for each class label, i.e. it has three outputs. A single prediction can be converted to a class label by using the argmax() function on the predicted probabilities, i.e. returning the index in the prediction with the largest probability value. We can ensemble the predictions from multiple models by summing the probabilities for each class prediction and using the argmax() on the result. The ensemble_predictions() function below implements this behavior.
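A minimal sketch of this function might look like the following.

# make an ensemble prediction for multi-class classification (sketch)
import numpy as np

def ensemble_predictions(members, testX):
    # collect predicted probabilities from each member: (n_members, n_samples, n_classes)
    yhats = np.array([model.predict(testX) for model in members])
    # sum the class probabilities across ensemble members
    summed = np.sum(yhats, axis=0)
    # argmax across classes gives the ensemble class label
    return np.argmax(summed, axis=1)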

We can estimate the performance of an ensemble of a given size by selecting the required number of models from the list of all models, calling the ensemble_predictions() function to make a prediction, then calculating the accuracy of the prediction by comparing it to the true values. The evaluate_n_members() function below implements this behavior.
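A sketch of this function might be as follows; it assumes testy is one hot encoded (as prepared earlier), so the true class labels are recovered with argmax().

# evaluate an ensemble of a specific number of members (sketch)
from sklearn.metrics import accuracy_score
import numpy as np

def evaluate_n_members(members, n_members, testX, testy):
    # select the first n_members models
    subset = members[:n_members]
    # make an ensemble prediction with the subset
    yhat = ensemble_predictions(subset, testX)
    # calculate accuracy against the true class labels
    return accuracy_score(np.argmax(testy, axis=1), yhat)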

The scores of the ensembles of each size can be stored to be plotted later, and the scores for each individual model are collected and the average performance reported.

Finally, we create a graph that shows the accuracy of each individual model (blue dots) and the performance of the model averaging ensemble as the number of members is increased from one to 10 members (orange line).
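For example, the evaluation loop and the final plot might look like the following sketch, assuming the members list and the functions defined above.

# evaluate standalone models and ensembles of increasing size (sketch)
from matplotlib import pyplot
import numpy as np

single_scores, ensemble_scores = list(), list()
for i in range(1, len(members)+1):
    # accuracy of an ensemble with the first i members
    ensemble_score = evaluate_n_members(members, i, testX, testy)
    # accuracy of the i-th model on its own
    _, single_score = members[i-1].evaluate(testX, testy, verbose=0)
    print('> %d: single=%.3f, ensemble=%.3f' % (i, single_score, ensemble_score))
    ensemble_scores.append(ensemble_score)
    single_scores.append(single_score)
# summarize the average accuracy of the single models
print('Accuracy %.3f (%.3f)' % (np.mean(single_scores), np.std(single_scores)))
# single model scores as blue dots, ensemble scores as an orange line
x_axis = [i for i in range(1, len(members)+1)]
pyplot.plot(x_axis, single_scores, marker='o', linestyle='None')
pyplot.plot(x_axis, ensemble_scores, marker='o')
pyplot.show()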

Tying all of this together, the complete example is listed below.

Running the example first reports the performance of each single model as well as the model averaging ensemble of a given size with 1, 2, 3, etc. members.

Your results will vary given the stochastic nature of the training algorithm.

On this run, the average performance of the single models is reported at about 80.4% and we can see that an ensemble with between five and nine members will achieve a performance between 80.8% and 81%. As expected, the performance of a modest-sized model averaging ensemble out-performs the performance of a randomly selected single model on average.

Next, a graph is created comparing the accuracy of single models (blue dots) to the model averaging ensemble of increasing size (orange line).

On this run, the orange line of the ensembles clearly shows performance better than or comparable to the single models (comparable where the line hides the blue dots).

Line Plot Showing Single Model Accuracy (blue dots) and Accuracy of Ensembles of Increasing Size (orange line)

Now that we know how to develop a model averaging ensemble, we can extend the approach one step further by weighting the contributions of the ensemble members.

Grid Search Weighted Average Ensemble

The model averaging ensemble allows each ensemble member to contribute an equal amount to the prediction of the ensemble.

We can update the example so that instead, the contribution of each ensemble member is weighted by a coefficient that indicates the trust or expected performance of the model. Weight values are small values between 0 and 1 and are treated like a percentage, such that the weights across all ensemble members sum to one.

First, we must update the ensemble_predictions() function to make use of a vector of weights for each ensemble member.

Instead of simply summing the predictions across each ensemble member, we must calculate a weighted sum. We can implement this manually using for loops, but this is terribly inefficient; for example:
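A naive sketch of this weighted sum with nested loops, for illustration only, might be:

# weighted sum of member predictions using nested for loops (inefficient sketch)
import numpy as np

def weighted_sum_loops(yhats, weights):
    # yhats has shape (n_members, n_samples, n_classes)
    n_members, n_samples, n_classes = yhats.shape
    summed = np.zeros((n_samples, n_classes))
    for i in range(n_members):
        for j in range(n_samples):
            for k in range(n_classes):
                # accumulate each member's probability scaled by its weight
                summed[j, k] += weights[i] * yhats[i, j, k]
    return summed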

Instead, we can use efficient NumPy functions to implement the weighted sum such as einsum() or tensordot().

A full discussion of these functions is a little out of scope, so please refer to the API documentation for more information on how to use them, as they can be challenging if you are new to linear algebra and/or NumPy. We will use the tensordot() function to apply the tensor product with the required summing; the updated ensemble_predictions() function is listed below.
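A sketch of the updated function using tensordot() might be:

# weighted ensemble prediction using an efficient tensor product (sketch)
import numpy as np

def ensemble_predictions(members, weights, testX):
    # predicted probabilities from each member: (n_members, n_samples, n_classes)
    yhats = np.array([model.predict(testX) for model in members])
    # weighted sum across ensemble members
    summed = np.tensordot(yhats, weights, axes=((0), (0)))
    # argmax across classes gives the ensemble class label
    return np.argmax(summed, axis=1)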

Next, we must update evaluate_ensemble() to pass along the weights when making the prediction for the ensemble.
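For example, a sketch of the updated evaluation function might be:

# evaluate a weighted ensemble on the test set (sketch; assumes testy is one hot encoded)
from sklearn.metrics import accuracy_score
import numpy as np

def evaluate_ensemble(members, weights, testX, testy):
    # make a weighted ensemble prediction
    yhat = ensemble_predictions(members, weights, testX)
    # calculate accuracy against the true class labels
    return accuracy_score(np.argmax(testy, axis=1), yhat)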

We will use a modest-sized ensemble of five members, which appeared to perform well in the model averaging ensemble.

We can then estimate the performance of each individual model on the test dataset as a reference.

Next, we can use a weight of 1/5 or 0.2 for each of the five ensemble members and use the new functions to estimate the performance of a model averaging ensemble, a so-called equal-weight ensemble.
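For example (a sketch, assuming members holds the five fitted models):

# evaluate an equal-weight (model averaging) ensemble of the five members (sketch)
weights = [1.0/5.0 for _ in range(5)]
score = evaluate_ensemble(members, weights, testX, testy)
print('Equal Weights Score: %.3f' % score)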

We would expect this ensemble to perform as well or better than any single model.

Finally, we can develop a weighted average ensemble.

A simple, but exhaustive, approach to finding weights for the ensemble members is to grid search values. We can define a coarse grid of weight values from 0.0 to 1.0 in steps of 0.1, then generate all possible five-element vectors with those values. Generating all possible combinations is called a Cartesian product, which can be implemented in Python using the itertools.product() function from the standard library.
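For example, the candidate weight vectors might be generated as follows (a sketch).

# generate all candidate weight vectors via a Cartesian product (sketch)
from itertools import product

# coarse grid of weight values from 0.0 to 1.0 in steps of 0.1
w = [i/10.0 for i in range(11)]
# iterator over all possible five-element weight vectors
candidates = product(w, repeat=5)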

A limitation of this approach is that the generated vectors of weights will not sum to one (called the unit norm), as required. We can force each generated weight vector to have a unit norm by calculating the sum of the absolute weight values (the L1 norm) and dividing each weight by that value. The normalize() function below implements this hack.
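A sketch of this function might be:

# normalize a vector of weights to sum to one (sketch)
from numpy import asarray, linalg

def normalize(weights):
    weights = asarray(weights, dtype='float64')
    # calculate the L1 norm (sum of absolute weight values)
    result = linalg.norm(weights, 1)
    # avoid division by zero for the all-zero vector
    if result == 0.0:
        return weights
    # divide so the weights sum to one
    return weights / result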

We can now enumerate each weight vector generated by the Cartesian product, normalize it, and evaluate it by making a prediction and keeping the best to be used in our final weight averaging ensemble.
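A sketch of the grid search loop might look like the following; it reuses the grid w, normalize(), and evaluate_ensemble() defined above.

# grid search the candidate weight vectors, keeping the best (sketch)
from itertools import product

best_score, best_weights = 0.0, None
for weights in product(w, repeat=5):
    # skip the all-zero vector, which cannot be normalized
    if sum(weights) == 0.0:
        continue
    # normalize so the weights sum to one
    norm_weights = normalize(weights)
    # evaluate the candidate weighting on the test set
    score = evaluate_ensemble(members, norm_weights, testX, testy)
    # keep track of the best weighting found so far
    if score > best_score:
        best_score, best_weights = score, norm_weights
        print('> %s %.3f' % (best_weights, best_score))
print('Grid Search Weights: %s, Score: %.3f' % (best_weights, best_score))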

Once discovered, we can report the performance of our weighted average ensemble on the test dataset, which we would expect to be better than the best single model and, ideally, better than the model averaging ensemble.

The complete example is listed below.

Running the example first creates the five single models and evaluates their performance on the test dataset.

Your specific results will vary given the stochastic nature of the learning algorithm.

On this run, we can see that model 2 has the best solo performance of about 81.7% accuracy.

Next, a model averaging ensemble is created with a performance of about 80.7%, which is reasonable compared to most of the models, but not all.

Next, the grid search is performed. It is pretty slow and may take about twenty minutes on modern hardware. The process could easily be made parallel using libraries such as Joblib.

Each time a new top performing set of weights is discovered, it is reported along with its performance on the test dataset. We can see that during the run, the process discovered that using model 2 alone resulted in a good performance, until it was replaced with something better.

We can see that the best performance was achieved on this run using weights that focus only on the first and second models, with an accuracy of 81.8% on the test dataset. This out-performs both the single models and the model averaging ensemble on the same dataset.

An alternate approach to finding weights would be a random search, which has been shown to be effective more generally for model hyperparameter tuning.

Optimized Weighted Average Ensemble

An alternative to searching for weight values is to use a directed optimization process.

Optimization is a search process, but instead of sampling the space of possible solutions randomly or exhaustively, the search process uses any available information to make the next step in the search, such as toward a set of weights that has lower error.

The SciPy library offers many excellent optimization algorithms, including local and global search methods.

SciPy provides an implementation of the Differential Evolution method. This is one of the few stochastic global search algorithms that “just works” for function optimization with continuous inputs, and it works well.

The differential_evolution() SciPy function requires that a function be specified to evaluate a set of weights and return a score to be minimized. We can minimize the classification error (1 - accuracy).

As with the grid search, we must normalize the weight vector before we evaluate it. The loss_function() function below will be used as the evaluation function during the optimization process.
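A sketch of this loss function might be:

# loss function to be minimized by the optimization process (sketch)
def loss_function(weights, members, testX, testy):
    # normalize the candidate weights so they sum to one
    norm_weights = normalize(weights)
    # calculate classification error: 1 - accuracy
    return 1.0 - evaluate_ensemble(members, norm_weights, testX, testy)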

We must also specify the bounds of the optimization process. We can define the bounds as a five-dimensional hypercube (e.g. 5 weights for the 5 ensemble members) with values between 0.0 and 1.0.

Our loss function requires three parameters in addition to the weights, which we will provide as a tuple to be passed along to loss_function() each time a set of weights is evaluated.

We can now call our optimization process.

We will limit the total number of iterations of the algorithm to 1,000, and use a smaller-than-default tolerance to detect if the search process has converged.
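A sketch of the bounds, the extra arguments, and the call to the optimizer might be as follows; the specific tol value is an assumption for a smaller-than-default tolerance.

# run a global search for ensemble weights with differential evolution (sketch)
from scipy.optimize import differential_evolution

# bounds: one (0.0, 1.0) interval per ensemble member
bound_w = [(0.0, 1.0) for _ in range(5)]
# arguments passed to loss_function after the weights
search_arg = (members, testX, testy)
# limit to 1,000 iterations and use a small tolerance
result = differential_evolution(loss_function, bound_w, search_arg, maxiter=1000, tol=1e-7)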

The result of the call to differential_evolution() is an OptimizeResult object (a dictionary-like structure) that contains all kinds of information about the search.

Importantly, the 'x' key contains the optimal set of weights found during the search. We can retrieve the best set of weights, then report them and their performance on the test set when used in a weighted ensemble.
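For example (a sketch):

# retrieve the best weights found and evaluate them (sketch)
weights = normalize(result['x'])
print('Optimized Weights: %s' % weights)
# performance of the weighted ensemble using the optimized weights
score = evaluate_ensemble(members, weights, testX, testy)
print('Optimized Weights Score: %.3f' % score)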

Tying all of this together, the complete example is listed below.

Running the example first creates five single models and evaluates the performance of each on the test dataset.

Your specific results will vary given the stochastic nature of the learning algorithm.

We can see on this run that models 3 and 4 both perform best with an accuracy of about 82.2%.

Next, a model averaging ensemble with all five members is evaluated on the test set reporting an accuracy of 81.8%, which is better than some, but not all, single models.

The optimization process is relatively quick.

We can see that the process found a set of weights that pays most attention to models 3 and 4, and spreads the remaining attention out among the other models, achieving an accuracy of about 82.4%, out-performing the model averaging ensemble and individual models.

It is important to note that in these examples, we have treated the test dataset as though it were a validation dataset. This was done to keep the examples focused and technically simpler. In practice, the choice and tuning of the weights for the ensemble would be chosen by a validation dataset, and single models, model averaging ensembles, and weighted ensembles would be compared on a separate test set.

Extensions

This section lists some ideas for extending the tutorial that you may wish to explore.

  • Parallelize Grid Search. Update the grid search example to use the Joblib library to parallelize weight evaluation.
  • Implement Random Search. Update the grid search example to use a random search of weight coefficients.
  • Try a Local Search. Try a local search procedure provided by the SciPy library instead of the global search and compare performance.
  • Repeat Global Optimization. Repeat the global optimization procedure multiple times for a given set of models to see if differing sets of weights can be found across the runs.

If you explore any of these extensions, I’d love to know.

Summary

In this tutorial, you discovered how to develop a weighted average ensemble of deep learning neural network models in Python with Keras.

Specifically, you learned:

  • Model averaging ensembles are limited because they require that each ensemble member contribute equally to predictions.
  • Weighted average ensembles allow the contribution of each ensemble member to a prediction to be weighted proportionally to the trust or performance of the member on a holdout dataset.
  • How to implement a weighted average ensemble in Keras and compare results to a model averaging ensemble and standalone models.

Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.


Develop Better Deep Learning Models Today!

Better Deep Learning

Train Faster, Reduce Overfitting, and Ensembles

…with just a few lines of Python code

Discover how in my new Ebook:
Better Deep Learning

It provides self-study tutorials on topics like: weight decay, batch normalization, dropout, model stacking and much more…

Bring better deep learning to your projects!

Skip the Academics. Just Results.

Click to learn more.


28 Responses to How to Develop a Weighted Average Ensemble for Deep Learning Neural Networks

  1. Shripad Bhat December 28, 2018 at 5:54 pm #

    Great article Jason! Thank you..

    Additionally using models whose error terms are not correlated yield better results..

    One query: some suggest giving weights inversely proportional to RMSE or directly proportional to accuracy measures. Do you find weights derived from this method are similar to the weights derived from grid search? Or do they differ?

    Thanks in advance Jason.

    • Jason Brownlee December 29, 2018 at 5:50 am #

      I prefer to use a global optimization algorithm to find robust weights.

  2. Jon B. Ramar December 30, 2018 at 10:54 am #

    Hi Jason, nice write-up, thanks for sharing!

    Try Local Search with Scipy optimization library, initializing weight with the coefficients of a Linear, Ridge, or Lasso regression. It will only take a few seconds but will have similar performance as the grid search.

    • Jason Brownlee December 31, 2018 at 6:03 am #

      Great suggestion, do you think it would out-perform a global search like DE though?

      I’m skeptical as I think the error surface is highly non-linear and probably multi-modal.

  3. Jay Urbain December 31, 2018 at 2:10 am #

    Very nice example.

    Thanks,
    Jay

  4. Jerry Zhang January 1, 2019 at 1:46 am #

    Great article,thanks Jason.
I have some concerns with the weighted average ensemble. Will it worsen the overfitting problem? After all, machine learning algorithms are already prone to overfitting; now, is giving different models different weights another level of overfitting? Is it really better than the normal average weight version in out-of-sample prediction?

    • Jason Brownlee January 1, 2019 at 6:27 am #

      It is a risk, but the risk can be lessened by using a separate validation dataset or out of sample data to fit the weights.

  5. Markus January 3, 2019 at 6:02 am #

    Hi

    The article says:

    “Your results will vary given the stochastic nature of the training algorithm.”

Which I don’t really understand, as the make_blobs function call makes use of the random_state parameter, so its output should be deterministic. So I wonder where exactly the differences in the results come from?

    Thanks

    • Jason Brownlee January 3, 2019 at 6:15 am #

      The differences come from the stochastic initialization and training of the model/s.

  6. PC June 7, 2019 at 1:07 pm #

    Hi Jason,

    As always I find a solution to a problem that I have, in your article. Thank you .

    Can the DE implementation be done using only sklearn and not keras. If so can you please suggest a resource on that?.

  7. AP June 11, 2019 at 7:51 pm #

    Hi,
    While using another dataset after execution of this block

    def ensemble_predictions(members, weights, x_test):
    yhats = [model.predict(x_test) for model in members]
    yhats = array(yhats)
    # sum across ensemble members
    summed = tensordot(yhats, weights, axes=((0),(0)))
    # argmax across classes
    result = argmax(summed, axis=1)

    I get the following error:

~\Anaconda3\lib\site-packages\numpy\core\fromnumeric.py in argmax(a, axis, out)
961
962 """
--> 963 return _wrapfunc(a, 'argmax', axis=axis, out=out)
964
965

~\Anaconda3\lib\site-packages\numpy\core\fromnumeric.py in _wrapfunc(obj, method, *args, **kwds)
55 def _wrapfunc(obj, method, *args, **kwds):
56 try:
---> 57 return getattr(obj, method)(*args, **kwds)
AxisError: axis 1 is out of bounds for array of dimension 1

    Can you please suggest a solution to get rid of this.
    Thank you.

    • Jason Brownlee June 12, 2019 at 7:55 am #

      Perhaps check that your dataset was loaded correctly and the model was suitable modified to account for the number of features in your dataset.

      • AP June 12, 2019 at 11:31 am #

        Thank You, Jason. I checked and got the individual performance accuracy of 4 models.

        Can you please show how the output should look like after execution of the code below.

        >>summed = tensordot(yhats, weights, axes=((0),(0))) #summed = np.sum(yhats, axis=0)
        >>print(“summed”,summed)

        After summing up equal weights(0.25) with the predicted result yhats for 4 models I am getting something like this

        summed [ 1.5 0.5 2. 1. 1. 2. 1.25 1.25 1. 1.5 0. 1. 2.
        0. 1.75 1. 0.5 0.5 1. 2. 1.25 1.5 0. 0.5 1.75
        1. 0. 0. 1. 0. 1. 0. 2. 1. 1. 1.5 2. 1.
        1. 1. 1. 1. 1. 2. 1. 1. 1. 2. 1.25 1. 1.
        2. 1.5 0.5 1. 0. 1. 1. 0.5 1.5 0. 0. 0.
        1.25 0. 1. 1.25 0. 2. 0.5 2. 1.25 0.5 1. 2. 0.5
        2. 0.5 1. 2. 1.5 2. 0. 1.5 1.25 2. 1.5 1.25
        1.5 1.75 0. 1. 1. 2. 1.5 0. ]

        Is this correct?

        • Jason Brownlee June 12, 2019 at 2:23 pm #

Sorry, I cannot run or debug modified versions of the tutorial for you.

        • AP June 12, 2019 at 3:02 pm #

          The Shape of X_train and X_test is (384, 16) (96, 16) respectively

  8. AP June 12, 2019 at 3:06 pm #

    Sent the next text before noticing your response.

    I just wanted to know if the structure after summing of weights should look like this.

    Never mind. I shall try.

    Thank you for your prompt reply.

  9. Jie June 17, 2019 at 8:34 am #

    Hi great article, I have few concerns, no matter the stacking or ensemble method, the models should try to capture different aspects of data or predict different results before feeding to the ensemble, thus we can make huge difference on the accuracy not just based on the random seed on one algorithm.

  10. Christo June 18, 2019 at 4:14 pm #

    Hi Jason,

    Very informative article.
    How to convert the ensemble create to be used with a fit method without loop like this

    ensemble.fit ?

    Thanks.

  11. Christo June 21, 2019 at 7:47 pm #

    Hi Jason,

Is there any default value for the mutation and crossover parameters in the DifferentialEvolution method used here? Or is it ok to not use these?

  12. William July 7, 2019 at 1:21 pm #

    Hi, nice work, a bug should be changed, the line “y = to_categorical(y)”, this will change y many times if y always exists in memory, like in jupyter

    • Jason Brownlee July 8, 2019 at 8:37 am #

      Thanks, but the script is designed to be run once from the command line.
