How to Develop a Stacking Ensemble for Deep Learning Neural Networks in Python With Keras

Model averaging is an ensemble technique where multiple sub-models contribute equally to a combined prediction.

Model averaging can be improved by weighting the contributions of each sub-model to the combined prediction by the expected performance of the submodel. This can be extended further by training an entirely new model to learn how to best combine the contributions from each submodel. This approach is called stacked generalization, or stacking for short, and can result in better predictive performance than any single contributing model.

In this tutorial, you will discover how to develop a stacked generalization ensemble for deep learning neural networks.

After completing this tutorial, you will know:

  • Stacked generalization is an ensemble method where a new model learns how to best combine the predictions from multiple existing models.
  • How to develop a stacking model using neural networks as a submodel and a scikit-learn classifier as the meta-learner.
  • How to develop a stacking model where neural network sub-models are embedded in a larger stacking ensemble model for training and prediction.

Let’s get started.

Photo by David Law, some rights reserved.

Tutorial Overview

This tutorial is divided into six parts; they are:

  1. Stacked Generalization Ensemble
  2. Multi-Class Classification Problem
  3. Multilayer Perceptron Model
  4. Train and Save Sub-Models
  5. Separate Stacking Model
  6. Integrated Stacking Model

Stacked Generalization Ensemble

A model averaging ensemble combines the predictions from multiple trained models.

A limitation of this approach is that each model contributes the same amount to the ensemble prediction, regardless of how well the model performed. A variation of this approach, called a weighted average ensemble, weighs the contribution of each ensemble member by the trust or expected performance of the model on a holdout dataset. This allows well-performing models to contribute more and less-well-performing models to contribute less. The weighted average ensemble provides an improvement over the model average ensemble.

A further generalization of this approach is to replace the simple linear model (e.g. a linear regression) used to combine the predictions of the sub-models with any learning algorithm. This approach is called stacked generalization, or stacking for short.

In stacking, an algorithm takes the outputs of sub-models as input and attempts to learn how to best combine the input predictions to make a better output prediction.

It may be helpful to think of the stacking procedure as having two levels: level 0 and level 1.

  • Level 0: The level 0 data is the training dataset inputs and level 0 models learn to make predictions from this data.
  • Level 1: The level 1 data takes the output of the level 0 models as input and the single level 1 model, or meta-learner, learns to make predictions from this data.

Stacked generalization works by deducing the biases of the generalizer(s) with respect to a provided learning set. This deduction proceeds by generalizing in a second space whose inputs are (for example) the guesses of the original generalizers when taught with part of the learning set and trying to guess the rest of it, and whose output is (for example) the correct guess.

Stacked generalization, 1992.

Unlike a weighted average ensemble, a stacked generalization ensemble can use the set of predictions as a context and conditionally decide to weigh the input predictions differently, potentially resulting in better performance.

Interestingly, although stacking is described as an ensemble learning method with two or more level 0 models, it can be used in the case where there is only a single level 0 model. In this case, the level 1, or meta-learner, model learns to correct the predictions from the level 0 model.

… although it can also be used when one has only a single generalizer, as a technique to improve that single generalizer

Stacked generalization, 1992.

It is important that the meta-learner is trained on a dataset separate from the examples used to train the level 0 models to avoid overfitting.

A simple way that this can be achieved is by splitting the training dataset into a train and validation set. The level 0 models are then trained on the train set. The level 1 model is then trained using the validation set, where the raw inputs are first fed through the level 0 models to get predictions that are used as inputs to the level 1 model.

A limitation of the hold-out validation set approach to training a stacking model is that level 0 and level 1 models are not trained on the full dataset.

A more sophisticated approach to training a stacked model involves using k-fold cross-validation to develop the training dataset for the meta-learner model. Each level 0 model is trained using k-fold cross-validation (or even leave-one-out cross-validation for maximum effect); the models are then discarded, but the predictions are retained. This means for each model, there are predictions made by a version of the model that was not trained on those examples, e.g. like having holdout examples, but in this case for the entire training dataset.

The predictions are then used as inputs to train the meta-learner. Level 0 models are then trained on the entire training dataset and together with the meta-learner, the stacked model can be used to make predictions on new data.
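The k-fold procedure described above can be sketched with scikit-learn, using two simple classifiers to stand in for the level 0 models (the classifier choices and dataset settings here are illustrative assumptions, not the tutorial's own code):

```python
# Build the meta-learner's training data from out-of-fold predictions:
# cross_val_predict returns, for each example, a prediction made by a model
# that was not trained on that example.
from numpy import hstack
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_blobs(n_samples=500, centers=3, n_features=2, cluster_std=2.0, random_state=2)
level0 = [KNeighborsClassifier(), DecisionTreeClassifier(random_state=1)]
# one [500, 3] array of out-of-fold class probabilities per level 0 model
meta_X = hstack([cross_val_predict(m, X, y, cv=5, method='predict_proba') for m in level0])
# refit the level 0 models on the full dataset, then fit the level 1 model
for m in level0:
    m.fit(X, y)
meta_model = LogisticRegression().fit(meta_X, y)
print(meta_X.shape)
```

Note that the level 0 models are refit on all of the data only after their out-of-fold predictions have been collected, exactly as described above.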

In practice, it is common to use different algorithms to prepare each of the level 0 models, to provide a diverse set of predictions.

… stacking is not normally used to combine models of the same type […] it is applied to models built by different learning algorithms.

Practical Machine Learning Tools and Techniques, Second Edition, 2005.

It is also common to use a simple linear model to combine the predictions. Because use of a linear model is common, stacking is more recently referred to as “model blending” or simply “blending,” especially in machine learning competitions.

… the multi-response least squares linear regression technique should be employed as the high-level generalizer. This technique provides a method of combining level-0 models’ confidence

Issues in Stacked Generalization, 1999.

A stacked generalization ensemble can be developed for regression and classification problems. In the case of classification problems, better results have been seen when using the prediction of class probabilities as input to the meta-learner instead of class labels.

… class probabilities should be used instead of the single predicted class as input attributes for higher-level learning. The class probabilities serve as the confidence measure for the prediction made.

Issues in Stacked Generalization, 1999.

Now that we are familiar with stacked generalization, we can work through a case study of developing a stacked deep learning model.


Multi-Class Classification Problem

We will use a small multi-class classification problem as the basis to demonstrate the stacking ensemble.

The scikit-learn library provides the make_blobs() function that can be used to create a multi-class classification problem with the prescribed number of samples, input variables, classes, and variance of samples within a class.

The problem has two input variables (to represent the x and y coordinates of the points) and a standard deviation of 2.0 for points within each group. We will use the same random state (seed for the pseudorandom number generator) to ensure that we always get the same data points.

The results are the input and output elements of a dataset that we can model.

In order to get a feeling for the complexity of the problem, we can graph each point on a two-dimensional scatter plot and color each point by class value.

The complete example is listed below.
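A minimal sketch of that complete example might look like the following; the `random_state=2` seed is an assumption carried over from later sections of the tutorial:

```python
# Generate the blobs dataset and scatter plot the points colored by class.
from matplotlib import pyplot
from numpy import where
from sklearn.datasets import make_blobs

# generate a 2d multi-class classification dataset
X, y = make_blobs(n_samples=1000, centers=3, n_features=2, cluster_std=2, random_state=2)
# scatter plot the points for each class value in its own color
for class_value in range(3):
    row_ix = where(y == class_value)[0]
    pyplot.scatter(X[row_ix, 0], X[row_ix, 1], label=str(class_value))
pyplot.legend()
pyplot.show()
```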

Running the example creates a scatter plot of the entire dataset. We can see that the standard deviation of 2.0 means that the classes are not linearly separable (separable by a line), causing many ambiguous points.

This is desirable as it means that the problem is non-trivial and will allow a neural network model to find many different “good enough” candidate solutions, resulting in a high variance.

Scatter Plot of Blobs Dataset With Three Classes and Points Colored by Class Value

Multilayer Perceptron Model

Before we define a model, we need to contrive a problem that is appropriate for the stacking ensemble.

In our problem, the training dataset is relatively small. Specifically, there is a 10:1 ratio of examples in the holdout dataset to the training dataset. This mimics a situation where we may have a vast number of unlabeled examples and a small number of labeled examples with which to train a model.

We will create 1,100 data points from the blobs problem. The model will be trained on the first 100 points and the remaining 1,000 will be held back in a test dataset, unavailable to the model.

The problem is a multi-class classification problem, and we will model it using a softmax activation function on the output layer. This means that the model will predict a vector with three elements with the probability that the sample belongs to each of the three classes. Therefore, we must one hot encode the class values before we split the rows into the train and test datasets. We can do this using the Keras to_categorical() function.

Next, we can define and compile the model.

The model will expect samples with two input variables. The model then has a single hidden layer with 25 nodes and a rectified linear activation function, then an output layer with three nodes to predict the probability of each of the three classes and a softmax activation function.

Because the problem is multi-class, we will use the categorical cross entropy loss function to optimize the model and the efficient Adam flavor of stochastic gradient descent.

The model is fit for 500 training epochs and we will evaluate the model each epoch on the test set, using the test set as a validation set.

At the end of the run, we will evaluate the performance of the model on the train and test sets.

Then finally, we will plot learning curves of the model accuracy over each training epoch on both the training and validation datasets.

Tying all of this together, the complete example is listed below.
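A sketch of that complete example follows. It uses the tf.keras API; the import paths (and the `'accuracy'` history keys) are assumptions for modern Keras, as the original tutorial used standalone Keras:

```python
# MLP on the blobs problem: 100 training examples, 1,000 held-back test examples.
from matplotlib import pyplot
from sklearn.datasets import make_blobs
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.utils import to_categorical

# generate the dataset and one hot encode the class values
X, y = make_blobs(n_samples=1100, centers=3, n_features=2, cluster_std=2, random_state=2)
y = to_categorical(y)
# split: train on the first 100 points, hold back the remaining 1,000
trainX, testX = X[:100], X[100:]
trainy, testy = y[:100], y[100:]
print(trainX.shape, testX.shape)
# define and compile the model
model = Sequential()
model.add(Input(shape=(2,)))
model.add(Dense(25, activation='relu'))
model.add(Dense(3, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# fit for 500 epochs, using the test set as a validation set
history = model.fit(trainX, trainy, validation_data=(testX, testy), epochs=500, verbose=0)
# evaluate on the train and test sets
_, train_acc = model.evaluate(trainX, trainy, verbose=0)
_, test_acc = model.evaluate(testX, testy, verbose=0)
print('Train: %.3f, Test: %.3f' % (train_acc, test_acc))
# plot learning curves of accuracy over training epochs
pyplot.plot(history.history['accuracy'], label='train')
pyplot.plot(history.history['val_accuracy'], label='test')
pyplot.legend()
pyplot.show()
```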

Running the example first prints the shape of each dataset for confirmation, then the performance of the final model on the train and test datasets.

Your specific results will vary given the high-variance nature of the model.

In this case, we can see that the model achieved about 85% accuracy on the training dataset, which we know is optimistic, and about 80% on the test dataset, which we would expect to be more realistic.

A line plot is also created showing the learning curves for the model accuracy on the train and test sets over each training epoch.

We can see that training accuracy is more optimistic over most of the run as we also noted with the final scores.

Line Plot Learning Curves of Model Accuracy on Train and Test Dataset Over Each Training Epoch

We can now look at using instances of this model as part of a stacking ensemble.

Train and Save Sub-Models

To keep this example simple, we will use multiple instances of the same model as level-0 or sub-models in the stacking ensemble.

We will also use a holdout validation dataset to train the level-1 or meta-learner in the ensemble.

A more advanced example may use different types of MLP models (deeper, wider, etc.) as sub-models and train the meta-learner using k-fold cross-validation.

In this section, we will train multiple sub-models and save them to file for later use in our stacking ensembles.

The first step is to create a function that will define and fit an MLP model on the training dataset.
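The helper might look like this; the `epochs` argument is added here so the sketch is easy to experiment with, while the tutorial fixes it at 500:

```python
# Define and fit an MLP sub-model on the training dataset.
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Sequential

def fit_model(trainX, trainy, epochs=500):
    # same architecture as the single-model example
    model = Sequential()
    model.add(Input(shape=(2,)))
    model.add(Dense(25, activation='relu'))
    model.add(Dense(3, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    # fit on the training dataset only
    model.fit(trainX, trainy, epochs=epochs, verbose=0)
    return model
```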

Next, we can create a sub-directory to store the models.

Note, if the directory already exists, you may have to delete it when re-running this code.

Finally, we can create multiple instances of the MLP and save each to the “models/” subdirectory with a unique filename.

In this case, we will create five sub-models, but you can experiment with a different number of models and see how it impacts model performance.

We can tie all of these elements together; the complete example of training the sub-models and saving them to file is listed below.
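A sketch of the training-and-saving step is below. The fit_model() helper is repeated so the example stands alone, and the models/model_N.h5 filenames follow the tutorial's convention:

```python
# Train n sub-models on the small training set and save each to file.
import os
from sklearn.datasets import make_blobs
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.utils import to_categorical

def fit_model(trainX, trainy, epochs=500):
    model = Sequential()
    model.add(Input(shape=(2,)))
    model.add(Dense(25, activation='relu'))
    model.add(Dense(3, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    model.fit(trainX, trainy, epochs=epochs, verbose=0)
    return model

def train_and_save(trainX, trainy, n_members=5, directory='models', epochs=500):
    # note: delete the directory before re-running if stale models linger
    os.makedirs(directory, exist_ok=True)
    filenames = []
    for i in range(n_members):
        model = fit_model(trainX, trainy, epochs)
        filename = '%s/model_%d.h5' % (directory, i + 1)
        model.save(filename)
        print('>Saved %s' % filename)
        filenames.append(filename)
    return filenames
```

Calling train_and_save(trainX, trainy) with the 100-example one-hot-encoded training set reproduces the behavior described next: the models/ folder is created and five trained models are saved into it.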

Running the example creates the “models/” subfolder and saves five trained models with unique filenames.

Next, we can look at training a meta-learner to make best use of the predictions from these submodels.

Separate Stacking Model

We can now train a meta-learner that will best combine the predictions from the sub-models and ideally perform better than any single sub-model.

The first step is to load the saved models.

We can use the load_model() Keras function and create a Python list of loaded models.
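A sketch of such a helper, assuming the models/model_N.h5 filenames used when the sub-models were saved:

```python
# Load n saved sub-models into a list.
from tensorflow.keras.models import load_model

def load_all_models(n_models, directory='models'):
    all_models = []
    for i in range(n_models):
        # filename convention used when the sub-models were saved
        filename = '%s/model_%d.h5' % (directory, i + 1)
        all_models.append(load_model(filename))
        print('>loaded %s' % filename)
    return all_models
```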

We can call this function to load our five saved models from the “models/” sub-directory.

It would be useful to know how well the single models perform on the test dataset as we would expect a stacking model to perform better.

We can easily evaluate each single model on the test dataset and establish a baseline of performance.

Next, we can train our meta-learner. This requires two steps:

  • Prepare a training dataset for the meta-learner.
  • Use the prepared training dataset to fit a meta-learner model.

We will prepare a training dataset for the meta-learner by providing examples from the test set to each of the submodels and collecting the predictions. In this case, each model will output three predictions for each example for the probabilities that a given example belongs to each of the three classes. Therefore, the 1,000 examples in the test set will result in five arrays with the shape [1000, 3].

We can combine these arrays into a three-dimensional array with the shape [1000, 5, 3] by stacking the predictions from each model along a new axis (note that the dstack() NumPy function, which stacks along the last axis, would instead produce the transposed shape [1000, 3, 5]; either ordering flattens to the same 15 features).

As input for a new model, we will require 1,000 examples with some number of features. Given that we have five models and each model makes three predictions per example, then we would have 15 (3 x 5) features for each example provided to the submodels. We can transform the [1000, 5, 3] shaped predictions from the sub-models into a [1000, 15] shaped array to be used to train a meta-learner using the reshape() NumPy function and flattening the final two dimensions. The stacked_dataset() function implements this step.
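The helper can be sketched as below. It uses NumPy's stack() along a new axis to produce the [1000, 5, 3] ordering described in the text:

```python
# Turn each member's [n, 3] probability predictions into one [n, 15] array.
from numpy import stack

def stacked_dataset(members, inputX):
    # one [n, 3] prediction array per member
    yhats = [model.predict(inputX, verbose=0) for model in members]
    # stack into [n, members, 3], then flatten to [n, members * 3]
    stackX = stack(yhats, axis=1)
    return stackX.reshape((stackX.shape[0], stackX.shape[1] * stackX.shape[2]))
```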

Once prepared, we can use this input dataset along with the output, or y part, of the test set to train a new meta-learner.

In this case, we will train a simple logistic regression algorithm from the scikit-learn library.

Logistic regression natively supports only binary classification, although the implementation in scikit-learn's LogisticRegression class supports multi-class classification (more than two classes), historically via a one-vs-rest scheme (newer versions default to a multinomial formulation). The function fit_stacked_model() below will prepare the training dataset for the meta-learner by calling the stacked_dataset() function, then fit and return a logistic regression model.
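A sketch of fit_stacked_model() is below; stacked_dataset() is repeated so the example stands alone, and the plain LogisticRegression() defaults are used:

```python
# Fit a logistic regression meta-learner on the members' stacked predictions.
from numpy import stack
from sklearn.linear_model import LogisticRegression

def stacked_dataset(members, inputX):
    yhats = [model.predict(inputX, verbose=0) for model in members]
    stackX = stack(yhats, axis=1)
    return stackX.reshape((stackX.shape[0], -1))

def fit_stacked_model(members, inputX, inputy):
    # create a dataset of member predictions, then fit the meta-learner on it
    stackedX = stacked_dataset(members, inputX)
    model = LogisticRegression()
    model.fit(stackedX, inputy)
    return model
```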

We can call this function and pass in the list of loaded models and the training dataset.

Once fit, we can use the stacked model, including the members and the meta-learner, to make predictions on new data.

This can be achieved by first using the sub-models to make an input dataset for the meta-learner, e.g. by calling the stacked_dataset() function, then making a prediction with the meta-learner. The stacked_prediction() function below implements this.
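stacked_prediction() can be sketched as below, with stacked_dataset() repeated for self-containment:

```python
# Predict with the whole stack: members first, then the meta-learner.
from numpy import stack

def stacked_dataset(members, inputX):
    yhats = [model.predict(inputX, verbose=0) for model in members]
    stackX = stack(yhats, axis=1)
    return stackX.reshape((stackX.shape[0], -1))

def stacked_prediction(members, meta_model, inputX):
    # build the meta-learner's input from member predictions, then predict
    stackedX = stacked_dataset(members, inputX)
    return meta_model.predict(stackedX)
```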

We can use this function to make a prediction on new data; in this case, we can demonstrate it by making predictions on the test set.

Tying all of these elements together, the complete example of fitting a linear meta-learner for the stacking ensemble of MLP sub-models is listed below.

Running the example first loads the sub-models into a list and evaluates the performance of each.

We can see that the best performing model is the final model with an accuracy of about 81.3%.

Your specific results may vary given the stochastic nature of the neural network learning algorithm.

Next, a logistic regression meta-learner is trained on the predicted probabilities from each sub-model on the test set, then the entire stacking model is evaluated on the test set.

We can see that in this case, the meta-learner out-performed each of the sub-models on the test set, achieving an accuracy of about 82.4%.

Integrated Stacking Model

When using neural networks as sub-models, it may be desirable to use a neural network as a meta-learner.

Specifically, the sub-networks can be embedded in a larger multi-headed neural network that then learns how to best combine the predictions from each input sub-model. It allows the stacking ensemble to be treated as a single large model.

The benefit of this approach is that the outputs of the submodels are provided directly to the meta-learner. Further, it is also possible to update the weights of the submodels in conjunction with the meta-learner model, if this is desirable.

This can be achieved using the Keras functional interface for developing models.

After the models are loaded as a list, a larger stacking ensemble model can be defined where each of the loaded models is used as a separate input-head to the model. This requires that all of the layers in each of the loaded models be marked as not trainable so the weights cannot be updated when the new larger model is being trained. Keras also requires that each layer has a unique name, therefore the names of each layer in each of the loaded models will have to be updated to indicate to which ensemble member they belong.

Once the sub-models have been prepared, we can define the stacking ensemble model.

The input layer for each of the sub-models will be used as a separate input head to this new model. This means that k copies of any input data will have to be provided to the model, where k is the number of input models, in this case, 5.

The outputs of each of the models can then be merged. In this case, we will use a simple concatenation merge, where a single 15-element vector will be created from the three class-probabilities predicted by each of the 5 models.

We will then define a hidden layer to interpret this “input” to the meta-learner and an output layer that will make its own probabilistic prediction. The define_stacked_model() function below implements this and will return a stacked generalization neural network model given a list of trained sub-models.
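A sketch of define_stacked_model() is below. The layer._name rename matches the tf.keras 2.x idiom for de-duplicating loaded layer names (newer Keras releases may require a different approach), and the 10-node hidden layer is an assumption for this sketch:

```python
# Merge trained sub-models into one multi-headed stacking network.
from tensorflow.keras.layers import Dense, concatenate
from tensorflow.keras.models import Model

def define_stacked_model(members):
    # freeze the sub-model layers and make their names unique
    for i, model in enumerate(members):
        for layer in model.layers:
            layer.trainable = False
            layer._name = 'ensemble_%d_%s' % (i + 1, layer.name)
    # each sub-model's input becomes a separate head of the ensemble
    ensemble_visible = [model.input for model in members]
    # concatenate the members' class-probability outputs into one vector
    merge = concatenate([model.output for model in members])
    hidden = Dense(10, activation='relu')(merge)
    output = Dense(3, activation='softmax')(hidden)
    model = Model(inputs=ensemble_visible, outputs=output)
    # plot_model(model, show_shapes=True, to_file='model_graph.png')  # optional
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model
```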

A plot of the network graph is created when this function is called to give an idea of how the ensemble model fits together.

Creating the plot requires that pygraphviz is installed.

If this is a challenge on your workstation, you can comment out the call to the plot_model() function.

Visualization of Stacked Generalization Ensemble of Neural Network Models

Once the model is defined, it can be fit. We can fit it directly on the holdout test dataset.

Because the sub-models are not trainable, their weights will not be updated during training; only the weights of the new hidden and output layers will be updated. The fit_stacked_model() function below will fit the stacking neural network model for 300 epochs.
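The fitting helper can be sketched as follows; it repeats the single input array once per sub-model head and one hot encodes the labels (the epochs argument defaults to the tutorial's 300):

```python
# Fit the integrated stacking model; only the meta-learner layers are trainable.
from tensorflow.keras.utils import to_categorical

def fit_stacked_model(model, inputX, inputy, epochs=300):
    # provide one copy of the input per sub-model input head
    X = [inputX for _ in range(len(model.input))]
    # one hot encode the output labels
    inputy_enc = to_categorical(inputy)
    model.fit(X, inputy_enc, epochs=epochs, verbose=0)
```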

We can call this function providing the defined stacking model and the test dataset.

Once fit, we can use the new stacked model to make a prediction on new data.

This is as simple as calling the predict() function on the model. One minor change is that we require k copies of the input data in a list to be provided to the model for each of the k sub-models. The predict_stacked_model() function below simplifies this process of making a prediction with the stacking model.
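predict_stacked_model() simply repeats the input once per head before calling predict():

```python
# Make a prediction with the integrated stacking model.
def predict_stacked_model(model, inputX):
    # one copy of the input per sub-model input head
    X = [inputX for _ in range(len(model.input))]
    return model.predict(X, verbose=0)
```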

We can call this function to make a prediction for the test dataset and report the accuracy.

We would expect the performance of the neural network learner to be better than any individual submodel and perhaps competitive with the linear meta-learner used in the previous section.

Tying all of these elements together, the complete example is listed below.

Running the example first loads the five sub-models.

A larger stacking ensemble neural network is defined and fit on the test dataset, then the new model is used to make a prediction on the test dataset. We can see that, in this case, the model achieved an accuracy of about 83.3%, out-performing the linear model from the previous section.


Extensions

This section lists some ideas for extending the tutorial that you may wish to explore.

  • Alternate Meta-Learner. Update the example to use an alternate meta-learner classifier model to the logistic regression model.
  • Single Level 0 Model. Update the example to use a single level-0 model and compare the results.
  • Vary Level 0 Models. Develop a study that demonstrates the relationship between test classification accuracy and the number of sub-models used in the stacked ensemble.
  • Cross-Validation Stacking Ensemble. Update the example to use k-fold cross-validation to prepare the training dataset for the meta-learner model.
  • Use Raw Input in Meta-Learner. Update the example so that the meta-learner algorithms take the raw input data for the sample as well as the output from the sub-models and compare performance.

If you explore any of these extensions, I’d love to know.

Further Reading

This section provides more resources on the topic if you are looking to go deeper.







Summary

In this tutorial, you discovered how to develop a stacked generalization ensemble for deep learning neural networks.

Specifically, you learned:

  • Stacked generalization is an ensemble method where a new model learns how to best combine the predictions from multiple existing models.
  • How to develop a stacking model using neural networks as a submodel and a scikit-learn classifier as the meta-learner.
  • How to develop a stacking model where neural network sub-models are embedded in a larger stacking ensemble model for training and prediction.

Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.


52 Responses to How to Develop a Stacking Ensemble for Deep Learning Neural Networks in Python With Keras

  1. Sivarama Krishnan Rajaraman January 1, 2019 at 8:01 am #

    Hi Jason,
    Awesome post. I tried to perform a stacking ensemble for a binary classification task. However, I have an issue with the stacked_dataset definition. It gives a value error saying “bad_shape (200, 2)”. I have 200 test samples with 2 classes. Kindly suggest the modification needed for the stacked_dataset definition. Many thanks.

    • Jason Brownlee January 1, 2019 at 11:13 am #

      You may have to adapt the example to your specific models and dataset.

  2. Andre January 2, 2019 at 12:42 am #

    Thank you

  4. Simon Li January 2, 2019 at 5:04 pm #

    Since you use multiple sub-models of the same type, is it possible to use different sub-models, such as models with different layers or of different types (VGG, Inception, etc.)?
    Another question is, if we shuffle the data sets between the sub models, how possible the stacking ensemble model over fitting?

    • Jason Brownlee January 3, 2019 at 6:09 am #

      Yes, you can use models of varying depth.

      Fitting each model on separate data would no longer be a stacking model, but instead a bagging type model.

  5. Dcart January 4, 2019 at 10:43 pm #

    Thank you Jason. Although I need to digest all you have written as I am a newbie in this field, I appreciate your effort in sharing your knowledge.

  6. sanjie January 6, 2019 at 7:28 pm #

    Thank you Jason.
    I also appreciate your effort in sharing your knowledge.

  7. Dave January 7, 2019 at 3:52 am #

    Thanks for this article, Jason. I tried the extension “Use Raw Input in Meta-Learner.” My understanding is that you take the output of the 0-level models, and then join that to the data that were used to make the output (predictions) of the 0-level models. So the input to the meta-model would be (n,X+Y), where n is the number of observations in the validation set, X is the number of features in the raw data, and Y is the number of 0-level models (for a binary classification problem). My intuition would be that this is over fitting, and it looks like it may be, based on the meta-model performance. Wouldn’t the meta-model then be getting “information” from that raw data twice? Once indirectly through the 0-level models, and then again through the raw data? Or is this legit because the “raw data” being provided to the meta-model was only used to validate, not train, the 0-level models?

    Thanks again for your article (and all your other ones!) I find them very useful.

    • Jason Brownlee January 7, 2019 at 6:39 am #

      The input would be the input to the level 0 models (X) and the output from each level 0 model (yhats).

      The raw input (X) provides additional context for the level 0 outputs (yhats)

      It may or may not overfit. Often it results in better performance.

  8. Andrew January 7, 2019 at 8:07 am #

    Hi Jason,

    Thank you for the interesting post. I have a quick question on the accuracy scores between base-learners and meta-learn.

    For base-learners, you used (trainX, tainY) to train them, and evaluate acc on (testX, testY).
    For meta-learners, you used (testX, testY) to train them, and evaluate acc on (testX, testY) again. Wouldn’t the acc for meta-learners be inflated? Would it be better to have a ‘true hold-out set’ that none of the base- and meta-learner have seen?


  9. Abhijit January 10, 2019 at 5:00 pm #

    thank you sir , can you suggest some machine learning algorithms using MATLAB, particularly regarding Reproducing Hilbert kernel space and implementation of kernel trick .

  10. Oliver February 20, 2019 at 5:58 pm #

    A great post indeed. What I don’t fully understand is, why the test data is used to fit the stacked model, and then the stacked model is evaluated against the test data? Isn’t it overfitting the ensemble model?

    Another error I happened to have: when loading models, some of them happened to have the same layer names. The renaming function however doesn’t work with the input layers, so the code of ‘Model(inputs=ensemble_inputs…)’ throws an error saying there are duplicates in input layer names.

    • Jason Brownlee February 21, 2019 at 7:52 am #

      Yes, that is the clever part. Each round uses a different holdout set to find the coefficients for combining the models – together all data was unseen in the estimation of the coefficients. It should not overfit, ideally.

      Perhaps double check that your version of Keras is up to date.

      • Oliver February 21, 2019 at 4:51 pm #

        Just to clarify what do you mean by ‘each round’, as in the last example where the NN is used as a meta-learner, there is no cross-validation deployed. I understand the sub-models are stacked together and the training is done at one go.

        And I am using the latest Keras 2.2.4 version.

        • Jason Brownlee February 22, 2019 at 6:13 am #

          In that case, we load the pre-trained models and fit the data using a “new” dataset not used to train the submodels, e.g. in this case the test dataset.

          Ideally, we would not use the test dataset and instead use a subset of the train dataset not seen by the submodels.

          • Oliver February 22, 2019 at 12:21 pm #

            Agree. So in this example above, the stacked model is trained on test dataset and then evaluated against test dataset isn’t ideal and potentially overfits the test dataset?

          • Jason Brownlee February 22, 2019 at 2:47 pm #

            Yes, correct.

  11. Oliver February 22, 2019 at 4:59 pm #

    Then could you please clarify it in the post as it’s misleading to readers. The cross-validation shouldn’t be optional in this case but a must in order to generate inputs to the meta-learner. If there are N rows in the training set and L sub-models, the cross-validation should first produce a N * L matrix, together with the y_train to train the meta-learner.

    That means a cross-validation needs to be implemented in the ‘fit_stacked_model’ function, and input trainX and trainY instead.

    • Jason Brownlee February 23, 2019 at 6:27 am #

      Thanks for the suggestion.

      • Oliver February 23, 2019 at 9:54 pm #

        My pleasure. Thanks for the great post. Keep it up.

  12. Wolfgang Reuter February 26, 2019 at 2:08 am #

    Hi Jason,

    thanks for the great post! I would like to use it to stack my own neural networks. They are also Sequential models – but I get the following issue: Two of the input tensors have the same name. And as tensors can have no name change, the code you provided to change the layer names doesn’t alter them. Hence I get the error:

    ValueError: The name “dense_7_input” is used 2 times in the model. All layer names should be unique.

    I have tried quite a few things – but no success. However, I don’t really care about the actual models, I could retrain them and alter the name of the input while setting them up. But I have found no way to give the input tensor created within the input layer of a Sequential model a custom name.

    What I can’t do though is to create all five models (yes, I also have five models to concatenate, by chance) in one single loop, as training one of them takes up to several hours. Have you got any suggestions for how I could find a workaround? I would be very grateful.

    best regards and thanks a lot again and in advance,


    • Jason Brownlee February 26, 2019 at 6:28 am #

      You can specify the name of layers when the layer is created via the name attribute.

      You can also alter the name attribute of a layer after it is loaded, then save it again.

      • Wolfgang Reuter February 27, 2019 at 3:59 am #

        Hi Jason,

        thanks a lot for your help. I couldn’t change the name attribute – but I sorted it differently: I save the weights of each model, set up a new model from scratch (explicitly naming the first layer, i.e. not renaming layers in a loop) and then load the weights into the new model. It works, and I am now testing it. Thanks again and best regards, Wolfgang

        • Jason Brownlee February 27, 2019 at 7:35 am #

          Nice workaround, thanks for sharing.

          • Wolfgang Reuter February 28, 2019 at 1:52 am #

            Hi Jason

            thanks for your comment – and sorry to bother you again. I have another question and I wonder whether you have any “high level” tip as to where I could look up what to do.

            For my project (in computer vision) I have five feed forward nets with features extracted of images – and one (later maybe two or three) CNNs.

            I could concatenate the feed forward nets, basically using the code from your blog, and the overall accuracy increases by about 1.5 percent compared to my previous method of combining the output of each feed forward net. Previously I averaged the softmax probabilities (see Geoffrey Hinton's course on Coursera, lecture 10, for more details), finding the best weight for each net with a brute force approach on the saved softmax output of the validation set, so that the model is not optimized on the test set.

            However, if I average the softmax output probabilities of the stacked model with my CNN's softmax output probabilities, the overall accuracy drops by about two percent with respect to my previous method (even though the stacked feed forward accuracy is better than before). Apart from wondering why this is, I thought maybe I could add the CNN to the stacked model and see whether that improves the overall accuracy.

            The problem is: For the CNN I have to (for memory reasons) use a data generator – and I don’t know how I could set up a stacked model with that.

            Do you have any clues whether stacking a CNN and some feed forward nets, the first using a data generator and the others not, is possible at all? And if so – do you have a reference page, a blog post or something like it where I could look it up?

            My second question is related to that: could I use the saved output probabilities of each net to set up a stacked neural network (as this would get around the data generator problem)?

            Thanks again for your kind help,

            best regards,


          • Jason Brownlee February 28, 2019 at 6:44 am #

            Hmmm, if each model takes the same input, then each model can use the same data generator.

            Otherwise, you may have to write custom code to generate samples and feed them through your models sample by sample – I guess. Some prototyping might be required.

            Yes, you can use saved out-of-sample predictions to fit a stacked model or a weighting for predictions from multiple models.
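A minimal sketch of this idea, using random stand-in data: the saved out-of-sample softmax outputs of five members (each of shape [n_samples, 3]) are laid side by side to form the meta-learner's input features, and a scikit-learn classifier is fit on them. The shapes and the choice of LogisticRegression here are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_samples, n_members, n_classes = 200, 5, 3
saved = rng.random((n_members, n_samples, n_classes))  # stand-in for saved yhats
y = rng.integers(0, n_classes, n_samples)              # true holdout labels

# lay the members' predictions side by side: [n_samples, n_members * n_classes]
stackX = np.hstack(list(saved))
meta = LogisticRegression(max_iter=1000)
meta.fit(stackX, y)
```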

  13. Wonbin February 28, 2019 at 5:06 am #

    Thanks for this post!

    I still got the 'unique layer name' error message even though I set the new layer names on each level 0 model as you did above. I barely changed your code at all.

    The error message was like this:
    ValueError: The name "dense_1_input" is used 5 times in the model. All layer names should be unique.

    But when I tried to find a layer with that name, none of the layers had it:
    >>> ValueError: No such layer: dense_1_input

    • Wonbin February 28, 2019 at 5:26 am #

      Now I got it! We should add one more line.

      model.input_names = 'ensemble_' + str(i+1) + '_' + model.input_names

      • Wonbin February 28, 2019 at 5:44 am #

        OMG, that didn't work…

        • Jason Brownlee February 28, 2019 at 6:46 am #

          Hang in there, I’m sure it is a small bug that once fixed will fix your whole model.

    • Jason Brownlee February 28, 2019 at 6:46 am #

      Perhaps summarize (e.g. model.summary()) the model to confirm the layer names are indeed different prior to stacking?

      • Wonbin February 28, 2019 at 11:24 pm #

        Already checked that; the names were all changed, but I still got the same error.

        I want to ask you another question: if I do 10-fold cross-validation, then I'd get 10 (level 0) models. To make predictions on the test set (unseen data) at the last step, which of the 10 (level 0) models should I feed the testX to?

        • Wonbin March 1, 2019 at 12:08 am #

          If I used 5 (level 0) models based on a holdout dataset, then in the 10-fold CV case the total number of models would be 5*10, right? And after feeding the test set to all 10 sets of models I'd get 10 times as many yhat values (but the actual y values can't be duplicated 10 times).

        • Jason Brownlee March 1, 2019 at 6:21 am #

          I’m not sure I follow, why would you feed testX to model trained on 9/10s of your data?

          • Wonbin March 1, 2019 at 1:52 pm #

            Because the level 1 model was trained on the data that is fed through the level 0 models? So if I want to make predictions on the test set, I have to feed the test set to the level 0 models and then to the level 1 model sequentially. Is this correct?

          • Jason Brownlee March 1, 2019 at 2:23 pm #

            Yes, but there is no k-fold split involved.

  14. Anna V. March 1, 2019 at 7:59 pm #

    Hi Jason! Thank you for sharing your approach. I’m trying to find the best way to solve my problem and will really appreciate your suggestions. I have 3 different NNs doing the same predictions but using different data sets: text, images, and metadata. Would your way of stacking them together with another NN work? Thanks!

    • Jason Brownlee March 2, 2019 at 9:31 am #

      Perhaps try it and see how results compare to a simple average of the three models?

      • Anna V. March 2, 2019 at 9:30 pm #

        What should I pass to the final model as testX in fit_stacked_model(stacked_model, testX, testy) in my case?

  15. kay March 4, 2019 at 8:07 am #

    Hi, Jason. This is very informative content.
    I have a question about the shape of stackX. Isn't the shape of stackX [1000, 3, 5] rather than the [1000, 5, 3] you mentioned in the stacked_dataset function? I understand it as [1000, 3] * 5 (one yhat per each of the 5 models).

    • Jason Brownlee March 4, 2019 at 2:13 pm #

      Yes, this is covered in the section “Separate Stacking Model”.

      You're right that numpy's dstack gives [1000, 3, 5]: the 3-class predictions from each of the 5 models are stacked along the last axis. The array is then flattened to [1000, 15] before fitting the meta-learner, so the axis order doesn't affect the result.
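This can be checked directly with random stand-in predictions, following the dstack-then-reshape pattern used in the tutorial's stacked_dataset function:

```python
import numpy as np

n_samples, n_classes, n_members = 1000, 3, 5

# accumulate one [n_samples, n_classes] softmax output per member
stackX = None
for _ in range(n_members):
    yhat = np.random.rand(n_samples, n_classes)
    stackX = yhat if stackX is None else np.dstack((stackX, yhat))

# dstack stacks along the last axis, giving shape (1000, 3, 5);
# flatten to [n_samples, n_classes * n_members] for the meta-learner
flatX = stackX.reshape((stackX.shape[0], stackX.shape[1] * stackX.shape[2]))
```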

  16. Max March 7, 2019 at 8:28 am #

    Thanks for the tuts!

    Logistic regression gives a ValueError: bad input shape. According to the docs, fit() expects y to be array-like with shape (n_samples,), but testy here has shape (1000, 3). What am I missing?
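A likely cause, sketched here with random stand-in data: the tutorial's testy is one-hot encoded for training the Keras sub-models, while scikit-learn's LogisticRegression expects 1-D integer class labels. Collapsing the one-hot rows back with argmax satisfies the expected shape.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# stand-in data matching the shapes in the question
stackX = np.random.rand(1000, 15)                 # stacked member predictions
testy = np.eye(3)[np.random.randint(0, 3, 1000)]  # one-hot targets, shape (1000, 3)

# undo the one-hot encoding: each row becomes its integer class label
labels = np.argmax(testy, axis=1)                 # shape (1000,)
model = LogisticRegression(max_iter=1000)
model.fit(stackX, labels)
```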
