Archive | Deep Learning Performance

[Figure: Line Plot of Learning Curves of Model Accuracy on Train and Test Datasets over Each Training Epoch]

How to Develop a Weighted Average Ensemble for Deep Learning Neural Networks

A model averaging ensemble combines the predictions from each model equally and often results in better performance on average than any single model. Sometimes there are very good models that we wish to have contribute more to an ensemble prediction, and perhaps less skillful models that may be useful but should contribute less to an […]
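As a toy illustration of the idea (not the article's code), a weighted average of class probabilities from three hypothetical models can be sketched in NumPy; the probabilities and weights below are made up for the example:

```python
import numpy as np

# Made-up predicted class probabilities from three models for four samples
# (shape: models x samples x classes).
preds = np.array([
    [[0.9, 0.1], [0.2, 0.8], [0.6, 0.4], [0.3, 0.7]],  # model 1
    [[0.8, 0.2], [0.3, 0.7], [0.4, 0.6], [0.4, 0.6]],  # model 2
    [[0.6, 0.4], [0.4, 0.6], [0.5, 0.5], [0.6, 0.4]],  # model 3
])

# Weights reflecting each model's estimated skill; they sum to 1.
weights = np.array([0.5, 0.3, 0.2])

# Weighted average over the model axis, then argmax for the class label.
avg = np.tensordot(weights, preds, axes=1)
labels = np.argmax(avg, axis=1)  # -> array([0, 1, 0, 1])
```

With equal weights this reduces to plain model averaging; the weights are the only extra moving part.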

Continue Reading
[Figure: Line Plot Showing Single Model Accuracy (blue dots) vs Accuracy of Ensembles of Varying Size With a Horizontal Voting Ensemble]

How to Develop a Horizontal Voting Deep Learning Ensemble to Reduce Variance

Predictive modeling problems where the training dataset is small relative to the number of unlabeled examples are challenging. Neural networks can perform well on these types of problems, although they can suffer from high variance in model performance as measured on a training or hold-out validation dataset. This makes choosing which model to use as […]
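Roughly, horizontal voting keeps the models saved over a contiguous block of final training epochs and has them vote on each prediction, rather than trusting whichever single epoch happened to score best. A minimal NumPy sketch with made-up per-epoch labels:

```python
import numpy as np

# Made-up class labels predicted by the models saved at each of the last
# four training epochs, for six test samples (rows = saved epochs).
epoch_preds = np.array([
    [0, 1, 2, 1, 0, 2],
    [0, 1, 2, 0, 0, 2],
    [1, 1, 0, 1, 0, 2],
    [0, 2, 2, 1, 1, 2],
])

def horizontal_vote(preds):
    """For each sample, return the most frequent label across the models
    saved over the final epochs (ties go to the lowest class index)."""
    n_classes = preds.max() + 1
    # Count votes per class for every sample, then take the most common.
    counts = np.apply_along_axis(np.bincount, 0, preds, minlength=n_classes)
    return counts.argmax(axis=0)

ensemble = horizontal_vote(epoch_preds)  # -> array([0, 1, 2, 1, 0, 2])
```

The vote smooths over epoch-to-epoch fluctuations, which is exactly the variance the article is concerned with.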

Continue Reading
[Figure: Line Plot Showing Single Model Accuracy (blue dots) vs Accuracy of Ensembles of Varying Size for Bagging]

How to Create a Bagging Ensemble of Deep Learning Models in Keras

Ensemble learning refers to methods that combine the predictions from multiple models. It is important in ensemble learning that the models comprising the ensemble are good and make different prediction errors. Predictions that are good in different ways can result in a prediction that is both more stable and often better than the predictions of any […]
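The core of bagging is the bootstrap sample: each ensemble member is trained on the same number of rows drawn from the training set with replacement, so each member sees a slightly different dataset and makes different errors. A minimal sketch with toy data (the shapes and seed are arbitrary stand-ins):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy training set: 20 samples, 3 features (stand-ins for real data).
X = rng.normal(size=(20, 3))
y = rng.integers(0, 2, size=20)

def bootstrap_sample(X, y, rng):
    """Draw a bootstrap sample: the same number of rows, sampled with
    replacement, duplicating some rows and omitting others."""
    idx = rng.integers(0, len(X), size=len(X))
    return X[idx], y[idx]

# Each of the five ensemble members would be fit on its own sample;
# their predictions are then averaged or voted at inference time.
samples = [bootstrap_sample(X, y, rng) for _ in range(5)]
```

On average a bootstrap sample contains about 63% of the unique rows, which is what drives the diversity between members.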

Continue Reading
[Figure: Line Plot of Learning Curves of Model Accuracy on Train and Test Datasets over Each Training Epoch]

How to Develop an Ensemble of Deep Learning Models in Keras

Deep learning neural network models are highly flexible nonlinear algorithms capable of learning a near-infinite number of mapping functions. A frustration with this flexibility is the high variance of a final model. The same neural network model trained on the same dataset may find one of many different possible “good enough” solutions each time […]
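The variance-reduction intuition can be illustrated with a toy simulation: if each training run's score is modeled as a fixed true skill plus Gaussian noise (an assumption made purely for this sketch, not the article's experiment), the average over an ensemble of runs varies far less than any single run:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for run-to-run variation: each trained model's test accuracy is
# modeled as a true skill of 0.85 plus Gaussian noise (pure illustration).
runs = rng.normal(0.85, 0.05, size=(1000, 10))  # 1000 repeats, 10 models each

single_std = runs[:, 0].std()           # spread of one model's score
ensemble_std = runs.mean(axis=1).std()  # spread of a 10-model average
```

For independent runs the spread of the average shrinks roughly as 1/sqrt(k) with ensemble size k, which is why even a handful of models helps.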

Continue Reading
[Figure: Ensemble Methods to Reduce Variance and Improve Performance of Deep Learning Neural Networks]

Ensemble Learning Methods for Deep Learning Neural Networks

How to Improve Performance by Combining Predictions From Multiple Models. Deep learning neural networks are nonlinear methods. They offer increased flexibility and can scale in proportion to the amount of training data available. A downside of this flexibility is that they learn via a stochastic training algorithm, which means that they are sensitive to the […]

Continue Reading