Archive | Better Deep Learning

Line Plot Showing Single Model Accuracy (blue dots) vs Accuracy of Ensembles of Varying Size With a Horizontal Voting Ensemble

How to Reduce Variance in the Final Deep Learning Model With a Horizontal Voting Ensemble

Predictive modeling problems where the training dataset is small relative to the number of unlabeled examples are challenging. Neural networks can perform well on these types of problems, although they can suffer from high variance in model performance as measured on a training or hold-out validation dataset. This makes choosing which model to use as […]
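
As a rough illustration of the idea, a horizontal voting ensemble keeps the model snapshots from the final contiguous training epochs and combines their predictions. The sketch below is a minimal version of that procedure, assuming a synthetic make_blobs classification problem; the dataset, network size, and epoch counts are illustrative, not the tutorial's exact setup.

```python
# A minimal sketch of a horizontal voting ensemble (assumed synthetic data):
# snapshot the model over the last contiguous training epochs, then combine
# the snapshots by summing their predicted probabilities.
import numpy as np
from sklearn.datasets import make_blobs
from tensorflow.keras.models import Sequential, load_model
from tensorflow.keras.layers import Dense

X, y = make_blobs(n_samples=1000, centers=3, n_features=2, random_state=2)
X_train, y_train, X_test, y_test = X[:500], y[:500], X[500:], y[500:]

model = Sequential([
    Dense(25, activation='relu', input_shape=(2,)),
    Dense(3, activation='softmax'),
])
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam')

# train one epoch at a time, saving a snapshot over the last 10 epochs
n_epochs, n_save = 50, 10
for epoch in range(n_epochs):
    model.fit(X_train, y_train, epochs=1, verbose=0)
    if epoch >= n_epochs - n_save:
        model.save('model_%d.h5' % epoch)

# horizontal voting: load the snapshots and sum their predicted probabilities
members = [load_model('model_%d.h5' % e) for e in range(n_epochs - n_save, n_epochs)]
summed = np.sum([m.predict(X_test, verbose=0) for m in members], axis=0)
yhat = np.argmax(summed, axis=1)  # ensemble class predictions
```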

Line Plot Showing Single Model Accuracy (blue dots) vs Accuracy of Ensembles of Varying Size for Bagging

How to Create a Random-Split, Cross-Validation, and Bagging Ensemble for Deep Learning in Keras

Ensemble learning refers to methods that combine the predictions from multiple models. It is important in ensemble learning that the models comprising the ensemble are good and make different prediction errors. Predictions that are good in different ways can result in a prediction that is both more stable and often better than the predictions of any […]
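
To make the bagging case concrete, the sketch below fits each ensemble member on a bootstrap sample drawn with replacement from the training set and combines the members by summing their predicted probabilities. The synthetic dataset and model configuration are assumptions for illustration only.

```python
# A minimal sketch of a bagging ensemble of neural networks (assumed
# synthetic data): each member is fit on a bootstrap sample of the training
# set, and member predictions are combined by summing probabilities.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.utils import resample
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

X, y = make_blobs(n_samples=1000, centers=3, n_features=2, random_state=2)
X_train, y_train, X_test, y_test = X[:500], y[:500], X[500:], y[500:]

def fit_member(X, y):
    # draw a bootstrap sample (with replacement) and fit one member on it
    Xs, ys = resample(X, y, replace=True, n_samples=len(X))
    model = Sequential([
        Dense(25, activation='relu', input_shape=(2,)),
        Dense(3, activation='softmax'),
    ])
    model.compile(loss='sparse_categorical_crossentropy', optimizer='adam')
    model.fit(Xs, ys, epochs=50, verbose=0)
    return model

members = [fit_member(X_train, y_train) for _ in range(10)]
# sum the members' predicted probabilities and take the argmax
summed = np.sum([m.predict(X_test, verbose=0) for m in members], axis=0)
yhat = np.argmax(summed, axis=1)
```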

Line Plot Learning Curves of Model Accuracy on Train and Test Dataset Over Each Training Epoch

How to Reduce the Variance of Deep Learning Models in Keras Using Model Averaging Ensembles

Deep learning neural network models are highly flexible nonlinear algorithms capable of learning a near-infinite number of mapping functions. A frustration with this flexibility is the high variance of a final model. The same neural network model trained on the same dataset may find one of many different possible “good enough” solutions each time […]
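
A minimal sketch of the model averaging idea: re-fit the same network several times on the same data and average the predicted probabilities across the runs. The dataset and hyperparameters below are illustrative assumptions.

```python
# A minimal sketch of a model averaging ensemble (assumed synthetic data):
# the same network is re-fit several times on the same training set and the
# predicted probabilities are averaged to smooth out run-to-run variance.
import numpy as np
from sklearn.datasets import make_blobs
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

X, y = make_blobs(n_samples=1000, centers=3, n_features=2, random_state=2)
X_train, y_train, X_test, y_test = X[:500], y[:500], X[500:], y[500:]

def fit_model(X, y):
    model = Sequential([
        Dense(15, activation='relu', input_shape=(2,)),
        Dense(3, activation='softmax'),
    ])
    model.compile(loss='sparse_categorical_crossentropy', optimizer='adam')
    model.fit(X, y, epochs=200, verbose=0)
    return model

members = [fit_model(X_train, y_train) for _ in range(10)]
avg = np.mean([m.predict(X_test, verbose=0) for m in members], axis=0)
yhat = np.argmax(avg, axis=1)  # ensemble class predictions
```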


Ensemble Methods for Deep Learning Neural Networks to Reduce Variance and Improve Performance

Deep learning neural networks are nonlinear methods. They offer increased flexibility and can scale in proportion to the amount of training data available. A downside of this flexibility is that they learn via a stochastic training algorithm, which means that they are sensitive to the specifics of the training data and may find a different […]
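
To see this variance first-hand, the short experiment below (with an assumed synthetic dataset) fits the same network several times on the same data and reports how the test accuracy spreads across runs.

```python
# A small experiment (assumed synthetic data) showing run-to-run variance:
# the same network, fit repeatedly on the same data, reaches a different
# test accuracy each time because training is stochastic.
import numpy as np
from sklearn.datasets import make_blobs
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

X, y = make_blobs(n_samples=1000, centers=3, n_features=2, random_state=2)
X_train, y_train, X_test, y_test = X[:500], y[:500], X[500:], y[500:]

def fit_and_score():
    model = Sequential([
        Dense(15, activation='relu', input_shape=(2,)),
        Dense(3, activation='softmax'),
    ])
    model.compile(loss='sparse_categorical_crossentropy', optimizer='adam',
                  metrics=['accuracy'])
    model.fit(X_train, y_train, epochs=100, verbose=0)
    _, acc = model.evaluate(X_test, y_test, verbose=0)
    return acc

scores = [fit_and_score() for _ in range(5)]
print('accuracy per run: %s' % scores)  # typically differs run to run
print('std of accuracy: %.3f' % np.std(scores))
```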

Line Plots of Accuracy on Train and Test Datasets While Training With Dropout Regularization

How to Reduce Overfitting With Dropout Regularization in Keras

Dropout regularization is a computationally cheap way to regularize a deep neural network. Dropout works by probabilistically removing, or “dropping out,” inputs to a layer, which may be input variables in the data sample or activations from a previous layer. It has the effect of simulating a large number of networks with very different network […]
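
In Keras, this amounts to a single Dropout layer placed between existing layers. The sketch below is an illustrative two-class network, not the tutorial's exact model; it drops 40% of the hidden activations on each training update.

```python
# A minimal sketch of dropout regularization in Keras (illustrative model):
# the Dropout layer zeroes a random 40% of the previous layer's activations
# on each training update, and is bypassed automatically at prediction time.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout

model = Sequential([
    Dense(40, activation='relu', input_shape=(2,)),
    Dropout(0.4),  # drop 40% of the hidden activations during training
    Dense(1, activation='sigmoid'),
])
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
```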
