Archive | Deep Learning Performance

[Thumbnail: line plot of the classification accuracy of an MLP with batch normalization after the activation function, on the train and test datasets over training epochs]

How to Accelerate Learning of Deep Neural Networks With Batch Normalization

Batch normalization is a technique designed to automatically standardize the inputs to a layer in a deep learning neural network. Once implemented, batch normalization has the effect of dramatically accelerating the training process of a neural network and, in some cases, improving the performance of the model via a modest regularization effect. In this tutorial, […]
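Placement is easy to try in practice. A minimal sketch, assuming the tensorflow.keras API (the layer sizes and binary task are illustrative, not taken from the tutorial), inserts the layer after a hidden layer's activation, matching the after-activation variant tracked in the plot above:

```python
# Minimal sketch: an MLP with batch normalization after the activation.
# Assumes TensorFlow/Keras; sizes and the binary task are illustrative.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, BatchNormalization

model = Sequential([
    Dense(50, input_shape=(2,)),
    Activation('relu'),
    BatchNormalization(),  # standardize the activations fed to the next layer
    Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
```

Moving the BatchNormalization layer before the Activation gives the before-activation variant, the other common placement.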

Continue Reading

A Gentle Introduction to Batch Normalization for Deep Neural Networks

Training deep neural networks with tens of layers is challenging, as they can be sensitive to the initial random weights and to the configuration of the learning algorithm. One possible reason for this difficulty is that the distribution of the inputs to layers deep in the network may change after each mini-batch when the weights are updated. This […]
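The transform at the heart of the technique is small enough to sketch directly. Below is an illustrative NumPy version of the forward pass (the function name and toy data are my own): each feature is standardized with the mini-batch mean and variance, then rescaled by the learned parameters gamma and beta:

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    # Standardize each feature using the statistics of this mini-batch,
    # then scale and shift with the learned parameters gamma and beta.
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

# Toy mini-batch: 4 samples, 3 features, with an arbitrary offset and scale.
x = np.random.randn(4, 3) * 10 + 5
out = batch_norm_forward(x, gamma=np.ones(3), beta=np.zeros(3))
print(out.mean(axis=0), out.std(axis=0))  # roughly 0 and 1 per feature
```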

Continue Reading
[Thumbnail: line plot of a cosine annealing learning rate schedule]

Snapshot Ensemble Deep Learning Neural Network in Python

Model ensembles can achieve lower generalization error than single models, but are challenging to develop with deep learning neural networks given the computational cost of training each contributing model. An alternative is to save multiple model snapshots during a single training run and combine their predictions to make an ensemble prediction. A limitation of this […]
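Snapshot ensembles lean on an aggressive cyclic learning rate so the network converges to a different minimum in each cycle. As an illustrative sketch (the function and parameter names are my own, not necessarily those used in the tutorial), a cosine annealing schedule like the one plotted above can be computed as:

```python
from math import cos, pi

def cosine_annealing(epoch, n_epochs, n_cycles, lr_max):
    # The rate restarts at lr_max at the top of each cycle and decays
    # toward zero, at which point a model snapshot would be saved.
    epochs_per_cycle = n_epochs // n_cycles
    t = epoch % epochs_per_cycle
    return lr_max / 2 * (cos(pi * t / epochs_per_cycle) + 1)

# e.g. 100 epochs split into 5 cycles with a maximum learning rate of 0.01
rates = [cosine_annealing(e, n_epochs=100, n_cycles=5, lr_max=0.01)
         for e in range(100)]
```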

Continue Reading
[Thumbnail: four scatter plots of the circles dataset varied by the amount of statistical noise]

Impact of Dataset Size on Deep Learning Model Skill and Performance Estimates

Supervised learning is challenging, although the depths of this challenge are often learned, then forgotten or willfully ignored. This must be the case, because dwelling too long on this challenge may result in a pessimistic outlook. In spite of the challenge, we continue to wield supervised learning algorithms, and they perform well in practice. Fundamental […]
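One concrete way to feel the effect, sketched here with scikit-learn's make_circles and a small illustrative MLP (not necessarily the setup used in the post, whose thumbnail also varies the statistical noise), is to change only the sample size on a synthetic problem and watch the skill estimate move:

```python
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Illustrative probe: the same model and problem, with only the amount
# of available data changing between runs.
for n in (100, 500, 5000):
    X, y = make_circles(n_samples=n, noise=0.1, random_state=1)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5,
                                              random_state=1)
    model = MLPClassifier(hidden_layer_sizes=(25,), max_iter=2000,
                          random_state=1)
    model.fit(X_tr, y_tr)
    print(n, round(model.score(X_te, y_te), 3))
```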

Continue Reading
[Thumbnail: visualization of a stacked generalization ensemble of neural network models]

Stacking Ensemble for Deep Learning Neural Networks in Python

Model averaging is an ensemble technique in which multiple sub-models contribute equally to a combined prediction. Model averaging can be improved by weighting the contribution of each sub-model to the combined prediction by the expected performance of that sub-model. This can be extended further by training an entirely new model to learn how to best combine […]
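A rough sketch of the stacking step, using scikit-learn for brevity (the model choices and split sizes are illustrative), shows the core move: the sub-models' predicted probabilities become the input features of a new, higher-level model:

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Toy multi-class problem, split three ways: level-0 training,
# meta-model training, and a final hold-out for evaluation.
X, y = make_blobs(n_samples=1500, centers=3, random_state=2)
X_train, y_train = X[:500], y[:500]
X_meta, y_meta = X[500:1000], y[500:1000]
X_test, y_test = X[1000:], y[1000:]

# Level 0: several independently trained sub-models.
members = [MLPClassifier(hidden_layer_sizes=(25,), max_iter=500,
                         random_state=i).fit(X_train, y_train)
           for i in range(3)]

def stack(models, X):
    # Concatenate each sub-model's class probabilities into one feature row.
    return np.hstack([m.predict_proba(X) for m in models])

# Level 1: a new model learns how best to combine the sub-models.
meta = LogisticRegression(max_iter=1000).fit(stack(members, X_meta), y_meta)
print('stacked accuracy:', meta.score(stack(members, X_test), y_test))
```

scikit-learn also provides a ready-made StackingClassifier for this pattern; a deep learning version would stack Keras models' predictions in the same way.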

Continue Reading