Framework for Better Deep Learning

Modern deep learning libraries such as Keras allow you to define and start fitting a wide range of neural network models in minutes with just a few lines of code. Nevertheless, it is still challenging to configure a neural network to get good performance on a new predictive modeling problem. The challenge of getting good […]
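
As a quick illustration of that claim, here is a minimal sketch of defining and fitting a small MLP in Keras; the random dataset, layer sizes, and training settings are all assumptions made for the example.

```python
# A minimal sketch: define and fit a small MLP in Keras in a few lines.
# The random data, layer sizes, and training settings are assumptions.
from numpy.random import rand, randint
from keras.models import Sequential
from keras.layers import Dense

# hypothetical dataset: 100 samples, 10 features, binary labels
X = rand(100, 10)
y = randint(0, 2, size=(100,))

# define the model
model = Sequential()
model.add(Dense(32, activation='relu', input_dim=10))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# start fitting
model.fit(X, y, epochs=10, batch_size=16, verbose=0)
```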

Continue Reading
[Figure: Loss and accuracy learning curves on the train and test sets for an MLP on Problem 1]

How to Improve Performance With Transfer Learning for Deep Learning Neural Networks

An interesting benefit of deep learning neural networks is that they can be reused on related problems. Transfer learning is a technique in which a model developed for one predictive modeling problem is reused, partly or wholly, to accelerate training and improve the performance of a model on a different but related problem […]
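
A hedged sketch of the idea in Keras, assuming a saved source model named 'source_model.h5' and a three-class target problem (both made up for illustration): the hidden layers of the source model are reused, optionally frozen, and a new output layer is trained on the target problem.

```python
from keras.models import Sequential, load_model
from keras.layers import Dense

# load a model assumed to be already trained on the source problem
source = load_model('source_model.h5')

# reuse every layer except the old output layer
target = Sequential()
for layer in source.layers[:-1]:
    layer.trainable = False  # freeze transferred weights (optional)
    target.add(layer)

# add a new output layer for the target problem (3 classes, assumed)
target.add(Dense(3, activation='softmax'))
target.compile(loss='categorical_crossentropy', optimizer='adam',
               metrics=['accuracy'])
# target.fit(...) then proceeds on the target problem's data
```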

Continue Reading
How to Avoid Exploding Gradients in Neural Networks With Gradient Clipping

Training a neural network can become unstable given the choice of error function, learning rate, or even the scale of the target variable. Large gradients cause large updates to the weights during training, which can lead to numerical overflow or underflow, a failure often referred to as “exploding gradients.” The problem of exploding gradients is more common with recurrent neural networks, such […]
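
In Keras, gradient clipping can be configured on the optimizer. A minimal sketch, where the model, data shapes, and clipping thresholds are illustrative assumptions:

```python
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD

# a small regression model (shapes assumed for illustration)
model = Sequential()
model.add(Dense(25, activation='relu', input_dim=20))
model.add(Dense(1, activation='linear'))

# clipnorm=1.0 rescales the gradient vector when its L2 norm exceeds 1.0;
# alternatively, clipvalue=0.5 would clip each element to [-0.5, 0.5]
opt = SGD(lr=0.01, momentum=0.9, clipnorm=1.0)
model.compile(loss='mean_squared_error', optimizer=opt)
```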

Continue Reading
[Figure: Line plot for supervised greedy layer-wise pretraining showing model layers vs. train and test set classification accuracy on the blobs classification problem]

How to Develop Deep Learning Neural Networks With Greedy Layer-Wise Pretraining

Training deep neural networks was traditionally challenging, as the vanishing gradient problem meant that weights in layers close to the input layer were not updated in response to errors calculated on the training dataset. An innovation and important milestone in the field of deep learning was greedy layer-wise pretraining, which allowed very deep neural networks to […]
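
One way to sketch supervised greedy layer-wise pretraining in Keras (the helper name, layer size, and training settings are assumptions): repeatedly remove the output layer, freeze the layers trained so far, insert a new hidden layer, restore the output layer, and retrain.

```python
from keras.models import Sequential
from keras.layers import Dense

def add_pretrained_layer(model, trainX, trainy):
    # set aside the current output layer, then remove it
    output_layer = model.layers[-1]
    model.pop()
    # freeze the layers pretrained so far
    for layer in model.layers:
        layer.trainable = False
    # insert a new hidden layer and restore the output layer
    model.add(Dense(10, activation='relu'))
    model.add(output_layer)
    # recompile and retrain; only the new and output layers are updated
    model.compile(loss='categorical_crossentropy', optimizer='adam')
    model.fit(trainX, trainy, epochs=100, verbose=0)
```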

Continue Reading
[Figure: Line plots of KL divergence loss and classification accuracy over training epochs on the blobs multi-class classification problem]

How to Choose Loss Functions When Training Deep Learning Neural Networks

Deep learning neural networks are trained using the stochastic gradient descent optimization algorithm. As part of the optimization algorithm, the error for the current state of the model must be estimated repeatedly. This requires the choice of an error function, conventionally called a loss function, that can be used to estimate the loss of the […]
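
The choice is made when compiling the model, and it should match the problem type. A minimal sketch with an assumed model, showing common pairings of output activation and loss:

```python
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(25, activation='relu', input_dim=20))

# binary classification: sigmoid output with binary cross-entropy
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam',
              metrics=['accuracy'])

# other common pairings (with a different output layer):
#   regression: linear output with loss='mean_squared_error'
#   multi-class: softmax output with loss='categorical_crossentropy'
```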

Continue Reading
[Figure: Line plots of train and test accuracy for a suite of learning rates on the blobs classification problem]

Understand the Impact of Learning Rate on Model Performance With Deep Learning Neural Networks

Deep learning neural networks are trained using the stochastic gradient descent optimization algorithm. The learning rate is a hyperparameter that controls how much to change the model in response to the estimated error each time the model weights are updated. Choosing the learning rate is challenging as a value too small may result in a […]
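
A hedged sketch of comparing a suite of learning rates with Keras and SGD, in the spirit of the plot above; the data, model structure, and the specific rates are assumptions (trainy is assumed to be one hot encoded):

```python
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD

def fit_with_lr(trainX, trainy, lrate):
    # small MLP for a 3-class problem with 2 input features (assumed)
    model = Sequential()
    model.add(Dense(50, activation='relu', input_dim=2))
    model.add(Dense(3, activation='softmax'))
    # set the learning rate on the optimizer
    opt = SGD(lr=lrate)
    model.compile(loss='categorical_crossentropy', optimizer=opt,
                  metrics=['accuracy'])
    # the returned history holds accuracy per epoch for plotting
    return model.fit(trainX, trainy, epochs=200, verbose=0)

# sweep learning rates on a log scale, e.g.:
# for lrate in [1E-0, 1E-1, 1E-2, 1E-3, 1E-4, 1E-5, 1E-6, 1E-7]:
#     history = fit_with_lr(trainX, trainy, lrate)
```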

Continue Reading