Archive | Deep Learning Performance

[Figure: Line Plots of Accuracy on Train and Test Datasets While Training With Dropout Regularization]

How to Reduce Overfitting With Dropout Regularization in Keras

Dropout regularization is a computationally cheap way to regularize a deep neural network. Dropout works by probabilistically removing, or “dropping out,” inputs to a layer, which may be input variables in the data sample or activations from a previous layer. It has the effect of simulating a large number of networks with very different network […]
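
As a rough sketch of how this is wired up in Keras, a Dropout layer is placed after the layer whose activations it should drop; the layer widths, the rate of 0.5, and the 20-feature input below are illustrative assumptions, not values taken from the post:

```python
# Minimal sketch: a small MLP with Dropout between its layers (tf.keras).
# All sizes and the dropout rate are placeholder choices for illustration.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout

model = Sequential([
    Dense(64, activation='relu', input_shape=(20,)),
    Dropout(0.5),                    # drop 50% of these activations during training
    Dense(64, activation='relu'),
    Dropout(0.5),
    Dense(1, activation='sigmoid'),  # binary-classification output
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
```

At prediction time Keras disables the dropout layers automatically, so no extra handling is needed.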

Continue Reading

A Gentle Introduction to Dropout for Regularizing Deep Neural Networks

Deep learning neural networks are likely to quickly overfit a training dataset with few examples. Ensembles of neural networks with different model configurations are known to reduce overfitting, but require the additional computational expense of training and maintaining multiple models. A single model can be used to simulate having a large number of different network […]
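
The mechanism behind that simulation can be sketched without a framework: each forward pass draws a fresh random mask, so every pass runs a different "thinned" network. The NumPy snippet below is a minimal sketch of inverted dropout, with a toy activation vector and an arbitrary rate, not code from the post:

```python
# Minimal sketch of inverted dropout on a vector of activations (NumPy).
# The rate and the toy input are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, rate=0.5):
    # Zero each unit with probability `rate`, then rescale the survivors by
    # 1/(1 - rate) so the expected activation stays unchanged.
    mask = rng.random(activations.shape) >= rate
    return activations * mask / (1.0 - rate)

a = np.ones((1, 8))       # toy activations from a hidden layer
print(dropout(a))         # a different thinned sub-network on every call
```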

Continue Reading
[Figure: Scatter Plot of Circles Dataset with Color Showing the Class Value of Each Sample]

How to Reduce Generalization Error With Activity Regularization in Keras

Activity regularization provides an approach to encourage a neural network to learn sparse features or internal representations of raw observations. It is common to seek sparse learned representations in autoencoders, called sparse autoencoders, and in encoder-decoder models, although the approach can also be used generally to reduce overfitting and improve a model’s ability to generalize […]
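
As a minimal sketch of the Keras mechanics, an activity regularizer attaches a penalty to a layer's output activations rather than to its weights; the layer width, L1 coefficient, and two-feature input below are placeholder assumptions:

```python
# Minimal sketch: an L1 penalty on a hidden layer's activations (tf.keras),
# encouraging a sparse learned representation. All values are illustrative.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.regularizers import l1

model = Sequential([
    Dense(64, activation='relu', input_shape=(2,),
          activity_regularizer=l1(1e-4)),   # penalizes activations, not weights
    Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
```

Note the distinction from `kernel_regularizer`, which penalizes the layer's weights rather than its activations.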

Continue Reading

A Gentle Introduction to Activation Regularization in Deep Learning

Deep learning models are capable of automatically learning a rich internal representation from raw input data. This is called feature or representation learning. Better learned representations, in turn, can lead to better insights into the domain, e.g. via visualization of learned features, and to better predictive models that make use of the learned features. A […]

Continue Reading

How to Configure the Number of Layers and Nodes in a Neural Network

Artificial neural networks have two main hyperparameters that control the architecture or topology of the network: the number of layers and the number of nodes in each hidden layer. You must specify values for these parameters when configuring your network. The most reliable way to configure these hyperparameters for your specific predictive modeling problem is […]
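
One way to treat those two hyperparameters systematically is to make them arguments of a model-building function and vary them in a search; the helper below is a minimal sketch, and the specific values (two hidden layers of ten nodes, five inputs) are placeholders rather than recommendations:

```python
# Minimal sketch: an MLP whose depth and width are explicit hyperparameters
# (tf.keras). The example values are placeholders, not recommendations.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

def build_mlp(n_inputs, n_layers, n_nodes):
    model = Sequential()
    model.add(Dense(n_nodes, activation='relu', input_shape=(n_inputs,)))
    for _ in range(n_layers - 1):
        model.add(Dense(n_nodes, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))   # binary-classification output
    return model

model = build_mlp(n_inputs=5, n_layers=2, n_nodes=10)
model.summary()
```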

Continue Reading
[Figure: Comparison of Adam to Other Optimization Algorithms Training a Multilayer Perceptron]

Gentle Introduction to the Adam Optimization Algorithm for Deep Learning

The choice of optimization algorithm for your deep learning model can mean the difference between good results in minutes, hours, or days. The Adam optimization algorithm is an extension to stochastic gradient descent that has recently seen broader adoption for deep learning applications in computer vision and natural language processing. In this post, you will […]
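
As a brief sketch of using it in Keras, Adam is passed to `compile()`; the values shown are the defaults from the Adam paper (learning rate 0.001, beta_1 0.9, beta_2 0.999), included only for illustration, and the model architecture is an arbitrary placeholder:

```python
# Minimal sketch: compiling a model with the Adam optimizer (tf.keras).
# The architecture and hyperparameter values are illustrative only.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam

model = Sequential([
    Dense(32, activation='relu', input_shape=(10,)),
    Dense(1, activation='sigmoid'),
])
model.compile(optimizer=Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999),
              loss='binary_crossentropy', metrics=['accuracy'])
```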

Continue Reading