Archive | Better Deep Learning


How to Use Weight Decay to Reduce Overfitting of Deep Learning Neural Network Models (with Keras)

Weight regularization provides an approach to reduce overfitting of a deep learning neural network model on the training data and improve its performance on new data, such as the holdout test set. There are multiple types of weight regularization, such as L1 and L2 vector norms, and each requires a hyperparameter […]

Continue Reading
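
As a quick illustration of the technique the post covers, the sketch below adds an L2 penalty (weight decay) to a Keras layer. The layer sizes, input shape, and the 0.01 regularization coefficient are illustrative assumptions, not values from the post.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.regularizers import l2

# L2 weight regularization: adds 0.01 * sum(w^2) over the layer's kernel
# weights to the training loss, discouraging large weights.
model = Sequential([
    Dense(32, activation='relu', input_shape=(10,),
          kernel_regularizer=l2(0.01)),
    Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])
```

Swapping l2 for l1 (or l1_l2) applies the L1 vector norm penalty instead; the coefficient is the hyperparameter the excerpt refers to, and it is typically tuned against a holdout set.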

How to Configure the Number of Layers and Nodes in a Neural Network

Artificial neural networks have two main hyperparameters that control the architecture or topology of the network: the number of layers and the number of nodes in each hidden layer. You must specify values for these parameters when configuring your network. The most reliable way to configure these hyperparameters for your specific predictive modeling problem is […]

Continue Reading
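
To make the recommendation concrete, here is a minimal sketch of systematic experimentation over these two hyperparameters; the candidate grid, input/output sizes, and placeholder data names (X_train, y_train, X_val, y_val) are assumptions for illustration.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

def build_mlp(n_layers, n_nodes, n_inputs, n_outputs):
    # Stack n_layers hidden layers of n_nodes each, then an output layer.
    model = Sequential()
    model.add(Dense(n_nodes, activation='relu', input_shape=(n_inputs,)))
    for _ in range(n_layers - 1):
        model.add(Dense(n_nodes, activation='relu'))
    model.add(Dense(n_outputs, activation='softmax'))
    model.compile(optimizer='adam', loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model

# Illustrative grid search: fit each candidate and compare holdout accuracy.
for n_layers in (1, 2, 3):
    for n_nodes in (8, 16, 32):
        model = build_mlp(n_layers, n_nodes, n_inputs=10, n_outputs=3)
        # model.fit(X_train, y_train, validation_data=(X_val, y_val), ...)
```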

Gentle Introduction to the Adam Optimization Algorithm for Deep Learning

The choice of optimization algorithm for your deep learning model can mean the difference between good results in minutes, hours, or days. The Adam optimization algorithm is an extension to stochastic gradient descent that has recently seen broader adoption for deep learning applications in computer vision and natural language processing. In this post, you will […]

Continue Reading
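
For readers who want the mechanics, the sketch below implements one Adam update step with NumPy, following the update rule from the original paper; the default hyperparameter values shown are the paper's, while the learning rate is problem-dependent and shown only as a common default.

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for parameters w given gradient grad at step t (1-based)."""
    # Exponential moving averages of the gradient and its elementwise square.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    # Bias correction compensates for the zero initialization of m and v.
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    # Per-parameter step size adapted by the second-moment estimate.
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v
```

Unlike plain stochastic gradient descent, each parameter receives its own effective step size, which is the property behind Adam's robustness across the computer vision and natural language processing applications the excerpt mentions.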