It can be challenging to develop a neural network predictive model for a new dataset. One approach is to first inspect the dataset and develop ideas for what models might work, then explore the learning dynamics of simple models on the dataset, then finally develop and tune a model for the dataset with a robust […]
How to Use Optimization Algorithms to Manually Fit Regression Models
Regression models are fit on training data by optimizing their coefficients. Linear regression is trained by least squares optimization and logistic regression by maximum likelihood; these are the most efficient approaches to finding coefficients that minimize error for these models. Nevertheless, it is possible to use alternate local search optimization algorithms to fit […]
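As a rough illustration of the idea, the sketch below fits the two coefficients of a simple linear regression by stochastic hill climbing on synthetic data; the dataset, step size, and iteration count are assumptions for illustration, not the procedure from the full post.

```python
# Fit linear regression coefficients by stochastic hill climbing on
# synthetic data (illustrative assumptions throughout).
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=100)
y = 3.0 * X + 0.5 + rng.normal(scale=0.1, size=100)

def mse(coeffs):
    # coeffs[0] is the slope, coeffs[1] the intercept
    yhat = coeffs[0] * X + coeffs[1]
    return float(np.mean((y - yhat) ** 2))

solution = rng.normal(size=2)   # random starting coefficients
best_score = mse(solution)
for _ in range(1000):
    candidate = solution + rng.normal(scale=0.1, size=2)  # small random step
    score = mse(candidate)
    if score < best_score:      # keep the step only if it lowers the error
        solution, best_score = candidate, score

print(solution, best_score)     # slope/intercept should approach 3.0 and 0.5
```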
Function Optimization With SciPy
Optimization involves finding the inputs to an objective function that result in the minimum or maximum output of the function. SciPy, the open-source Python library for scientific computing, provides a suite of optimization algorithms. Many of these algorithms are used as building blocks in other algorithms, most notably machine learning algorithms in the […]
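For a flavor of the API, the minimal sketch below minimizes a simple convex function with scipy.optimize.minimize; the objective and starting point are illustrative assumptions.

```python
# Minimize a simple two-variable bowl with SciPy's minimize() function.
from scipy.optimize import minimize

def objective(x):
    # convex bowl with its minimum at (0, 0)
    return x[0] ** 2 + x[1] ** 2

result = minimize(objective, x0=[1.0, 2.0], method='L-BFGS-B')
print(result.x, result.fun)  # solution found and its objective value
```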
Gradient Descent With Momentum from Scratch
Gradient descent is an optimization algorithm that follows the negative gradient of an objective function in order to locate the minimum of the function. A problem with gradient descent is that it can bounce around the search space on optimization problems that have large amounts of curvature or noisy gradients, and it can get stuck […]
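The core of the momentum idea fits in a few lines: each update blends the current gradient step with the previous update, which smooths out the bouncing. The sketch below applies it to a one-dimensional quadratic; the objective, step size, and momentum value are illustrative assumptions.

```python
# Gradient descent with momentum on f(x) = x^2 (illustrative values).

def derivative(x):
    # gradient of f(x) = x^2
    return 2.0 * x

x = 10.0          # starting point
change = 0.0      # accumulated "velocity" from previous updates
step_size = 0.1
momentum = 0.9
for _ in range(100):
    # blend the fresh gradient step with the previous change
    change = step_size * derivative(x) + momentum * change
    x -= change

print(x)  # should end up close to the minimum at 0.0
```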
Weight Initialization for Deep Learning Neural Networks
Weight initialization is an important design choice when developing deep learning neural network models. Historically, weight initialization involved using small random numbers, although over the last decade more specific heuristics have been developed that use information such as the type of activation function and the number of inputs to the node. […]
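Two of the most common modern heuristics can be sketched directly; the layer sizes below are illustrative assumptions, not a recommendation from the post.

```python
# Xavier/Glorot initialization (often paired with tanh/sigmoid) and
# He initialization (often paired with ReLU), sketched with NumPy.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out = 100, 50  # fan-in and fan-out of a hypothetical layer

# Xavier/Glorot uniform: bounds scale with both fan-in and fan-out
limit = np.sqrt(6.0 / (n_in + n_out))
w_xavier = rng.uniform(-limit, limit, size=(n_in, n_out))

# He normal: standard deviation scales with fan-in only
w_he = rng.normal(0.0, np.sqrt(2.0 / n_in), size=(n_in, n_out))

print(w_xavier.std(), w_he.std())
```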
Difference Between Backpropagation and Stochastic Gradient Descent
There is a lot of confusion among beginners about what algorithm is used to train deep learning neural network models. It is common to hear that neural networks learn using the “back-propagation of error” algorithm or “stochastic gradient descent.” Sometimes, either of these algorithms is used as a shorthand for how a neural net is fit […]
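One way to see the relationship: back-propagation computes the gradient of the loss with respect to the weights, and stochastic gradient descent uses that gradient to update them one example at a time. The sketch below shows both roles on a single linear unit with squared-error loss; the data and learning rate are illustrative assumptions.

```python
# One linear unit trained example-by-example: the chain rule plays the
# role of back-propagation, the update rule is stochastic gradient descent.
import numpy as np

rng = np.random.default_rng(1)
w, b = rng.normal(), rng.normal()
learning_rate = 0.1

for _ in range(200):
    x = rng.uniform(-1, 1)
    y = 3.0 * x + 1.0                  # target drawn from a known line
    yhat = w * x + b                   # forward pass
    error = yhat - y
    grad_w, grad_b = error * x, error  # "backprop": gradient of 0.5*error^2
    w -= learning_rate * grad_w        # "SGD": step against the gradient
    b -= learning_rate * grad_b

print(w, b)  # should approach 3.0 and 1.0
```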
Local Optimization Versus Global Optimization
Optimization refers to finding the set of inputs to an objective function that results in its maximum or minimum output. It is common to describe optimization problems in terms of local vs. global optimization. Similarly, it is also common to describe optimization algorithms or search algorithms in terms of local vs. […]
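The distinction is easy to demonstrate on a function with several minima: a local search started in the wrong basin settles for the nearest minimum, while a global search explores the whole bounded space. The objective, starting point, and bounds below are illustrative assumptions.

```python
# Local vs. global optimization on a multimodal one-variable function.
import numpy as np
from scipy.optimize import minimize, differential_evolution

def objective(x):
    # a bowl with a sine ripple: several local minima, one global minimum
    return x[0] ** 2 + 10.0 * np.sin(3.0 * x[0])

local = minimize(objective, x0=[4.0], method='L-BFGS-B')
best = differential_evolution(objective, bounds=[(-5.0, 5.0)], seed=1)
print(local.x, local.fun)  # may settle in a nearby local minimum
print(best.x, best.fun)    # should locate the best basin overall
```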
How to Develop a Neural Net for Predicting Car Insurance Payout
Developing a neural network predictive model for a new dataset can be challenging. One approach is to first inspect the dataset and develop ideas for what models might work, then explore the learning dynamics of simple models on the dataset, then finally develop and tune a model for the dataset with a robust test harness. […]
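As a toy stand-in for that workflow, the sketch below fits a small multilayer perceptron to synthetic regression data using scikit-learn's MLPRegressor; the data, layer size, and library choice are assumptions for illustration, not the tuned model from the post.

```python
# A small MLP regressor on synthetic data (illustrative assumptions).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 1))
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=200)

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=1)
model.fit(X, y)
print(model.predict([[0.5]]))  # should be near 3.0 * 0.5 = 1.5
```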
How to Use Nelder-Mead Optimization in Python
The Nelder-Mead optimization algorithm is a widely used approach for non-differentiable objective functions. It is generally referred to as a pattern search algorithm and is used as a local or global search procedure for challenging nonlinear and potentially noisy and multimodal function optimization problems. In this tutorial, you will discover the Nelder-Mead optimization algorithm. […]
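Via SciPy, the algorithm is one function call away; the sketch below applies it to a non-differentiable objective, the kind of function it is suited to. The objective and starting point are illustrative assumptions.

```python
# Nelder-Mead on a non-differentiable objective via SciPy.
from scipy.optimize import minimize

def objective(x):
    # non-differentiable at its minimum (0, 0), where gradient methods struggle
    return abs(x[0]) + abs(x[1])

result = minimize(objective, x0=[1.3, 0.7], method='nelder-mead')
print(result.x, result.fun)  # solution found and its objective value
```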
How to Get Started With Recommender Systems
Recommender systems may be the most common type of predictive model that the average person encounters. They provide the basis for recommendations on services such as Amazon, Spotify, and YouTube. Recommender systems are a huge, daunting topic if you’re just getting started. There are myriad data preparation techniques, algorithms, and model evaluation […]