Archive | Optimization

Line Plot of Objective Function With Search Starting Point and Optima

Line Search Optimization With Python

The line search is an optimization algorithm that can be used for objective functions with one or more variables. It provides a way to use a univariate optimization algorithm, like a bisection search, on a multivariate objective function, by using the search to locate the optimal step size along a chosen direction from a known point […]
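As a rough sketch of the idea (not the code from the full post), SciPy's line_search can locate a step size along a chosen descent direction; the objective, gradient, starting point, and direction below are made up for illustration.

```python
# Minimal sketch: find a step size along a search direction with SciPy's line search.
# The objective, gradient, point, and direction here are illustrative only.
import numpy as np
from scipy.optimize import line_search

def objective(x):
    return x[0] ** 2.0 + x[1] ** 2.0  # simple convex bowl

def gradient(x):
    return np.array([2.0 * x[0], 2.0 * x[1]])

point = np.array([3.0, 3.0])      # known starting point
direction = -gradient(point)      # search along the negative gradient

alpha = line_search(objective, gradient, point, direction)[0]
print('step size:', alpha)
print('new point:', point + alpha * direction)
```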

Continue Reading
Contour Plot of the Test Objective Function With RMSProp Search Results Shown

Gradient Descent With RMSProp from Scratch

Gradient descent is an optimization algorithm that follows the negative gradient of an objective function in order to locate the minimum of the function. A limitation of gradient descent is that it uses the same step size (learning rate) for each input variable. Root Mean Squared Propagation, or RMSProp for short, is an extension of the gradient descent optimization algorithm […]
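A minimal sketch of an RMSProp-style update, assuming a simple two-variable bowl function and untuned hyperparameters chosen for illustration (not the code from the full tutorial):

```python
# Minimal sketch of RMSProp-style gradient descent on a simple bowl function.
# The test function and hyperparameters are illustrative, not tuned.
import numpy as np

def objective(x):
    return x[0] ** 2.0 + x[1] ** 2.0

def gradient(x):
    return np.array([2.0 * x[0], 2.0 * x[1]])

def rmsprop(start, n_iter=50, step_size=0.01, rho=0.99, eps=1e-8):
    x = np.array(start, dtype=float)
    sq_grad_avg = np.zeros_like(x)  # decaying average of squared gradients
    for _ in range(n_iter):
        g = gradient(x)
        sq_grad_avg = rho * sq_grad_avg + (1.0 - rho) * g ** 2.0
        # per-parameter step size scaled by the root of the squared-gradient average
        x = x - step_size * g / (np.sqrt(sq_grad_avg) + eps)
    return x, objective(x)

print(rmsprop(start=[1.0, -0.5]))
```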

Continue Reading
3D Surface Plot of the Ackley Multimodal Function

Dual Annealing Optimization With Python

Dual Annealing is a stochastic global optimization algorithm. It is an implementation of the generalized simulated annealing algorithm, an extension of simulated annealing. In addition, it is paired with a local search algorithm that is automatically performed at the end of the simulated annealing procedure. This combination of effective global and local search procedures provides […]
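As a rough illustration, SciPy's dual_annealing can be applied to the two-dimensional Ackley function pictured above; the bounds and seed below are arbitrary choices for the sketch:

```python
# Minimal sketch: SciPy's dual_annealing on the 2D Ackley function.
# Bounds and seed are illustrative.
import numpy as np
from scipy.optimize import dual_annealing

def ackley(x):
    a, b, c = 20.0, 0.2, 2.0 * np.pi
    return (-a * np.exp(-b * np.sqrt(np.mean(x ** 2)))
            - np.exp(np.mean(np.cos(c * x))) + a + np.e)

bounds = [(-5.0, 5.0), (-5.0, 5.0)]
result = dual_annealing(ackley, bounds, seed=1)
print('f(%s) = %.5f' % (result.x, result.fun))
```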

Continue Reading

A Gentle Introduction to the BFGS Optimization Algorithm

The Broyden, Fletcher, Goldfarb, and Shanno, or BFGS Algorithm, is a local search optimization algorithm. It is a type of second-order optimization algorithm, meaning that it makes use of the second-order derivative of an objective function and belongs to a class of algorithms referred to as Quasi-Newton methods that approximate the second derivative (called the […]
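A minimal sketch of running SciPy's BFGS implementation on a simple quadratic, with an illustrative objective, gradient, and starting point (not the example from the full post):

```python
# Minimal sketch: minimizing a simple quadratic with SciPy's BFGS method.
# The objective, gradient, and starting point are illustrative.
import numpy as np
from scipy.optimize import minimize

def objective(x):
    return x[0] ** 2.0 + x[1] ** 2.0

def gradient(x):
    return np.array([2.0 * x[0], 2.0 * x[1]])

result = minimize(objective, x0=np.array([1.5, -2.0]), method='BFGS', jac=gradient)
print('solution:', result.x, 'objective:', result.fun)
```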

Continue Reading

What Is a Gradient in Machine Learning?

Gradient is a commonly used term in optimization and machine learning. For example, deep learning neural networks are fit using stochastic gradient descent, and many standard optimization algorithms used to fit machine learning models use gradient information. In order to understand what a gradient is, you need to understand what a derivative is from the […]
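As a small illustration of the idea, a gradient is just the vector of partial derivatives of a function, and it can be approximated numerically with central differences; the function and point below are made up for the example:

```python
# Minimal sketch: approximate a gradient with central differences.
# The function and evaluation point are illustrative only.
import numpy as np

def f(x):
    return x[0] ** 2.0 + 3.0 * x[1]

def numerical_gradient(func, x, h=1e-6):
    grad = np.zeros_like(x)
    for i in range(len(x)):
        step = np.zeros_like(x)
        step[i] = h
        # central difference for the partial derivative with respect to x[i]
        grad[i] = (func(x + step) - func(x - step)) / (2.0 * h)
    return grad

point = np.array([2.0, -1.0])
print(numerical_gradient(f, point))  # analytic gradient is [2*x0, 3] = [4., 3.]
```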

Continue Reading
Contour Plot of the Test Objective Function With Adadelta Search Results Shown

Gradient Descent With Adadelta from Scratch

Gradient descent is an optimization algorithm that follows the negative gradient of an objective function in order to locate the minimum of the function. A limitation of gradient descent is that it uses the same step size (learning rate) for each input variable. AdaGrad and RMSProp are extensions to gradient descent that add a self-adaptive […]
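A minimal sketch of an Adadelta-style update, assuming a simple two-variable bowl function and illustrative hyperparameters (not the code from the full tutorial):

```python
# Minimal sketch of Adadelta-style gradient descent on a simple bowl function.
# The test function and hyperparameters are illustrative, not tuned.
import numpy as np

def objective(x):
    return x[0] ** 2.0 + x[1] ** 2.0

def gradient(x):
    return np.array([2.0 * x[0], 2.0 * x[1]])

def adadelta(start, n_iter=200, rho=0.9, eps=1e-6):
    x = np.array(start, dtype=float)
    avg_sq_grad = np.zeros_like(x)   # decaying average of squared gradients
    avg_sq_delta = np.zeros_like(x)  # decaying average of squared updates
    for _ in range(n_iter):
        g = gradient(x)
        avg_sq_grad = rho * avg_sq_grad + (1.0 - rho) * g ** 2.0
        # step size adapts per parameter from the two decaying averages
        delta = -np.sqrt(avg_sq_delta + eps) / np.sqrt(avg_sq_grad + eps) * g
        avg_sq_delta = rho * avg_sq_delta + (1.0 - rho) * delta ** 2.0
        x = x + delta
    return x, objective(x)

print(adadelta(start=[1.0, -0.5]))
```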

Continue Reading

Iterated Local Search From Scratch in Python

Iterated Local Search is a stochastic global optimization algorithm. It involves the repeated application of a local search algorithm to modified versions of a good solution found previously. In this way, it is like a clever version of the stochastic hill climbing with random restarts algorithm. The intuition behind the algorithm is that random restarts […]
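A rough sketch of the iterated local search loop, using stochastic hill climbing as the inner local search; the objective, bounds, and perturbation sizes are illustrative assumptions rather than the tutorial's code:

```python
# Minimal sketch of iterated local search with stochastic hill climbing inside.
# The objective, bounds, and perturbation sizes are illustrative only.
import numpy as np

rng = np.random.default_rng(1)

def objective(x):
    return x[0] ** 2.0 + x[1] ** 2.0

def hill_climb(start, n_iter=100, step=0.1):
    best, best_eval = start, objective(start)
    for _ in range(n_iter):
        candidate = best + rng.normal(scale=step, size=len(best))
        candidate_eval = objective(candidate)
        if candidate_eval < best_eval:
            best, best_eval = candidate, candidate_eval
    return best, best_eval

def iterated_local_search(bounds, n_restarts=20, perturb=1.0):
    start = rng.uniform(bounds[:, 0], bounds[:, 1])
    best, best_eval = hill_climb(start)
    for _ in range(n_restarts):
        # perturb the best solution so far, then apply the local search again
        start = best + rng.normal(scale=perturb, size=len(best))
        solution, solution_eval = hill_climb(start)
        if solution_eval < best_eval:
            best, best_eval = solution, solution_eval
    return best, best_eval

bounds = np.array([[-5.0, 5.0], [-5.0, 5.0]])
print(iterated_local_search(bounds))
```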

Continue Reading