Gradient Descent With Nesterov Momentum From Scratch

Gradient descent is an optimization algorithm that follows the negative gradient of an objective function in order to locate the minimum of the function.

A limitation of gradient descent is that it can get stuck in flat areas or bounce around if the objective function returns noisy gradients. Momentum is an approach that accelerates the progress of the search to skim across flat areas and smooth out bouncy gradients.

In some cases, the acceleration of momentum can cause the search to miss or overshoot the minima at the bottom of basins or valleys. Nesterov momentum is an extension of momentum that involves calculating the decaying moving average of the gradients of projected positions in the search space rather than the actual positions themselves.

This has the effect of harnessing the accelerating benefits of momentum whilst allowing the search to slow down when approaching the optima and reduce the likelihood of missing or overshooting it.

In this tutorial, you will discover how to develop the Gradient Descent optimization algorithm with Nesterov Momentum from scratch.

After completing this tutorial, you will know:

  • Gradient descent is an optimization algorithm that uses the gradient of the objective function to navigate the search space.
  • The convergence of the gradient descent optimization algorithm can be accelerated by extending the algorithm with Nesterov Momentum.
  • How to implement the Nesterov Momentum optimization algorithm from scratch and apply it to an objective function and evaluate the results.

Kick-start your project with my new book Optimization for Machine Learning, including step-by-step tutorials and the Python source code files for all examples.

Let’s get started.

Photo by Bonnie Moreland, some rights reserved.

Tutorial Overview

This tutorial is divided into three parts; they are:

  1. Gradient Descent
  2. Nesterov Momentum
  3. Gradient Descent With Nesterov Momentum
    1. Two-Dimensional Test Problem
    2. Gradient Descent Optimization With Nesterov Momentum
    3. Visualization of Nesterov Momentum

Gradient Descent

Gradient descent is an optimization algorithm.

It is technically referred to as a first-order optimization algorithm, as it explicitly makes use of the first-order derivative of the target objective function.

First-order methods rely on gradient information to help direct the search for a minimum …

— Page 69, Algorithms for Optimization, 2019.

The first-order derivative, or simply the “derivative,” is the rate of change or slope of the target function at a specific point, e.g. for a specific input.

If the target function takes multiple input variables, it is referred to as a multivariate function and the input variables can be thought of as a vector. In turn, the derivative of a multivariate target function may also be taken as a vector and is referred to generally as the “gradient.”

  • Gradient: First order derivative for a multivariate objective function.

The derivative or the gradient points in the direction of the steepest ascent of the target function for a specific input.

Gradient descent refers to a minimization optimization algorithm that follows the negative of the gradient of the target function downhill in order to locate its minimum.

The gradient descent algorithm requires a target function that is being optimized and the derivative function for the objective function. The target function f() returns a score for a given set of inputs, and the derivative function f'() gives the derivative of the target function for a given set of inputs.

The gradient descent algorithm requires a starting point (x) in the problem, such as a randomly selected point in the input space.

The derivative is then calculated and a step is taken in the input space that is expected to result in a downhill movement in the target function, assuming we are minimizing the target function.

A downhill movement is made by first calculating how far to move in the input space, calculated as the step size (called alpha or the learning rate) multiplied by the gradient. This is then subtracted from the current point, ensuring we move against the gradient, or down the target function.

  • x(t+1) = x(t) – step_size * f'(x(t))

The steeper the objective function at a given point, the larger the magnitude of the gradient, and in turn, the larger the step taken in the search space. The size of the step taken is scaled using a step size hyperparameter.

  • Step Size (alpha): Hyperparameter that controls how far to move in the search space against the gradient each iteration of the algorithm.

If the step size is too small, the movement in the search space will be small, and the search will take a long time. If the step size is too large, the search may bounce around the search space and skip over the optima.
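The update rule above can be made concrete in a few lines of Python. The one-dimensional objective f(x) = x^2, the starting point, and the step size below are illustrative assumptions, not part of the original listing:

```python
# minimal gradient descent sketch on the assumed objective f(x) = x^2
def f_prime(x):
    return 2.0 * x  # first derivative of x^2

x = 1.0          # starting point in the input space
step_size = 0.1  # alpha, the learning rate
for _ in range(50):
    # subtract the scaled gradient to move downhill
    x = x - step_size * f_prime(x)
print(x)  # x moves toward the minimum at 0.0
```

Each iteration shrinks x toward the minimum; a larger step size would take bigger strides but risks overshooting.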

Now that we are familiar with the gradient descent optimization algorithm, let’s take a look at Nesterov momentum.


Nesterov Momentum

Nesterov Momentum is an extension to the gradient descent optimization algorithm.

The approach was described by (and named for) Yurii Nesterov in his 1983 paper titled “A Method For Solving The Convex Programming Problem With Convergence Rate O(1/k^2).”

Ilya Sutskever, et al. are responsible for popularizing the application of Nesterov Momentum in the training of neural networks with stochastic gradient descent described in their 2013 paper “On The Importance Of Initialization And Momentum In Deep Learning.” They referred to the approach as “Nesterov’s Accelerated Gradient,” or NAG for short.

Nesterov Momentum is just like more traditional momentum except that the update is performed using the partial derivative of the projected update rather than the derivative of the current variable value.

While NAG is not typically thought of as a type of momentum, it indeed turns out to be closely related to classical momentum, differing only in the precise update of the velocity vector …

— On The Importance Of Initialization And Momentum In Deep Learning, 2013.

Traditional momentum involves maintaining an additional variable that represents the last update performed to the variable, an exponentially decaying moving average of past gradients.

The momentum algorithm accumulates an exponentially decaying moving average of past gradients and continues to move in their direction.

— Page 296, Deep Learning, 2016.

This last change to the variable is then scaled by a “momentum” hyperparameter and added to the variable; the momentum controls how much of the last change to include, e.g. 0.9 for 90%.

It is easier to think about this update in terms of two steps, e.g. first calculate the change in the variable using the partial derivative, then calculate the new value for the variable.

  • change(t+1) = (momentum * change(t)) – (step_size * f'(x(t)))
  • x(t+1) = x(t) + change(t+1)
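These two steps can be sketched directly in code. The one-dimensional objective f(x) = x^2 and the hyperparameter values below are illustrative assumptions:

```python
# classical momentum sketch on the assumed objective f(x) = x^2
def f_prime(x):
    return 2.0 * x  # first derivative of x^2

x, change = 1.0, 0.0          # starting point and last change
step_size, momentum = 0.1, 0.3  # illustrative hyperparameters
for _ in range(50):
    # step 1: change combines the decayed last change and the gradient step
    change = momentum * change - step_size * f_prime(x)
    # step 2: apply the change to the variable
    x = x + change
print(x)  # x moves toward the minimum at 0.0
```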

We can think of momentum in terms of a ball rolling downhill that will accelerate and continue to go in the same direction even in the presence of small hills.

Momentum can be interpreted as a ball rolling down a nearly horizontal incline. The ball naturally gathers momentum as gravity causes it to accelerate, just as the gradient causes momentum to accumulate in this descent method.

— Page 75, Algorithms for Optimization, 2019.

A problem with momentum is that acceleration can sometimes cause the search to overshoot the minima at the bottom of a basin or valley floor.

Nesterov Momentum can be thought of as a modification to momentum to overcome this problem of overshooting the minima.

It involves first calculating the projected position of the variable using the change from the last iteration and using the derivative of the projected position in the calculation of the new position for the variable.

Calculating the gradient of the projected position acts like a correction factor for the acceleration that has been accumulated.

With Nesterov momentum the gradient is evaluated after the current velocity is applied. Thus one can interpret Nesterov momentum as attempting to add a correction factor to the standard method of momentum.

— Page 300, Deep Learning, 2016.

It is easier to think about Nesterov Momentum in terms of four steps:

  1. Project the position of the solution.
  2. Calculate the gradient of the projection.
  3. Calculate the change in the variable using the partial derivative.
  4. Update the variable.

Let’s go through these steps in more detail.

First, the projected position of the entire solution is calculated using the change calculated in the last iteration of the algorithm.

  • projection(t+1) = x(t) + (momentum * change(t))

We can then calculate the gradient for this new position.

  • gradient(t+1) = f'(projection(t+1))

Now we can calculate the new position of each variable using the gradient of the projection, first by calculating the change in each variable.

  • change(t+1) = (momentum * change(t)) – (step_size * gradient(t+1))

And finally, calculating the new value for each variable using the calculated change.

  • x(t+1) = x(t) + change(t+1)
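The four update equations above can be traced through in code; the one-dimensional objective f(x) = x^2 and the hyperparameters below are illustrative assumptions:

```python
# one-dimensional nesterov momentum sketch on the assumed objective f(x) = x^2
def f_prime(x):
    return 2.0 * x  # gradient of x^2

x, change = 1.0, 0.0
step_size, momentum = 0.1, 0.3
for _ in range(50):
    # 1. project the position using the change from the last iteration
    projection = x + momentum * change
    # 2. calculate the gradient of the projection
    gradient = f_prime(projection)
    # 3. calculate the change using the gradient of the projection
    change = momentum * change - step_size * gradient
    # 4. update the variable
    x = x + change
print(x)  # x moves toward the minimum at 0.0
```

Note that the only difference from classical momentum is that the gradient is taken at the projected position rather than the current one.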

In the field of convex optimization more generally, Nesterov Momentum is known to improve the rate of convergence of the optimization algorithm (e.g. reduce the number of iterations required to find the solution).

Like momentum, NAG is a first-order optimization method with better convergence rate guarantee than gradient descent in certain situations.

— On The Importance Of Initialization And Momentum In Deep Learning, 2013.

Although the technique is effective in training neural networks, it may not have the same general effect of accelerating convergence.

Unfortunately, in the stochastic gradient case, Nesterov momentum does not improve the rate of convergence.

— Page 300, Deep Learning, 2016.

Now that we are familiar with the Nesterov Momentum algorithm, let’s explore how we might implement it and evaluate its performance.

Gradient Descent With Nesterov Momentum

In this section, we will explore how to implement the gradient descent optimization algorithm with Nesterov Momentum.

Two-Dimensional Test Problem

First, let’s define an objective function.

We will use a simple two-dimensional function that squares the input of each dimension and define the range of valid inputs from -1.0 to 1.0.

The objective() function below implements this function.
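A minimal implementation consistent with this description might be:

```python
# objective function: sum of the squared inputs
def objective(x, y):
    return x**2.0 + y**2.0
```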

We can create a three-dimensional plot of the dataset to get a feeling for the curvature of the response surface.

The complete example of plotting the objective function is listed below.
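A complete plotting script consistent with this description might look like the following sketch (it assumes NumPy and Matplotlib are available; the 0.1 sampling increment and jet color scheme are assumptions):

```python
# 3d surface plot of the test objective function
from numpy import arange, meshgrid
from matplotlib import pyplot

# objective function: sum of the squared inputs
def objective(x, y):
    return x**2.0 + y**2.0

# define the range for inputs
r_min, r_max = -1.0, 1.0
# sample the input range uniformly at 0.1 increments
xaxis = arange(r_min, r_max, 0.1)
yaxis = arange(r_min, r_max, 0.1)
# create a mesh from the axes
x, y = meshgrid(xaxis, yaxis)
# compute targets over the mesh
results = objective(x, y)
# create a surface plot with the jet color scheme
figure = pyplot.figure()
axis = figure.add_subplot(projection='3d')
axis.plot_surface(x, y, results, cmap='jet')
# show the plot
pyplot.show()
```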

Running the example creates a three-dimensional surface plot of the objective function.

We can see the familiar bowl shape with the global minima at f(0, 0) = 0.

Three-Dimensional Plot of the Test Objective Function

We can also create a two-dimensional plot of the function. This will be helpful later when we want to plot the progress of the search.

The example below creates a contour plot of the objective function.
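A contour-plot script consistent with this description might be (the 50 contour levels and jet color scheme are assumptions):

```python
# contour plot of the test objective function
from numpy import arange, meshgrid
from matplotlib import pyplot

# objective function: sum of the squared inputs
def objective(x, y):
    return x**2.0 + y**2.0

# define the range for inputs and sample it uniformly
r_min, r_max = -1.0, 1.0
xaxis = arange(r_min, r_max, 0.1)
yaxis = arange(r_min, r_max, 0.1)
x, y = meshgrid(xaxis, yaxis)
results = objective(x, y)
# filled contour plot with 50 levels and the jet color scheme
pyplot.contourf(x, y, results, levels=50, cmap='jet')
pyplot.show()
```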

Running the example creates a two-dimensional contour plot of the objective function.

We can see the bowl shape compressed to contours shown with a color gradient. We will use this plot to plot the specific points explored during the progress of the search.

Two-Dimensional Contour Plot of the Test Objective Function

Now that we have a test objective function, let’s look at how we might implement the Nesterov Momentum optimization algorithm.

Gradient Descent Optimization With Nesterov Momentum

We can apply the gradient descent with Nesterov Momentum to the test problem.

First, we need a function that calculates the derivative for this function.

The derivative of x^2 is x * 2 in each dimension, and the derivative() function below implements this.
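A sketch of such a derivative() function, returning the two partial derivatives as a NumPy array, might be:

```python
from numpy import asarray

# derivative of the objective function with respect to each input
def derivative(x, y):
    return asarray([x * 2.0, y * 2.0])
```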

Next, we can implement gradient descent optimization.

First, we can select a random point in the bounds of the problem as a starting point for the search.

This assumes we have an array that defines the bounds of the search with one row for each dimension and the first column defines the minimum and the second column defines the maximum of the dimension.
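Drawing such a random starting point might be sketched as follows (the variable names and seed are assumptions consistent with the text):

```python
from numpy import asarray
from numpy.random import rand, seed

seed(1)  # make the random draw reproducible
# bounds: one row per dimension, first column the minimum, second the maximum
bounds = asarray([[-1.0, 1.0], [-1.0, 1.0]])
# random starting point within the bounds
solution = bounds[:, 0] + rand(len(bounds)) * (bounds[:, 1] - bounds[:, 0])
```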

Next, we need to calculate the projected point from the current position and calculate its derivative.

We can then create the new solution, one variable at a time.

First, the change in the variable is calculated using the partial derivative and learning rate with the momentum from the last change in the variable. This change is stored for the next iteration of the algorithm. Then the change is used to calculate the new value for the variable.

This is repeated for each variable for the objective function, then repeated for each iteration of the algorithm.

This new solution can then be evaluated using the objective() function and the performance of the search can be reported.

And that’s it.

We can tie all of this together into a function named nesterov() that takes the names of the objective function and the derivative function, an array with the bounds of the domain and hyperparameter values for the total number of algorithm iterations, the learning rate, and the momentum, and returns the final solution and its evaluation.

This complete function is listed below.
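A complete function consistent with this description might look like the following sketch:

```python
from numpy import asarray
from numpy.random import rand

# gradient descent algorithm with nesterov momentum
def nesterov(objective, derivative, bounds, n_iter, step_size, momentum):
    # generate a random initial point within the bounds
    solution = bounds[:, 0] + rand(len(bounds)) * (bounds[:, 1] - bounds[:, 0])
    # list of the last change made to each variable
    change = [0.0 for _ in range(bounds.shape[0])]
    # run the gradient descent
    for it in range(n_iter):
        # calculate the projected solution from the last change
        projected = [solution[i] + momentum * change[i] for i in range(solution.shape[0])]
        # calculate the gradient of the projection
        gradient = derivative(projected[0], projected[1])
        # build the new solution one variable at a time
        new_solution = list()
        for i in range(solution.shape[0]):
            # calculate the change for this variable
            change[i] = (momentum * change[i]) - step_size * gradient[i]
            # calculate and store the new value for this variable
            new_solution.append(solution[i] + change[i])
        # evaluate the candidate point
        solution = asarray(new_solution)
        solution_eval = objective(solution[0], solution[1])
        # report progress
        print('>%d f(%s) = %.5f' % (it, solution, solution_eval))
    return [solution, solution_eval]
```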

Note, we have intentionally used lists and an imperative coding style instead of vectorized operations for readability. Feel free to adapt the implementation to a vectorized implementation with NumPy arrays for better performance.

We can then define our hyperparameters and call the nesterov() function to optimize our test objective function.

In this case, we will use 30 iterations of the algorithm with a learning rate of 0.1 and momentum of 0.3. These hyperparameter values were found after a little trial and error.

Tying all of this together, the complete example of gradient descent optimization with Nesterov Momentum is listed below.
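A complete program consistent with this description might be (the random seed and the exact reporting format are assumptions):

```python
# gradient descent optimization with nesterov momentum for a 2d test function
from numpy import asarray
from numpy.random import rand, seed

# objective function: sum of the squared inputs
def objective(x, y):
    return x**2.0 + y**2.0

# derivative of the objective function
def derivative(x, y):
    return asarray([x * 2.0, y * 2.0])

# gradient descent algorithm with nesterov momentum
def nesterov(objective, derivative, bounds, n_iter, step_size, momentum):
    # generate a random initial point within the bounds
    solution = bounds[:, 0] + rand(len(bounds)) * (bounds[:, 1] - bounds[:, 0])
    change = [0.0 for _ in range(bounds.shape[0])]
    for it in range(n_iter):
        # project the solution, then take the gradient of the projection
        projected = [solution[i] + momentum * change[i] for i in range(solution.shape[0])]
        gradient = derivative(projected[0], projected[1])
        new_solution = list()
        for i in range(solution.shape[0]):
            change[i] = (momentum * change[i]) - step_size * gradient[i]
            new_solution.append(solution[i] + change[i])
        solution = asarray(new_solution)
        solution_eval = objective(solution[0], solution[1])
        print('>%d f(%s) = %.5f' % (it, solution, solution_eval))
    return [solution, solution_eval]

# seed the pseudo random number generator
seed(1)
# define the range for inputs
bounds = asarray([[-1.0, 1.0], [-1.0, 1.0]])
# define the hyperparameters
n_iter = 30
step_size = 0.1
momentum = 0.3
# perform the search
best, score = nesterov(objective, derivative, bounds, n_iter, step_size, momentum)
print('Done!')
print('f(%s) = %f' % (best, score))
```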

Running the example applies the optimization algorithm with Nesterov Momentum to our test problem and reports performance of the search for each iteration of the algorithm.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

In this case, we can see that a near optimal solution was found after perhaps 15 iterations of the search, with input values near 0.0 and 0.0, evaluating to 0.0.

Visualization of Nesterov Momentum

We can plot the progress of the Nesterov Momentum search on a contour plot of the domain.

This can provide an intuition for the progress of the search over the iterations of the algorithm.

We must update the nesterov() function to maintain a list of all solutions found during the search, then return this list at the end of the search.

The updated version of the function with these changes is listed below.
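An updated sketch of the function with these changes might be:

```python
from numpy import asarray
from numpy.random import rand

# gradient descent with nesterov momentum, tracking all solutions found
def nesterov(objective, derivative, bounds, n_iter, step_size, momentum):
    # list of all solutions found during the search
    solutions = list()
    # generate a random initial point within the bounds
    solution = bounds[:, 0] + rand(len(bounds)) * (bounds[:, 1] - bounds[:, 0])
    change = [0.0 for _ in range(bounds.shape[0])]
    for it in range(n_iter):
        # project the solution, then take the gradient of the projection
        projected = [solution[i] + momentum * change[i] for i in range(solution.shape[0])]
        gradient = derivative(projected[0], projected[1])
        new_solution = list()
        for i in range(solution.shape[0]):
            change[i] = (momentum * change[i]) - step_size * gradient[i]
            new_solution.append(solution[i] + change[i])
        solution = asarray(new_solution)
        solution_eval = objective(solution[0], solution[1])
        # store the solution for later plotting
        solutions.append(solution)
        print('>%d f(%s) = %.5f' % (it, solution, solution_eval))
    return solutions
```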

We can then execute the search as before, and this time retrieve the list of solutions instead of the best final solution.

We can then create a contour plot of the objective function, as before.

Finally, we can plot each solution found during the search as a white dot connected by a line.

Tying this all together, the complete example of performing the Nesterov Momentum optimization on the test problem and plotting the results on a contour plot is listed below.
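A complete visualization script consistent with this description might be (the seed, sampling increment, and plot styling are assumptions):

```python
# visualize the nesterov momentum search on a contour plot
from numpy import arange, asarray, meshgrid
from numpy.random import rand, seed
from matplotlib import pyplot

# objective function: sum of the squared inputs
def objective(x, y):
    return x**2.0 + y**2.0

# derivative of the objective function
def derivative(x, y):
    return asarray([x * 2.0, y * 2.0])

# gradient descent with nesterov momentum, tracking all solutions found
def nesterov(objective, derivative, bounds, n_iter, step_size, momentum):
    solutions = list()
    solution = bounds[:, 0] + rand(len(bounds)) * (bounds[:, 1] - bounds[:, 0])
    change = [0.0 for _ in range(bounds.shape[0])]
    for it in range(n_iter):
        projected = [solution[i] + momentum * change[i] for i in range(solution.shape[0])]
        gradient = derivative(projected[0], projected[1])
        new_solution = list()
        for i in range(solution.shape[0]):
            change[i] = (momentum * change[i]) - step_size * gradient[i]
            new_solution.append(solution[i] + change[i])
        solution = asarray(new_solution)
        solutions.append(solution)
    return solutions

# seed the pseudo random number generator and run the search
seed(1)
bounds = asarray([[-1.0, 1.0], [-1.0, 1.0]])
solutions = nesterov(objective, derivative, bounds, 30, 0.1, 0.3)
# contour plot of the objective function
xaxis = arange(bounds[0, 0], bounds[0, 1], 0.1)
yaxis = arange(bounds[1, 0], bounds[1, 1], 0.1)
x, y = meshgrid(xaxis, yaxis)
results = objective(x, y)
pyplot.contourf(x, y, results, levels=50, cmap='jet')
# plot the path of solutions as white dots joined by a line
solutions = asarray(solutions)
pyplot.plot(solutions[:, 0], solutions[:, 1], '.-', color='w')
pyplot.show()
```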

Running the example performs the search as before, except in this case, the contour plot of the objective function is created.

In this case, we can see that a white dot is shown for each solution found during the search, starting above the optima and progressively getting closer to the optima at the center of the plot.

Contour Plot of the Test Objective Function With Nesterov Momentum Search Results Shown

Further Reading

This section provides more resources on the topic if you are looking to go deeper.

Papers

  • A Method For Solving The Convex Programming Problem With Convergence Rate O(1/k^2), 1983.
  • On The Importance Of Initialization And Momentum In Deep Learning, 2013.

Books

  • Algorithms for Optimization, 2019.
  • Deep Learning, 2016.

Summary

In this tutorial, you discovered how to develop the gradient descent optimization with Nesterov Momentum from scratch.

Specifically, you learned:

  • Gradient descent is an optimization algorithm that uses the gradient of the objective function to navigate the search space.
  • The convergence of the gradient descent optimization algorithm can be accelerated by extending the algorithm with Nesterov Momentum.
  • How to implement the Nesterov Momentum optimization algorithm from scratch and apply it to an objective function and evaluate the results.

Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.


9 Responses to Gradient Descent With Nesterov Momentum From Scratch

  1. Sayeed March 19, 2021 at 2:39 pm #

    Hi Jason, thanks for the great tutorial! I had a query: in the case of NNs, where we can’t define the derivative as a direct function and will get it through .grad() using a computational graph, can you show an example of using this algorithm? Also, how do we define the bounds in the NN case with millions of params?

    • Jason Brownlee March 20, 2021 at 5:17 am #

      Great suggestion, thanks. Perhaps I can expand the tutorial in the future.

  2. Rodrigo March 21, 2021 at 9:31 am #

    There are bugs: first, random needs to be imported from numpy; second, momentum is not declared.

    • Jason Brownlee March 22, 2021 at 5:26 am #

      Sorry to hear that you’re having trouble.

      Random is imported from numpy and momentum is defined in the complete example at the end.

      Perhaps you skipped some lines?

  3. Anthony The Koala April 5, 2021 at 3:12 am #

    Dear Dr Jason,
    Three points:
    (1) I have seen that Tensorflow and pytorch have implementations of the “Nesterov Momentum” algorithm, eg https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/SGD .
    In Tensorflow’s implementation, the objective function is not a parameter.

    Rather, the optimization algorithm is a parameter of the model, whereas in your example the function is a parameter of the optimization.

    Reference: http://tflearn.org/optimizers/

    (2) Reason for (1) is that one can compare a particular algorithm’s timing using the gradient descent algorithm with the “Nestorov Momentum” using the timing module.

    Source of inspiration: First answer in https://stackoverflow.com/questions/7370801/how-to-measure-elapsed-time-in-python

    (3) Another reason for (1) and (2) is that you can see whether a faster gradient descent such as “Nesterov Momentum”, though faster produces effective cross validation scores and learning curves.

    Thank you,
    Anthony of Sydney

    • Jason Brownlee April 5, 2021 at 6:17 am #

      I would recommend selecting an optimizer based on prior knowledge of the objective function, prior knowledge of how to use the optimizer effectively, or performance achieved in the final result. Wall clock time might not be an effective approach.

      • Anthony The Koala April 7, 2021 at 5:01 pm #

        Dear Dr Jason,
        Thank you for your reply.
        I have one more question on your ‘from scratch’ implementation of the “Nestorov Momentum” and the implementation of the “Nestorov Momentum” in Keras and Pytorch.
        Eg:

        tf.keras.optimizers.SGD(
        learning_rate=0.01, momentum=0.0, nesterov=True, name=”SGD”, **kwargs
        )

        QUESTION please: Will Keras’s implementation of Nesterov perform the same as your ‘from scratch’ implementation of the Nesterov. OR is Keras’s implementation of Nesterov the full implementation of your ‘from scratch’ version.

        Thank you,
        Anthony of Sydney

        • Jason Brownlee April 8, 2021 at 5:06 am #

          No, a library implementation may make use of tricks to make the implementation more efficient, and checks to ensure the execution is numerically stable.

          This is why I recommend only coding algorithms from scratch as a learning exercise.

          • Anthony The Koala April 8, 2021 at 5:38 am #

            Dear Dr Jason,
            Thank you,
            Anthony of Sydney
