Stochastic gradient descent is the dominant method used to train deep learning models.

There are three main variants of gradient descent, and it can be confusing which one to use.

In this post, you will discover the one type of gradient descent you should use in general and how to configure it.

After completing this post, you will know:

- What gradient descent is and how it works from a high level.
- What batch, stochastic, and mini-batch gradient descent are and the benefits and limitations of each method.
- That mini-batch gradient descent is the go-to method and how to configure it on your applications.

Let’s get started.

## Tutorial Overview

This tutorial is divided into 3 parts; they are:

- What is Gradient Descent?
- Contrasting the 3 Types of Gradient Descent
- How to Configure Mini-Batch Gradient Descent

## What is Gradient Descent?

Gradient descent is an optimization algorithm often used for finding the weights or coefficients of machine learning algorithms, such as artificial neural networks and logistic regression.

It works by having the model make predictions on training data and using the error on the predictions to update the model in such a way as to reduce the error.

The goal of the algorithm is to find model parameters (e.g. coefficients or weights) that minimize the error of the model on the training dataset. It does this by making changes to the model that move it along a gradient or slope of errors down toward a minimum error value. This gives the algorithm its name of “gradient descent.”

The pseudocode sketch below summarizes the gradient descent algorithm:

```python
model = initialization(...)
n_epochs = ...
train_data = ...
for i in n_epochs:
    train_data = shuffle(train_data)
    X, y = split(train_data)
    predictions = predict(X, model)
    error = calculate_error(y, predictions)
    model = update_model(model, error)
```
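As a concrete sketch of this loop, here is a runnable version for a one-feature linear model. The function names mirror the pseudocode, but the details (toy data, learning rate, folding the error calculation into the update step) are illustrative assumptions, not part of the original algorithm description:

```python
import random

def predict(X, model):
    # linear model: y_hat = w * x + b
    w, b = model
    return [w * x + b for x in X]

def update_model(model, X, y, predictions, lr=0.05):
    # gradient of the mean squared error with respect to w and b
    w, b = model
    n = len(X)
    grad_w = sum(2 * (p - t) * x for x, t, p in zip(X, y, predictions)) / n
    grad_b = sum(2 * (p - t) for t, p in zip(y, predictions)) / n
    return (w - lr * grad_w, b - lr * grad_b)

# toy data drawn from the noiseless line y = 2x + 1
train_data = [(x, 2.0 * x + 1.0) for x in [0.0, 1.0, 2.0, 3.0, 4.0]]
model = (0.0, 0.0)  # initial weights
n_epochs = 500

for i in range(n_epochs):
    random.shuffle(train_data)
    X, y = zip(*train_data)
    predictions = predict(X, model)
    model = update_model(model, X, y, predictions)

print(model)  # approaches (2.0, 1.0)
```

Because the whole dataset is used for every update, this particular loop is batch gradient descent; the variants below differ only in how much data feeds each update.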

For more information see the posts:

- Gradient Descent For Machine Learning
- How to Implement Linear Regression with Stochastic Gradient Descent from Scratch with Python

## Contrasting the 3 Types of Gradient Descent

Gradient descent can vary in terms of the number of training patterns used to calculate the error, which is in turn used to update the model.

The number of patterns used to calculate the error influences how stable the gradient is that is used to update the model. We will see that there is a tension in gradient descent configurations between computational efficiency and the fidelity of the error gradient.

The three main flavors of gradient descent are batch, stochastic, and mini-batch.

Let’s take a closer look at each.

### What is Stochastic Gradient Descent?

Stochastic gradient descent, often abbreviated SGD, is a variation of the gradient descent algorithm that calculates the error and updates the model for each example in the training dataset.

The update of the model for each training example means that stochastic gradient descent is often called an online machine learning algorithm.
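As an illustrative sketch (the one-weight model and toy data here are made up for demonstration), this is what a per-example update looks like, in contrast to the once-per-epoch update in the pseudocode above:

```python
import random

def sgd_epoch(train_data, w, lr=0.1):
    # one epoch of stochastic gradient descent for the model y_hat = w * x;
    # the weight is updated after every single training example
    random.shuffle(train_data)
    for x, y in train_data:
        error = w * x - y
        w -= lr * error * x  # gradient of 0.5 * (w * x - y)**2 for one example
    return w

random.seed(1)
train_data = [(x, 3.0 * x) for x in [1.0, 2.0, 3.0]]
w = 0.0
for _ in range(50):
    w = sgd_epoch(train_data, w)
print(w)  # approaches 3.0
```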

#### Upsides

- The frequent updates immediately give an insight into the performance of the model and the rate of improvement.
- This variant of gradient descent may be the simplest to understand and implement, especially for beginners.
- The increased model update frequency can result in faster learning on some problems.
- The noisy update process can allow the model to avoid local minima (e.g. premature convergence).

#### Downsides

- Updating the model so frequently is more computationally expensive than other configurations of gradient descent, taking significantly longer to train models on large datasets.
- The frequent updates can result in a noisy gradient signal, which may cause the model parameters and in turn the model error to jump around (have a higher variance over training epochs).
- The noisy learning process down the error gradient can also make it hard for the algorithm to settle on an error minimum for the model.

### What is Batch Gradient Descent?

Batch gradient descent is a variation of the gradient descent algorithm that calculates the error for each example in the training dataset, but only updates the model after all training examples have been evaluated.

One cycle through the entire training dataset is called a training epoch. Therefore, it is often said that batch gradient descent performs model updates at the end of each training epoch.

#### Upsides

- Fewer updates to the model means this variant of gradient descent is more computationally efficient than stochastic gradient descent.
- The decreased update frequency results in a more stable error gradient and may result in a more stable convergence on some problems.
- The separation of the calculation of prediction errors and the model update lends the algorithm to parallel processing based implementations.

#### Downsides

- The more stable error gradient may result in premature convergence of the model to a less optimal set of parameters.
- The updates at the end of the training epoch require the additional complexity of accumulating prediction errors across all training examples.
- Commonly, batch gradient descent is implemented in such a way that it requires the entire training dataset in memory and available to the algorithm.
- Model updates, and in turn training speed, may become very slow for large datasets.

### What is Mini-Batch Gradient Descent?

Mini-batch gradient descent is a variation of the gradient descent algorithm that splits the training dataset into small batches that are used to calculate model error and update model coefficients.

Implementations may choose to sum the gradient over the mini-batch or to take the average of the gradient, which further reduces the variance of the gradient.

Mini-batch gradient descent seeks to find a balance between the robustness of stochastic gradient descent and the efficiency of batch gradient descent. It is the most common implementation of gradient descent used in the field of deep learning.
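A minimal sketch of the idea, using a made-up one-weight model and averaging the gradient over each mini-batch (the batch size, learning rate, and toy data are illustrative assumptions):

```python
import random

def minibatch_epoch(train_data, w, batch_size=2, lr=0.1):
    # one epoch of mini-batch gradient descent for the model y_hat = w * x,
    # averaging the gradient of 0.5 * (w * x - y)**2 over each mini-batch
    random.shuffle(train_data)
    for start in range(0, len(train_data), batch_size):
        batch = train_data[start:start + batch_size]
        grad = sum((w * x - y) * x for x, y in batch) / len(batch)
        w -= lr * grad  # one update per mini-batch
    return w

random.seed(1)
train_data = [(x, 3.0 * x) for x in [1.0, 2.0, 3.0, 4.0]]
w = 0.0
for _ in range(100):
    w = minibatch_epoch(train_data, w)
print(w)  # approaches 3.0
```

With a batch size of 2 and 4 training examples, there are two updates per epoch: more frequent than batch gradient descent (one per epoch), less frequent than stochastic gradient descent (four per epoch).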

#### Upsides

- The model update frequency is higher than batch gradient descent which allows for a more robust convergence, avoiding local minima.
- The batched updates provide a computationally more efficient process than stochastic gradient descent.
- The batching allows both the efficiency of not having all training data in memory and the efficiency of batched algorithm implementations.

#### Downsides

- Mini-batch requires the configuration of an additional “mini-batch size” hyperparameter for the learning algorithm.
- Error information must be accumulated across mini-batches of training examples like batch gradient descent.

## How to Configure Mini-Batch Gradient Descent

Mini-batch gradient descent is the recommended variant of gradient descent for most applications, especially in deep learning.

Mini-batch sizes, commonly called “batch sizes” for brevity, are often tuned to an aspect of the computational architecture on which the implementation is being executed, such as a power of two that fits the memory requirements of the GPU or CPU hardware, like 32, 64, 128, 256, and so on.

Batch size is a slider on the learning process.

- Small values give a learning process that converges quickly at the cost of noise in the training process.
- Large values give a learning process that converges slowly with accurate estimates of the error gradient.

**Tip 1: A good default for batch size might be 32.**

… [batch size] is typically chosen between 1 and a few hundreds, e.g. [batch size] = 32 is a good default value, with values above 10 taking advantage of the speedup of matrix-matrix products over matrix-vector products.

— Practical recommendations for gradient-based training of deep architectures, 2012

**Tip 2: It is a good idea to review learning curves of model validation error against training time with different batch sizes when tuning the batch size.**

… it can be optimized separately of the other hyperparameters, by comparing training curves (training and validation error vs amount of training time), after the other hyper-parameters (except learning rate) have been selected.

— Practical recommendations for gradient-based training of deep architectures, 2012

**Tip 3: Tune batch size and learning rate after tuning all other hyperparameters.**

… [batch size] and [learning rate] may slightly interact with other hyper-parameters so both should be re-optimized at the end. Once [batch size] is selected, it can generally be fixed while the other hyper-parameters can be further optimized (except for a momentum hyper-parameter, if one is used).

— Practical recommendations for gradient-based training of deep architectures, 2012

## Further Reading

This section provides more resources on the topic if you are looking to go deeper.

### Related Posts

- Gradient Descent for Machine Learning
- How to Implement Linear Regression with Stochastic Gradient Descent from Scratch with Python

### Additional Reading

- Stochastic gradient descent on Wikipedia
- Online machine learning on Wikipedia
- An overview of gradient descent optimization algorithms
- Practical recommendations for gradient-based training of deep architectures, 2012
- Efficient Mini-batch Training for Stochastic Optimization, 2014
- In deep learning, why don’t we use the whole training set to compute the gradient? on Quora
- Optimization Methods for Large-Scale Machine Learning, 2016

## Summary

In this post, you discovered the gradient descent algorithm and the version that you should use in practice.

Specifically, you learned:

- What gradient descent is and how it works from a high level.
- What batch, stochastic, and mini-batch gradient descent are and the benefits and limitations of each method.
- That mini-batch gradient descent is the go-to method and how to configure it on your applications.

Do you have any questions?

Ask your questions in the comments below and I will do my best to answer.

In mini-batch part, “The model update frequency is lower than batch gradient descent which allows for a more robust convergence, avoiding local minima.”

I think this is lower than SGD, rather than BGD, am I wrong?

Typo, I meant “higher”. Fixed, thanks.

Wait, so won’t that make Adam a mini-batch gradient descent algorithm, instead of stochastic gradient descent? (At least, in Keras’ implementation)

Since in Keras, when using Adam, you can still set batch size, rather than have it update weights per each data point

The idea of batches in SGD and the Adam optimizations of SGD are orthogonal.

You can use batches with or without Adam.

More on Adam here:

http://machinelearningmastery.com/adam-optimization-algorithm-for-deep-learning/

Oh ok, and also isn’t SGD called so because gradient descent is a greedy algorithm that searches for a minimum along a slope, which can lead to it getting stuck in a local minimum? To prevent that, stochastic gradient descent uses random iteration and approximates the global minimum from all slopes, hence the “stochastic”?

Yes, right on, it adds noise to the process which allows the process to escape local optima in search of something better.

Suppose my training data size is 1000 and batch size I selected is 128.

So, I would like to know how the algorithm deals with the last training set, which is less than the batch size?

In this case, 7 weight updates will be done until the algorithm reaches 896 training samples.

Now what happens to the rest of the 104 training samples?

Will it ignore the last training set, or will it use 24 samples from the next epoch?

It uses a smaller batch size for the last batch. The samples are still used.
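For example, a quick way to see the resulting batch sizes (this helper is just for illustration, not part of any library):

```python
def batch_sizes(n_samples, batch_size):
    # sizes of the mini-batches seen in one epoch;
    # the final batch may be smaller than the rest
    return [min(batch_size, n_samples - start)
            for start in range(0, n_samples, batch_size)]

print(batch_sizes(1000, 128))  # [128, 128, 128, 128, 128, 128, 128, 104]
```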

Thanks for the clarification.

These quotes are from this article and the linked articles. They are subtly different, are they all true?

“Batch gradient descent is the most common form of gradient descent described in machine learning.”

“The most common optimization algorithm used in machine learning is stochastic gradient descent.”

“Mini-batch gradient descent is the recommended variant of gradient descent for most applications, especially in deep learning.”

Yes, batch/mini-batch are types of stochastic gradient descent.

Thanks for the post! It’s a very elegant summary.

However, I don’t really understand this point for the benefits of stochastic gradient descent:

– The noisy update process can allow the model to avoid local minima (e.g. premature convergence).

Can I ask why is this the case?

Wonderful question.

Because the weights will bounce around the solution space more and may bounce out of local minima given the larger variance in the updates to the weights.

Does that help?

Great summary! Concerning mini batch – you said “Implementations may choose to sum the gradient…”

Suppose there are 1000 training samples, and a mini batch size of 42. So 23 mini batches of size 42, and 1 mini batch of size of 34.

if the weights are updated based only on the sum of the gradient, would that last mini batch with a different size cause problems since the number of summations isn’t the same as the other mini batches?

Good question, in general it is better to have mini batches that have the same number of samples. In practice the difference does not seem to matter much.
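For example, here is a small (hypothetical) illustration of why averaging, rather than summing, keeps the update magnitude consistent for the smaller final batch:

```python
def gradient_sum_and_mean(batch, w):
    # per-example gradient of (w * x - y)**2 for the model y_hat = w * x
    grads = [2 * (w * x - y) * x for x, y in batch]
    return sum(grads), sum(grads) / len(grads)

w = 0.0
full_batch = [(1.0, 2.0)] * 42  # a regular mini-batch of 42 identical examples
last_batch = [(1.0, 2.0)] * 34  # the smaller final mini-batch

# the summed gradient scales with batch size; the averaged gradient does not
print(gradient_sum_and_mean(full_batch, w))  # (-168.0, -4.0)
print(gradient_sum_and_mean(last_batch, w))  # (-136.0, -4.0)
```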

Shouldn’t

`predict(X, train_data)`

in your pseudocode be `predict(X, model)`?

Yes, fixed. Thanks.

Hi Jason,great post.

Could you please explain the meaning of “sum the gradient over the mini-batch or take the average of the gradient”. What we actually summing over the mini-batch?

When you say “take the average of the gradient” I presume you mean taking the average of the parameters calculated for all mini-batches.

Also, is this post is an excerpt from your book?

Thanks

The estimate of the error gradient.

You can learn more about how the error gradient is calculated with a code example here:

https://machinelearningmastery.com/implement-backpropagation-algorithm-scratch-python/

Also, why in mini-batch gradient descent we simply use the output from one mini-batch processing as the input into the next mini-batch

Sorry Igor, I don’t follow. Perhaps you can rephrase your question?

Hi, Great post!

Could you please further explain the parameter updating in mini-batch?

Here is my understanding: we use one mini-batch to get the gradient and then use this gradient to update weights. For next mini-batch, we repeat above procedure and update the weights based on previous one. I am not sure my understanding is right.

Thanks.

Sounds correct.