Last Updated on July 27, 2022
We usually use TensorFlow to build a neural network. However, TensorFlow is not limited to this. Behind the scenes, TensorFlow is a tensor library with automatic differentiation capability. Hence you can easily use it to solve a numerical optimization problem with gradient descent. In this post, you will learn how TensorFlow’s automatic differentiation engine, autograd, works.
After finishing this tutorial, you will learn:
- What is autograd in TensorFlow
- How to make use of autograd and an optimizer to solve an optimization problem
Let’s get started.

Using autograd in TensorFlow to solve a regression problem
Photo by Lukas Tennie. Some rights reserved.
Overview
This tutorial is in three parts; they are:
- Autograd in TensorFlow
- Using Autograd for Polynomial Regression
- Using Autograd to Solve a Math Puzzle
Autograd in TensorFlow
In TensorFlow 2.x, you can define variables and constants as TensorFlow objects and build an expression with them. The expression is essentially a function of the variables. Hence you may derive its derivative function, i.e., the differentiation or the gradient. This is one of the fundamental features of TensorFlow, and deep learning models make use of it in the training loop.
It is easier to explain autograd with an example. In TensorFlow 2.x, you can create a constant matrix as follows:
```python
import tensorflow as tf

x = tf.constant([1, 2, 3])
print(x)
print(x.shape)
print(x.dtype)
```
The above prints:
```
tf.Tensor([1 2 3], shape=(3,), dtype=int32)
(3,)
<dtype: 'int32'>
```
This creates an integer vector (in the form of a Tensor object). This vector can work like a NumPy vector in most cases. For example, you can do x+x or 2*x, and the result is just what you would expect. TensorFlow also comes with many functions for array manipulation that match NumPy, such as tf.transpose or tf.concat.
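As a quick illustration, here is a minimal sketch of those operations on the constant created above (expected output in the comments):

```python
import tensorflow as tf

x = tf.constant([1, 2, 3])
print(x + x)                      # tf.Tensor([2 4 6], shape=(3,), dtype=int32)
print(2 * x)                      # tf.Tensor([2 4 6], shape=(3,), dtype=int32)
print(tf.concat([x, x], axis=0))  # tf.Tensor([1 2 3 1 2 3], shape=(6,), dtype=int32)
```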
Creating variables in TensorFlow is just the same, for example:
```python
import tensorflow as tf

x = tf.Variable([1, 2, 3])
print(x)
print(x.shape)
print(x.dtype)
```
This will print:
```
<tf.Variable 'Variable:0' shape=(3,) dtype=int32, numpy=array([1, 2, 3], dtype=int32)>
(3,)
<dtype: 'int32'>
```
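Because this is a variable rather than a constant, its value can be updated in place, for example with assign(). A minimal sketch, assuming it follows the snippet above:

```python
x.assign([4, 5, 6])        # replace the variable's value in place
x.assign_add([1, 1, 1])    # element-wise increment
print(x.numpy())           # [5 6 7]
```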
The operations (such as x+x and 2*x) that you can apply to Tensor objects can also be applied to variables. The only difference between variables and constants is that the former allows the value to change while the latter is immutable. This distinction is important when you run a gradient tape, as follows:
```python
import tensorflow as tf

x = tf.Variable(3.6)

with tf.GradientTape() as tape:
    y = x*x

dy = tape.gradient(y, x)
print(dy)
```
This prints:
```
tf.Tensor(7.2, shape=(), dtype=float32)
```
What this does is the following: it defines a variable x (with value 3.6) and then creates a gradient tape. Within the tape's context, it computes y=x*x, i.e., $y=x^2$. The gradient tape monitors how the variable is manipulated. Afterward, you ask the tape for the derivative $\dfrac{dy}{dx}$. Since $y=x^2$ implies $y'=2x$, the output gives you a value of $3.6\times 2=7.2$.
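By default, a gradient tape automatically tracks trainable tf.Variable objects. If you need the derivative with respect to a constant tensor instead, you can ask the tape to watch it explicitly. A minimal sketch:

```python
import tensorflow as tf

x = tf.constant(3.6)
with tf.GradientTape() as tape:
    tape.watch(x)                # constants are not tracked automatically
    y = x*x
print(tape.gradient(y, x))       # tf.Tensor(7.2, shape=(), dtype=float32)
```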
Using Autograd for Polynomial Regression
How is this feature in TensorFlow helpful? Let's consider a case where you have a polynomial in the form of $y=f(x)$, and you are given several $(x,y)$ samples. How can you recover the polynomial $f(x)$? One way is to assume random coefficients for the polynomial and evaluate it on the samples $(x,y)$. If the correct polynomial is found, the value of $f(x)$ should match $y$ for every sample. The closer they are, the closer your estimate is to the correct polynomial.
This is indeed a numerical optimization problem in which you want to minimize the difference between $y$ and $f(x)$. You can use gradient descent to solve it.
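In symbols, if $w$ denotes the unknown coefficients of the polynomial, gradient descent minimizes the squared error
$$L(w) = \sum_{i=1}^{N} \big(y_i - f(x_i; w)\big)^2$$
by repeatedly updating $w \leftarrow w - \eta \nabla_w L(w)$ for some learning rate $\eta$.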
Let’s consider an example. You can build a polynomial $f(x)=x^2 + 2x + 3$ in NumPy as follows:
```python
import numpy as np

polynomial = np.poly1d([1, 2, 3])
print(polynomial)
```
This prints:
```
   2
1 x + 2 x + 3
```
You may use the polynomial as a function, such as:
```python
print(polynomial(1.5))
```
And this prints 8.25, since $(1.5)^2+2\times(1.5)+3 = 8.25$.
Now you can generate a number of samples from this function using NumPy:
```python
N = 20   # number of samples

# Generate random samples roughly between -10 to +10
X = np.random.randn(N,1) * 5
Y = polynomial(X)
```
In the above, both X and Y are NumPy arrays of shape (20,1), and they are related as $y=f(x)$ for the polynomial $f(x)$.
Now, assume you do not know what the polynomial is, except that it is quadratic. And you want to recover the coefficients. Since a quadratic polynomial is in the form of $Ax^2+Bx+C$, you have three unknowns to find. You can find them using a gradient descent algorithm you implement yourself or with an existing gradient descent optimizer. The following demonstrates how it works:
```python
import numpy as np
import tensorflow as tf

# Assume samples X and Y are prepared elsewhere
XX = np.hstack([X*X, X, np.ones_like(X)])

w = tf.Variable(tf.random.normal((3,1)))  # the 3 coefficients
x = tf.constant(XX, dtype=tf.float32)     # input sample
y = tf.constant(Y, dtype=tf.float32)      # output sample
optimizer = tf.keras.optimizers.Nadam(learning_rate=0.01)
print(w)

for _ in range(1000):
    with tf.GradientTape() as tape:
        y_pred = x @ w                               # matrix product gives Ax^2 + Bx + C per sample
        mse = tf.reduce_sum(tf.square(y - y_pred))   # sum of squared errors
    grad = tape.gradient(mse, w)
    optimizer.apply_gradients([(grad, w)])

print(w)
```
The print statement before the for loop gives three random numbers, such as:

```
<tf.Variable 'Variable:0' shape=(3, 1) dtype=float32, numpy=
array([[-2.1450958 ],
       [-1.1278448 ],
       [ 0.31241694]], dtype=float32)>
```
But the one after the for loop gives you coefficients very close to those of the polynomial:

```
<tf.Variable 'Variable:0' shape=(3, 1) dtype=float32, numpy=
array([[1.0000628],
       [2.0002015],
       [2.996219 ]], dtype=float32)>
```
What the above code does is the following: First, it creates a variable vector w of 3 values, namely the coefficients $A,B,C$. Then you create an array of shape $(N,3)$, where $N$ is the number of samples in the array X. This array has 3 columns: the values of $x^2$, $x$, and 1, respectively. Such an array is built from the vector X using the np.hstack() function. Similarly, you build the TensorFlow constant y from the NumPy array Y.
Afterward, you use a for loop to run gradient descent for 1,000 iterations. In each iteration, you compute $x \times w$ in matrix form to find $Ax^2+Bx+C$ and assign it to the variable y_pred. Then you compare y and y_pred and compute the squared error (the code sums the squared differences, which differs from the mean square error only by a constant factor, so it has the same minimizer). Next, you derive the gradient, i.e., the rate of change of the squared error with respect to the coefficients w. Based on this gradient, the optimizer updates w by gradient descent.
In essence, the above code finds the coefficients w that minimize the squared error.
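For reference, with the $(N,3)$ design matrix $X$ and the coefficient vector $w$, the gradient that the tape computes for this sum-of-squares loss has the familiar closed form
$$\nabla_w \lVert y - Xw \rVert^2 = -2X^\top(y - Xw),$$
which is what tape.gradient(mse, w) evaluates via automatic differentiation.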
Putting everything together, the following is the complete code:
```python
import numpy as np
import tensorflow as tf

N = 20   # number of samples

# Generate random samples roughly between -10 to +10
polynomial = np.poly1d([1, 2, 3])
X = np.random.randn(N,1) * 5
Y = polynomial(X)

# Prepare input as an array of shape (N,3)
XX = np.hstack([X*X, X, np.ones_like(X)])

# Prepare TensorFlow objects
w = tf.Variable(tf.random.normal((3,1)))  # the 3 coefficients
x = tf.constant(XX, dtype=tf.float32)     # input sample
y = tf.constant(Y, dtype=tf.float32)      # output sample
optimizer = tf.keras.optimizers.Nadam(learning_rate=0.01)
print(w)

# Run optimizer
for _ in range(1000):
    with tf.GradientTape() as tape:
        y_pred = x @ w
        mse = tf.reduce_sum(tf.square(y - y_pred))
    grad = tape.gradient(mse, w)
    optimizer.apply_gradients([(grad, w)])

print(w)
```
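After the loop finishes, you can pull the fitted coefficients out of the variable and, if you like, rebuild the polynomial for comparison. A minimal sketch, assuming it is appended to the script above:

```python
coeffs = w.numpy().flatten()   # roughly [1, 2, 3]
recovered = np.poly1d(coeffs)
print(recovered(1.5))          # should be close to polynomial(1.5) == 8.25
```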
Using Autograd to Solve a Math Puzzle
In the above, 20 samples were used, which is more than enough to fit a quadratic equation. You may use gradient descent to solve some math puzzles as well. For example, the following problem:
```
[ A ]  +  [ B ]  =  9
  +          -
[ C ]  -  [ D ]  =  1
  =          =
  8          2
```
In other words, to find the values of $A,B,C,D$ such that:
$$\begin{aligned}
A + B &= 9 \\
C - D &= 1 \\
A + C &= 8 \\
B - D &= 2
\end{aligned}$$
This can also be solved using autograd, as follows:
```python
import tensorflow as tf
import random

A = tf.Variable(random.random())
B = tf.Variable(random.random())
C = tf.Variable(random.random())
D = tf.Variable(random.random())

# Gradient descent loop
EPOCHS = 1000
optimizer = tf.keras.optimizers.Nadam(learning_rate=0.1)
for _ in range(EPOCHS):
    with tf.GradientTape() as tape:
        y1 = A + B - 9
        y2 = C - D - 1
        y3 = A + C - 8
        y4 = B - D - 2
        sqerr = y1*y1 + y2*y2 + y3*y3 + y4*y4
    gradA, gradB, gradC, gradD = tape.gradient(sqerr, [A, B, C, D])
    optimizer.apply_gradients([(gradA, A), (gradB, B), (gradC, C), (gradD, D)])

print(A)
print(B)
print(C)
print(D)
```
There can be multiple solutions to this problem. In fact, the four equations are not independent (the fourth equals the first minus the third plus the second), so the system has one degree of freedom, and gradient descent converges to a solution that depends on the random starting point. One solution is the following:
```
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=4.6777573>
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=4.3222437>
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=3.3222427>
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=2.3222432>
```
This means $A=4.68$, $B=4.32$, $C=3.32$, and $D=2.32$. You can verify that this solution fits the problem.
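For instance, a quick check of the recovered values (assuming it runs right after the script above):

```python
a, b, c, d = A.numpy(), B.numpy(), C.numpy(), D.numpy()
print(a + b)   # approximately 9
print(c - d)   # approximately 1
print(a + c)   # approximately 8
print(b - d)   # approximately 2
```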
The solution code above defines the four unknowns as variables with random initial values. Then you compute the left-hand side of each of the four equations and compare it to the expected right-hand side. You then sum up the squared errors and ask TensorFlow to minimize the sum. The minimum possible squared error is zero, attained when the solution fits the problem exactly.
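Written out, the quantity sqerr that the loop minimizes is
$$E = (A+B-9)^2 + (C-D-1)^2 + (A+C-8)^2 + (B-D-2)^2,$$
which reaches zero exactly when all four equations hold.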
Note the way the gradient tape is asked to produce the gradients: you ask for the gradient of sqerr with respect to A, B, C, and D in a single call, so four gradients are returned at once, and you then apply each gradient to its respective variable in each iteration. Getting all four gradients in one call, rather than in four separate calls to tape.gradient(), is necessary because, by default, a gradient tape can only be used to compute gradients once.
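If you do need to call tape.gradient() more than once, you can create the tape with persistent=True. A minimal sketch:

```python
import tensorflow as tf

x = tf.Variable(3.6)
with tf.GradientTape(persistent=True) as tape:
    y = x*x
    z = y*y
dy = tape.gradient(y, x)   # 2x = 7.2
dz = tape.gradient(z, x)   # 4x^3 = 186.624
del tape                   # release the resources held by the persistent tape
```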
Further Reading
This section provides more resources on the topic if you are looking to go deeper.
Summary
In this post, we demonstrated how TensorFlow’s automatic differentiation works. This is the building block for carrying out deep learning training. Specifically, you learned:
- What is automatic differentiation in TensorFlow
- How you can use gradient tape to carry out automatic differentiation
- How you can use automatic differentiation to solve an optimization problem