Using Autograd in TensorFlow to Solve a Regression Problem

We usually use TensorFlow to build a neural network. However, TensorFlow is not limited to this. Behind the scenes, TensorFlow is a tensor library with automatic differentiation capability. Hence you can easily use it to solve a numerical optimization problem with gradient descent. In this post, you will learn how TensorFlow’s automatic differentiation engine, autograd, works.

After finishing this tutorial, you will learn:

  • What is autograd in TensorFlow
  • How to make use of autograd and an optimizer to solve an optimization problem

Let’s get started.

Using autograd in TensorFlow to solve a regression problem. Photo by Lukas Tennie. Some rights reserved.

Overview

This tutorial is in three parts; they are:

  • Autograd in TensorFlow
  • Using Autograd for Polynomial Regression
  • Using Autograd to Solve a Math Puzzle

Autograd in TensorFlow

In TensorFlow 2.x, you can define variables and constants as TensorFlow objects and build an expression with them. The expression is essentially a function of the variables. Hence you may derive its derivative function, i.e., the differentiation or the gradient. This feature is one of the many fundamental features in TensorFlow. The deep learning model will make use of this in the training loop.

It is easier to explain autograd with an example. In TensorFlow 2.x, you can create a constant tensor as follows:
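A minimal sketch with tf.constant() would look like this:

```python
import tensorflow as tf

x = tf.constant([1, 2, 3])
print(x)
```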

The above prints:
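```
tf.Tensor([1 2 3], shape=(3,), dtype=int32)
```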

This creates an integer vector (in the form of a Tensor object). This vector can work like a NumPy vector in most cases. For example, you can do x+x or 2*x, and the result is just what you would expect. TensorFlow comes with many functions for array manipulation that match NumPy, such as tf.transpose or tf.concat.

Creating variables in TensorFlow is just the same, for example:
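A minimal sketch, this time with tf.Variable():

```python
import tensorflow as tf

x = tf.Variable([1, 2, 3])
print(x)
```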

This will print:
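```
<tf.Variable 'Variable:0' shape=(3,) dtype=int32, numpy=array([1, 2, 3], dtype=int32)>
```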

The operations (such as x+x and 2*x) that you can apply to Tensor objects can also be applied to variables. The only difference between variables and constants is that the former allows the value to change while the latter is immutable. This distinction is important when you run a gradient tape, because by default the tape automatically watches variables but not constants. You can see it in action as follows:
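A sketch of such a gradient tape, with the variable initialized to 3.6 to match the discussion below:

```python
import tensorflow as tf

x = tf.Variable(3.6)             # a trainable variable; the tape watches it automatically

with tf.GradientTape() as tape:
    y = x * x                    # y = x^2, recorded on the tape

dy_dx = tape.gradient(y, x)      # dy/dx evaluated at x = 3.6
print(dy_dx)
```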

This prints:
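```
tf.Tensor(7.2, shape=(), dtype=float32)
```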

What the code does is the following: it defines a variable x (with value 3.6) and then creates a gradient tape. While the gradient tape is active, it computes y=x*x, or $y=x^2$. The gradient tape monitors how the variable is manipulated. Afterward, you ask the gradient tape to find the derivative $\dfrac{dy}{dx}$. Since $y=x^2$ gives $y'=2x$, the output is $3.6\times 2=7.2$.

Using Autograd for Polynomial Regression

How is this feature in TensorFlow helpful? Let’s consider a case where you have a polynomial in the form of $y=f(x)$, and you are given several $(x,y)$ samples. How can you recover the polynomial $f(x)$? One way is to assume random coefficients for the polynomial and feed in the samples $(x,y)$. If the polynomial is found, you should see the value of $y$ match $f(x)$. The closer they are, the closer your estimate is to the correct polynomial.

This is indeed a numerical optimization problem such that you want to minimize the difference between $y$ and $f(x)$. You can use gradient descent to solve it.

Let’s consider an example. You can build a polynomial $f(x)=x^2 + 2x + 3$ in NumPy as follows:
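One minimal way is NumPy’s np.poly1d(), sketched here:

```python
import numpy as np

# Coefficients of x^2, x, and the constant term, in that order
polynomial = np.poly1d([1, 2, 3])
print(polynomial)
```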

This prints:
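```
   2
1 x + 2 x + 3
```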

You may use the polynomial as a function, such as:
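For instance, evaluating it at $x=1.5$:

```python
print(polynomial(1.5))
```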

And this prints 8.25, for $(1.5)^2+2\times(1.5)+3 = 8.25$.

Now you can generate a number of samples from this function using NumPy:
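A sketch of the sampling step; drawing the samples uniformly from $[-10, 10]$ is an illustrative choice:

```python
N = 20                                          # number of samples
X = np.random.uniform(-10, 10, size=(N, 1))     # random x values, shape (20, 1)
Y = polynomial(X)                               # y = f(x) for each sample, shape (20, 1)
```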

In the above, both X and Y are NumPy arrays of the shape (20,1), and they are related as $y=f(x)$ for the polynomial $f(x)$.

Now, assume you do not know what the polynomial is, except it is quadratic. And you want to recover the coefficients. Since a quadratic polynomial is in the form of $Ax^2+Bx+C$, you have three unknowns to find. You can find them using the gradient descent algorithm you implement or an existing gradient descent optimizer. The following demonstrates how it works:
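A sketch of this training loop is shown below; the choice of the Adam optimizer, the learning rate of 0.01, and the 1,000 iterations are illustrative and may need tuning:

```python
import numpy as np
import tensorflow as tf

XX = np.hstack([X*X, X, np.ones_like(X)])       # columns are x^2, x, and 1

w = tf.Variable(tf.random.normal((3, 1)))       # the coefficients A, B, C to learn
x = tf.constant(XX, dtype=tf.float32)           # input samples, shape (N, 3)
y = tf.constant(Y, dtype=tf.float32)            # output samples, shape (N, 1)
optimizer = tf.keras.optimizers.Adam(learning_rate=0.01)
print(w)

for _ in range(1000):
    with tf.GradientTape() as tape:
        y_pred = x @ w                           # Ax^2 + Bx + C for every sample
        mse = tf.reduce_mean(tf.square(y - y_pred))
    grad = tape.gradient(mse, w)                 # gradient of the loss w.r.t. w
    optimizer.apply_gradients([(grad, w)])       # one gradient descent step

print(w)
```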

The print statement before the for loop shows three random initial values for the coefficients, while the print statement after the for loop shows coefficients very close to those of the polynomial, i.e., approximately 1, 2, and 3.

What the above code does is the following: First, it creates a variable vector w of 3 values, namely the coefficients $A,B,C$. Then you create an array of shape $(N,3)$, in which $N$ is the number of samples in the array X. This array has 3 columns: the values of $x^2$, $x$, and 1, respectively. Such an array is built from the vector X using the np.hstack() function. Similarly, you build the TensorFlow constant y from the NumPy array Y.

Afterward, you use a for loop to run gradient descent for 1,000 iterations. In each iteration, you compute $x \times w$ in matrix form to find $Ax^2+Bx+C$ and assign it to the variable y_pred. Then, you compare y and y_pred and compute the mean square error. Next, you derive the gradient, i.e., the rate of change of the mean square error with respect to the coefficients w. Based on this gradient, you use gradient descent to update w.

In essence, the above code finds the coefficients w that minimize the mean square error.

Putting everything together, the following is the complete code:
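A complete, self-contained sketch under the same assumptions (optimizer, learning rate, and sampling range are illustrative):

```python
import numpy as np
import tensorflow as tf

# The polynomial to recover: f(x) = x^2 + 2x + 3
polynomial = np.poly1d([1, 2, 3])
print(polynomial)

# Generate 20 samples (x, y) related by y = f(x)
N = 20
X = np.random.uniform(-10, 10, size=(N, 1))
Y = polynomial(X)

# Build the (N, 3) input matrix with columns x^2, x, 1
XX = np.hstack([X*X, X, np.ones_like(X)])

w = tf.Variable(tf.random.normal((3, 1)))       # coefficients A, B, C
x = tf.constant(XX, dtype=tf.float32)
y = tf.constant(Y, dtype=tf.float32)
optimizer = tf.keras.optimizers.Adam(learning_rate=0.01)
print(w)

for _ in range(1000):
    with tf.GradientTape() as tape:
        y_pred = x @ w
        mse = tf.reduce_mean(tf.square(y - y_pred))
    grad = tape.gradient(mse, w)
    optimizer.apply_gradients([(grad, w)])

print(w)
```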

Using Autograd to Solve a Math Puzzle

In the above, 20 samples were used, which is more than enough to fit a quadratic equation. You may use gradient descent to solve some math puzzles as well. For example, consider the problem of finding the values of $A,B,C,D$ such that:

$$\begin{aligned}
A + B &= 9 \\
C - D &= 1 \\
A + C &= 8 \\
B - D &= 2
\end{aligned}$$

This can also be solved using autograd, as follows:
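A sketch of one way to set this up; the optimizer, learning rate, and iteration count are again illustrative choices:

```python
import tensorflow as tf

# Four unknowns, each starting from a random value
A = tf.Variable(tf.random.uniform(()))
B = tf.Variable(tf.random.uniform(()))
C = tf.Variable(tf.random.uniform(()))
D = tf.Variable(tf.random.uniform(()))

optimizer = tf.keras.optimizers.Adam(learning_rate=0.1)

for _ in range(1000):
    with tf.GradientTape() as tape:
        # Residual of each equation; all should be zero at a solution
        e1 = A + B - 9
        e2 = C - D - 1
        e3 = A + C - 8
        e4 = B - D - 2
        sqerr = e1*e1 + e2*e2 + e3*e3 + e4*e4
    # One call to tape.gradient() returns all four gradients
    gradA, gradB, gradC, gradD = tape.gradient(sqerr, [A, B, C, D])
    optimizer.apply_gradients([(gradA, A), (gradB, B), (gradC, C), (gradD, D)])

print(A)
print(B)
print(C)
print(D)
```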

There can be multiple solutions to this problem. One solution the above code may produce is $A=4.68$, $B=4.32$, $C=3.32$, and $D=2.32$. You can verify that this solution fits the problem: $4.68+4.32=9$, $3.32-2.32=1$, $4.68+3.32=8$, and $4.32-2.32=2$.

The above code defines the four unknowns as variables with random initial values. Then you compute the result of the four equations and compare it to the expected answers. You then sum up the squared errors and ask TensorFlow to minimize the sum. The minimum possible squared error is zero, attained when the solution exactly fits the problem.

Note the way the gradient tape is asked to produce the gradients: you ask for the gradient of sqerr with respect to A, B, C, and D in a single call, so four gradients are returned at once, and you then apply each gradient to its respective variable in each iteration. Asking for all four gradients in one call, rather than in four separate calls to tape.gradient(), is required because, by default, a gradient tape releases its resources as soon as tape.gradient() is called once.

Summary

In this post, we demonstrated how TensorFlow’s automatic differentiation works. This is the building block for carrying out deep learning training. Specifically, you learned:

  • What is automatic differentiation in TensorFlow
  • How you can use gradient tape to carry out automatic differentiation
  • How you can use automatic differentiation to solve an optimization problem
