Training a Linear Regression Model in PyTorch

Linear regression is a simple yet powerful technique for predicting the value of one variable from the values of others. It is often used to model relationships between two or more continuous variables, such as the relationship between income and age, or between weight and height. Likewise, linear regression can be used to predict continuous outcomes such as price or quantity demanded from other variables known to influence them.

In order to train a linear regression model, we need to define a cost function and an optimizer. The cost function measures how well our model fits the data, while the optimizer decides how to adjust the model's parameters to improve that fit.

In the previous tutorial, you learned how to make simple predictions with only a linear regression forward pass. Here, you'll train a linear regression model and update its learnable parameters using PyTorch. In particular, you'll learn:

  • How you can build a simple linear regression model from scratch in PyTorch.
  • How you can apply a simple linear regression model on a dataset.
  • How a simple linear regression model can be trained on a single learnable parameter.
  • How a simple linear regression model can be trained on two learnable parameters.

Kick-start your project with my book Deep Learning with PyTorch. It provides self-study tutorials with working code.


So, let’s get started.

Training a Linear Regression Model in PyTorch.
Picture by Ryan Tasto. Some rights reserved.

Overview

This tutorial is in four parts; they are:

  • Preparing Data
  • Building the Model and Loss Function
  • Training the Model for a Single Parameter
  • Training the Model for Two Parameters

Preparing Data

Let’s import a few libraries we’ll use in this tutorial and make some data for our experiments.

We will use synthetic data to train the linear regression model. We’ll initialize a variable X with values from $-5$ to $5$ and create a linear function that has a slope of $-5$. Note that this function will be estimated by our trained model later.
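A minimal sketch of this step might look like the following (the variable names X and func are illustrative):

```python
import torch

# input values from -5 to 5 in steps of 0.1, shaped as a column vector
X = torch.arange(-5, 5, 0.1).view(-1, 1)
# the underlying linear function with slope -5 that the model will later estimate
func = -5 * X
```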

Also, we’ll see what our data looks like in a line plot, using matplotlib.
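For example, a simple line plot of the function could be produced like this (plot labels and styling are illustrative):

```python
import matplotlib.pyplot as plt

# plot the noise-free linear function
plt.plot(X.numpy(), func.numpy(), label='func')
plt.xlabel('x')
plt.ylabel('y')
plt.legend()
plt.show()
```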

Plot of the linear function

To simulate real data based on the function we just created, let’s add some Gaussian noise to it in order to create noisy targets of the same size as $X$, keeping the standard deviation at 0.4. This is done using torch.randn(X.size()).
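A sketch of this step, with Y as an assumed name for the noisy targets:

```python
# add zero-mean Gaussian noise with standard deviation 0.4 to the linear function
Y = func + 0.4 * torch.randn(X.size())
```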

Now, let’s visualize these data points using the lines of code below.
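Something like the following overlays the noisy points on the underlying line (marker and color choices are illustrative):

```python
# scatter the noisy data points and draw the noise-free line on the same axes
plt.plot(X.numpy(), Y.numpy(), 'rx', label='Y')
plt.plot(X.numpy(), func.numpy(), 'b', label='func')
plt.xlabel('x')
plt.ylabel('y')
plt.legend()
plt.show()
```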

Data points and the linear function

Putting it all together, the following is the complete code.
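Here is one way the data-preparation steps above might be combined into a single script (a sketch; names follow the snippets above):

```python
import torch
import matplotlib.pyplot as plt

# create the inputs and the underlying linear function with slope -5
X = torch.arange(-5, 5, 0.1).view(-1, 1)
func = -5 * X

# add Gaussian noise with standard deviation 0.4 to create the targets
Y = func + 0.4 * torch.randn(X.size())

# plot the noisy data points together with the noise-free line
plt.plot(X.numpy(), Y.numpy(), 'rx', label='Y')
plt.plot(X.numpy(), func.numpy(), 'b', label='func')
plt.xlabel('x')
plt.ylabel('y')
plt.legend()
plt.show()
```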

Building the Model and Loss Function

We have created the data to feed into the model; next, we’ll build a forward function based on a simple linear regression equation. Note that we’ll build the model to train only a single parameter ($w$) here. Later, in the next section of the tutorial, we’ll add the bias and train the model with two parameters ($w$ and $b$). The function for the forward pass of the model is defined as follows:
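A minimal sketch of such a forward function (w is the scalar weight tensor initialized later, in the training section):

```python
# forward pass of simple linear regression with a single parameter w
def forward(x):
    return w * x
```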

During training, we need a criterion to measure the loss between the original and the predicted data points. The loss is computed after every iteration, and its gradients drive the gradient descent updates that minimize it. Since linear regression is typically applied to continuous data, mean squared error (MSE) is an effective measure of model loss, so MSE is the criterion function we use here.
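A hand-written MSE criterion might look like this (a sketch; torch.nn.MSELoss would work equally well):

```python
# mean squared error between predictions and targets
def criterion(y_pred, y):
    return torch.mean((y_pred - y) ** 2)
```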


Training the Model for a Single Parameter

With all these preparations, we are ready for model training. First, the parameter $w$ needs to be initialized to some arbitrary value, for example $-10$.
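For example:

```python
# initialize w to -10; requires_grad=True tells autograd to track its gradients
w = torch.tensor(-10.0, requires_grad=True)
```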

Next, we’ll define the learning rate (step size), an empty list to store the loss after each iteration, and the number of iterations we want our model to train for. The step size is set to 0.1, and we train the model for 20 iterations (epochs).
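A sketch of these settings (the names step_size, loss_list, and n_iter are illustrative):

```python
step_size = 0.1   # learning rate
loss_list = []    # stores the loss after each iteration
n_iter = 20       # number of training iterations (epochs)
```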

When the lines of code below are executed, the forward() function takes an input and generates a prediction. The criterion() function calculates the loss and stores it in the loss variable. Based on the model loss, the backward() method computes the gradients, and w.data stores the updated parameter value.
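A minimal training loop along these lines (a sketch; the printed format is illustrative):

```python
for i in range(n_iter):
    Y_pred = forward(X)              # generate predictions with the current w
    loss = criterion(Y_pred, Y)      # measure the loss against the noisy targets
    loss_list.append(loss.item())    # keep the loss value for plotting later
    loss.backward()                  # compute the gradient of the loss w.r.t. w
    w.data = w.data - step_size * w.grad.data   # gradient descent update
    w.grad.data.zero_()              # reset the gradient for the next iteration
    print('iteration {}: loss {:.4f}, w {:.4f}'.format(i, loss.item(), w.item()))
```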

The training output is printed at every iteration. As you can see, the model loss decreases after every iteration and the trainable parameter (in this case, $w$) is updated.

Let’s also plot the loss history to see how the loss decreases.
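For example:

```python
# plot the recorded loss values against the iteration number
plt.plot(loss_list)
plt.xlabel('iteration')
plt.ylabel('loss')
plt.show()
```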

Training loss vs epochs

Putting everything together, the following is the complete code:
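A consolidated sketch of the single-parameter training script, combining the snippets above:

```python
import torch
import matplotlib.pyplot as plt

# data: inputs, underlying line with slope -5, and noisy targets
X = torch.arange(-5, 5, 0.1).view(-1, 1)
func = -5 * X
Y = func + 0.4 * torch.randn(X.size())

# model and loss
def forward(x):
    return w * x

def criterion(y_pred, y):
    return torch.mean((y_pred - y) ** 2)

# training settings
w = torch.tensor(-10.0, requires_grad=True)
step_size = 0.1
loss_list = []
n_iter = 20

# gradient descent loop for the single parameter w
for i in range(n_iter):
    Y_pred = forward(X)
    loss = criterion(Y_pred, Y)
    loss_list.append(loss.item())
    loss.backward()
    w.data = w.data - step_size * w.grad.data
    w.grad.data.zero_()
    print('iteration {}: loss {:.4f}, w {:.4f}'.format(i, loss.item(), w.item()))

# visualize how the loss decreases over the iterations
plt.plot(loss_list)
plt.xlabel('iteration')
plt.ylabel('loss')
plt.show()
```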

Training the Model for Two Parameters

Let’s also add the bias $b$ to our model and train it with two parameters. First, we need to change the forward function as follows.
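For example:

```python
# forward pass with both a weight w and a bias b
def forward(x):
    return w * x + b
```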

As we have two parameters, $w$ and $b$, we need to initialize both to some starting values, such as those below.
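For instance (the starting values here are arbitrary illustrative choices):

```python
# both parameters require gradients so autograd can track and update them
w = torch.tensor(-10.0, requires_grad=True)
b = torch.tensor(-20.0, requires_grad=True)
```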

While all the other training code remains the same as before, we only need to make a few changes to accommodate the two learnable parameters.

Keeping the learning rate at 0.1, let’s train our model with two parameters for 20 iterations (epochs).
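The loop is the same as before, except that b now gets its own update and gradient reset (a sketch):

```python
step_size = 0.1
loss_list = []
n_iter = 20

for i in range(n_iter):
    Y_pred = forward(X)              # prediction now uses both w and b
    loss = criterion(Y_pred, Y)
    loss_list.append(loss.item())
    loss.backward()                  # gradients for both parameters
    w.data = w.data - step_size * w.grad.data
    b.data = b.data - step_size * b.grad.data
    w.grad.data.zero_()
    b.grad.data.zero_()
    print('iteration {}: loss {:.4f}, w {:.4f}, b {:.4f}'.format(
        i, loss.item(), w.item(), b.item()))
```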

As before, the printed output shows the loss decreasing while both parameters are updated at every iteration.

Similarly, we can plot the loss history.
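For example:

```python
# plot the recorded loss values for the two-parameter model
plt.plot(loss_list)
plt.xlabel('iteration')
plt.ylabel('loss')
plt.show()
```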

And here is what the plot of the loss looks like.

History of loss for training with two parameters

Putting everything together, this is the complete code.
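A consolidated sketch of the two-parameter version, again combining the snippets above:

```python
import torch
import matplotlib.pyplot as plt

# data: inputs, underlying line with slope -5, and noisy targets
X = torch.arange(-5, 5, 0.1).view(-1, 1)
func = -5 * X
Y = func + 0.4 * torch.randn(X.size())

# model with weight and bias, and the MSE loss
def forward(x):
    return w * x + b

def criterion(y_pred, y):
    return torch.mean((y_pred - y) ** 2)

# training settings
w = torch.tensor(-10.0, requires_grad=True)
b = torch.tensor(-20.0, requires_grad=True)
step_size = 0.1
loss_list = []
n_iter = 20

# gradient descent loop for the two parameters w and b
for i in range(n_iter):
    Y_pred = forward(X)
    loss = criterion(Y_pred, Y)
    loss_list.append(loss.item())
    loss.backward()
    w.data = w.data - step_size * w.grad.data
    b.data = b.data - step_size * b.grad.data
    w.grad.data.zero_()
    b.grad.data.zero_()
    print('iteration {}: loss {:.4f}, w {:.4f}, b {:.4f}'.format(
        i, loss.item(), w.item(), b.item()))

# visualize the loss history
plt.plot(loss_list)
plt.xlabel('iteration')
plt.ylabel('loss')
plt.show()
```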

Summary

In this tutorial, you learned how to build and train a simple linear regression model in PyTorch. In particular, you learned:

  • How you can build a simple linear regression model from scratch in PyTorch.
  • How you can apply a simple linear regression model on a dataset.
  • How a simple linear regression model can be trained on a single learnable parameter.
  • How a simple linear regression model can be trained on two learnable parameters.

