# Manipulating Tensors in PyTorch

PyTorch is a deep-learning library. Like other deep-learning libraries, it operates on numerical arrays called tensors. In the simplest terms, a tensor is just a multidimensional array. Because some operations on tensors come up very often, PyTorch provides functions dedicated to them.

In the following, we will give a brief overview of what PyTorch provides for working with tensors and how to use it. After finishing this tutorial, you will know:

• How to create and operate on PyTorch tensors
• How PyTorch’s tensor syntax mirrors NumPy’s
• The common functions you can use in PyTorch to manipulate a tensor

Kick-start your project with my book Deep Learning with PyTorch. It provides self-study tutorials with working code.

Let’s get started.

Manipulating tensors in PyTorch. Photo by Big Dodzy. Some rights reserved.

## Overview

This tutorial is in four parts; they are:

• Creating Tensors
• Checking a Tensor
• Manipulating Tensors
• Tensor Functions

## Creating Tensors

If you’re familiar with NumPy, you should recall that there are multiple ways of creating an array. The same is true for creating tensors in PyTorch. The simplest way to create a tensor with specific constant values, such as the matrix

$$\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}$$

is by using:

It prints:
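
The original listing is missing from the text; a sketch matching the matrix above, with its printed output shown as comments, might be:

```python
import torch

# create a 2x3 tensor of float values from a nested list
x = torch.tensor([[1, 2, 3], [4, 5, 6]], dtype=torch.float32)
print(x)
# tensor([[1., 2., 3.],
#         [4., 5., 6.]])
```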

The dtype argument specifies the data type of the values in the tensor. It is optional. You can also provide the values from a NumPy array and convert it to a PyTorch tensor.
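
The conversion listing is also missing; a sketch (the array name a is an assumption):

```python
import numpy as np
import torch

a = np.array([[1, 2, 3], [4, 5, 6]])
x = torch.from_numpy(a)  # shares memory with the NumPy array
y = torch.tensor(a)      # makes an independent copy
```

Note that from_numpy() shares memory with the source array, while torch.tensor() copies it.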

Usually, you would create a tensor for some specific purpose. For example, if you want to have ten values evenly distributed between -1 and 1, you can use the linspace() function:

It prints:
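
A sketch of the missing linspace() listing:

```python
import torch

# ten values evenly spaced between -1 and 1, inclusive
x = torch.linspace(-1, 1, 10)
print(x)
```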

However, if you want to have a tensor of random values (which is very useful in testing your functions), you can make one like the following:

It prints, for example:
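
The missing listing was presumably something like:

```python
import torch

# 3x4 tensor of values uniformly distributed in [0, 1)
x = torch.rand(3, 4)
print(x)
```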

The resulting tensor is of dimension $3\times 4$, and each value is uniformly distributed between 0 and 1. If you want the values normally distributed instead, just change the function to randn():
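
A sketch of the randn() variant:

```python
import torch

# 3x4 tensor of values drawn from a standard normal distribution
x = torch.randn(3, 4)
```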

If you want the random values to be integers, e.g., between 3 and 10, you can use the randint() function:

This will give, for example:
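
The missing randint() listing may have looked like:

```python
import torch

# 3x4 tensor of random integers in [3, 10)
x = torch.randint(3, 10, (3, 4))
print(x)
```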

The values are in the range $3 \le x < 10$. By default, the lower bound is zero, so if you want the values to be $0 \le x < 10$, you can use:
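
A sketch with the lower bound left at its default of zero:

```python
import torch

# 3x4 tensor of random integers in [0, 10)
x = torch.randint(10, (3, 4))
```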

The other commonly used tensors are the zero tensor and tensors with all values the same. To create a zero tensor (e.g., of dimension $2\times 3\times 4$), you can use:

It prints:
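
A sketch of the missing zeros() listing:

```python
import torch

# all-zero tensor of shape (2, 3, 4)
x = torch.zeros(2, 3, 4)
print(x)
```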

And to create a tensor in which all values are 5, you can use:

It prints:
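
The missing listing presumably used full():

```python
import torch

# tensor of shape (2, 3, 4) filled with the value 5
x = torch.full((2, 3, 4), 5)
print(x)
```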

But if you want all values to be one, there is a simpler function:
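
That simpler function is ones(); a sketch:

```python
import torch

# all-one tensor of shape (2, 3, 4)
x = torch.ones(2, 3, 4)
```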

Finally, if you want an identity matrix, you can get it with diag() or eye():

It prints:
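
A sketch of the missing identity-matrix listing:

```python
import torch

x = torch.eye(3)                 # 3x3 identity matrix
y = torch.diag(torch.ones(3))    # same result via diag()
print(x)
```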

### Want to Get Started With Deep Learning with PyTorch?

Take my free email crash course now (with sample code).

Click to sign-up and also get a free PDF Ebook version of the course.

## Checking a Tensor

Once you have a tensor and want to know more about it, you can simply print it to the screen with print(). But if the tensor is too big, it is easier to check its shape to show its dimensions:

It prints:
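
The listing is missing; a sketch with the printed shape as a comment:

```python
import torch

x = torch.zeros(2, 3, 4)
print(x.shape)   # torch.Size([2, 3, 4])
print(x.size())  # same result as the shape property
```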

The shape of a tensor can be accessed using the shape property or the size() function. If you want to see how many dimensions you have (i.e., $2\times 3\times 4$ is 3 and $3\times 4$ is 2), you can read the ndim property:
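
A sketch of the ndim check:

```python
import torch

x = torch.zeros(2, 3, 4)
print(x.ndim)  # number of dimensions: 3
```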

This will give you “3”. If you use len() to check a tensor, it will only give you the size of the first dimension, e.g.:

It prints:
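
A sketch of the len() check:

```python
import torch

x = torch.zeros(2, 3, 4)
print(len(x))  # size of the first dimension only: 2
```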

Another property you may want to learn about a tensor is its data type. Usually, you use floating points in deep learning, but sometimes a tensor should hold integers (e.g., pixel values of an image). To check the data type, you can read the dtype property:

It prints:
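
A sketch of the dtype check:

```python
import torch

x = torch.zeros(2, 3, 4)
print(x.dtype)  # torch.float32
```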

If you want to change the data type, you can recreate the tensor with a new type:

The above prints:
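
The conversion listing is missing; one way, using to() (the target type int32 is an assumption):

```python
import torch

x = torch.tensor([[1, 2, 3], [4, 5, 6]], dtype=torch.float32)
y = x.to(torch.int32)  # make a new tensor with a different data type
print(y.dtype)         # torch.int32
```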

## Manipulating Tensors

One common operation on tensors in deep learning is to change the tensor shape. For example, you may want to convert a 2D tensor into 1D or add a dummy dimension to a tensor. You may also want to extract a sub-tensor from a larger tensor.

For example, you can create a tensor like the following:

You get:
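
The listing is missing; a sketch (the original values are unknown, so a random 3×4 tensor is assumed here and in the examples below):

```python
import torch

x = torch.randn(3, 4)
print(x)
```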

It allows you to take a slice using the same syntax as in NumPy:

This will be:
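
The exact slice in the original is unknown; a representative sketch:

```python
import torch

x = torch.randn(3, 4)
print(x[1:3])  # rows 1 and 2, NumPy-style
```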

Or if you use:

It will be:
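
Another representative slice:

```python
import torch

x = torch.randn(3, 4)
print(x[:, 1:3])  # columns 1 and 2 of every row
```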

You can also make use of the same slicing syntax to add a new dimension. For example,

You will see:
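
A sketch of indexing with None:

```python
import torch

x = torch.randn(3, 4)
y = x[None, :, :]  # insert a new first dimension
print(y.shape)     # torch.Size([1, 3, 4])
```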

Here you use None to insert a new dimension at a specific place. This is useful if, for example, you need to convert an image into a batch of only one image. If you’re familiar with NumPy, you may recall there is a function expand_dims() for this purpose, but PyTorch doesn’t provide it. A similar function is unsqueeze(), which is demonstrated below:

This prints:
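
A sketch of the unsqueeze() demonstration:

```python
import torch

x = torch.randn(3, 4)
y = torch.unsqueeze(x, 0)  # same effect as x[None, :, :]
print(y.shape)             # torch.Size([1, 3, 4])
```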

One powerful feature of NumPy’s slicing syntax is Boolean indexing. It is also supported by PyTorch tensors. For example:

You may see:
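
The listing is missing; a sketch consistent with the selection described:

```python
import torch

x = torch.randn(3, 4)
# keep only the columns in which every element is greater than -1
y = x[:, (x > -1).all(dim=0)]
print(y)
```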

The above selects the columns where all elements are greater than -1. You can also manipulate the tensor by selecting specific columns:

This results in:
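
A sketch of selecting specific columns (the column indices are assumptions):

```python
import torch

x = torch.randn(3, 4)
y = x[:, [0, 2]]  # fancy indexing: pick columns 0 and 2
print(y)
```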

To convert a 2D tensor into 1D, you can use:

The result will be:
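
A sketch using ravel():

```python
import torch

x = torch.randn(3, 4)
y = x.ravel()    # flatten to 1D
print(y.shape)   # torch.Size([12])
```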

You may also use the reshape() function to achieve the same:
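
A sketch of the reshape() equivalent:

```python
import torch

x = torch.randn(3, 4)
y = x.reshape(-1)  # -1 lets PyTorch infer the length
```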

The result should be the same as that of ravel(). But usually, the reshape() function is for more complicated target shapes:

This will print:
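
The target shape in the original is unknown; one possible sketch:

```python
import torch

x = torch.randn(3, 4)
y = x.reshape(2, 2, 3)  # 12 elements rearranged into a 3D shape
print(y.shape)          # torch.Size([2, 2, 3])
```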

One common case of reshaping tensors is matrix transposition. For a 2D matrix, it is easily done the same way as in NumPy:

which prints:
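
A sketch of the NumPy-style transpose:

```python
import torch

x = torch.randn(3, 4)
print(x.T)  # transpose, shape (4, 3)
```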

But the transpose() function in PyTorch requires you to specify which axes to swap explicitly:
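
A sketch of the explicit transpose() call:

```python
import torch

x = torch.randn(3, 4)
y = torch.transpose(x, 0, 1)  # swap axes 0 and 1
```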

This result is the same as above. If you have multiple tensors, you can combine them by stacking (vstack() for vertical and hstack() for horizontal stacking). For example:

This may print:
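
A sketch of the missing stacking listing (the tensors a and b are assumptions):

```python
import torch

a = torch.randn(3, 4)
b = torch.randn(3, 4)
v = torch.vstack([a, b])  # stack vertically: shape (6, 4)
h = torch.hstack([a, b])  # stack horizontally: shape (3, 8)
print(v)
```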

The concatenation function cat() is similar:

You will get the same tensor:
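
A sketch using cat():

```python
import torch

a = torch.randn(3, 4)
b = torch.randn(3, 4)
c = torch.cat([a, b], dim=0)  # concatenate along dim 0, like vstack()
```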

The reverse is to split, e.g.,

It prints
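
The missing listing presumably used chunk(); a sketch:

```python
import torch

c = torch.randn(6, 4)
parts = torch.chunk(c, 2)  # split into 2 tensors along dim 0
print([p.shape for p in parts])
```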

This function tells how many pieces to split the tensor into, rather than the size of each piece. The latter is indeed more useful in deep learning (e.g., to split a tensor of a large dataset into many tensors of small batches). The equivalent function would be:
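
A sketch of the split() equivalent:

```python
import torch

c = torch.randn(6, 4)
parts = torch.split(c, 3, dim=0)  # each piece has size 3 along dim 0
```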

This should give you the same result as before. So split(c, 3, dim=0) means to split on dimension 0 such that each resulting tensor will be of size 3.

## Tensor Functions

PyTorch tensors can be treated as arrays, so you can often use them in the same way as NumPy arrays. For example, many common mathematical functions are available:

This prints:
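
The listing is missing; a sketch with a few common functions (the sample values are assumptions):

```python
import torch

x = torch.tensor([-1., 0., 1., 4.])
print(torch.abs(x))
print(torch.exp(x))
print(torch.sqrt(x))  # sqrt(-1) gives nan; no exception is raised
```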

Note that if a function is undefined (e.g., square root of negative numbers), nan will be the result, but no exception will be raised. In PyTorch, you have a function to check if the values of a tensor are nan:

You will get:
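
A sketch of the isnan() check:

```python
import torch

x = torch.tensor([-1., 0., 1., 4.])
y = torch.sqrt(x)
print(torch.isnan(y))  # only the first element is True
```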

Indeed, besides these defined functions, the Python operators can be applied to the tensors too:

You get:
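
A sketch of applying Python operators:

```python
import torch

x = torch.tensor([1., 2., 3.])
print(x + 1)   # element-wise addition
print(x * 2)   # element-wise multiplication
print(x ** 2)  # element-wise power
```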

Among these operators, matrix multiplication is especially important in deep learning. You can do it with:

This prints
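
A sketch of both matrix-multiplication forms:

```python
import torch

a = torch.randn(2, 3)
b = torch.randn(3, 4)
c = torch.matmul(a, b)  # explicit function call
d = a @ b               # the Python @ operator does the same
```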

These two are the same. Indeed, the Python @ operator can also be used for the vector dot product, e.g.:

It prints:
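
A sketch of the dot-product example (the vectors are assumptions):

```python
import torch

u = torch.tensor([1., 2., 3.])
v = torch.tensor([4., 5., 6.])
print(u @ v)  # tensor(32.)
```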

If you treat the values in a tensor as samples, you may also want to find some statistics about it. Some are provided in PyTorch too:

It prints:
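
A sketch with a few statistics functions:

```python
import torch

x = torch.randn(3, 4)
print(x.mean(), x.std())  # sample mean and standard deviation
print(x.max(), x.min())   # extreme values
```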

Linear algebra functions, however, are found in PyTorch’s linalg submodule. For example:

You will see:
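
The listing is missing; a sketch using linalg (the sample matrix is an assumption):

```python
import torch

x = torch.tensor([[2., 1.], [1., 2.]])
print(torch.linalg.norm(x))  # Frobenius norm
print(torch.linalg.det(x))   # determinant: 3.0
```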

And specifically for convolutional neural networks, padding a tensor is done with the following:

This prints:
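
A sketch of the pad() call matching the described padding:

```python
import torch
import torch.nn.functional as F

x = torch.randn(3, 4)
# the pad tuple is ordered last-dimension-first:
# (0, 2) on dim 1 (columns), then (1, 1) on dim 0 (rows)
y = F.pad(x, (0, 2, 1, 1))
print(y.shape)  # torch.Size([5, 6])
```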

This example of the pad() function creates (1,1) padding on dimension 0 and (0,2) on dimension 1. In other words, along dimension 0 (rows), one row of zeros is added at the beginning and one at the end; along dimension 1 (columns), no zeros are added at the beginning of each row but two are added at the end.

Finally, since PyTorch tensors can be considered arrays, you can use them directly with other tools such as matplotlib. Below is an example of plotting a surface using PyTorch tensors:
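
The plotting listing is missing; based on the reader comments below, it computed z = torch.sqrt(1 - xx**2 - (yy/2)**2), which yields NaNs wherever the expression under the root is negative. This sketch clamps to zero to avoid that (the grid ranges are assumptions):

```python
import torch
import matplotlib.pyplot as plt

x = torch.linspace(-1, 1, 50)
y = torch.linspace(-2, 2, 50)
xx, yy = torch.meshgrid(x, y, indexing="ij")
# clamp to zero so sqrt() never sees a negative value (avoids NaN)
z = torch.sqrt(torch.clamp(1 - xx**2 - (yy/2)**2, min=0))

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(xx.numpy(), yy.numpy(), z.numpy())
plt.show()
```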

The mesh grid produced the xx tensor as:

And the plot created is:

## Summary

In this tutorial, you discovered how to manipulate PyTorch tensors. Specifically, you learned:

• What is a tensor
• How to create various kinds of tensors in PyTorch
• How to reshape, slice, and manipulate tensors in PyTorch
• The common functions that can be applied to PyTorch tensors

## Get Started on Deep Learning with PyTorch!

#### Learn how to build deep learning models

...using the newly released PyTorch 2.0 library

Discover how in my new Ebook:
Deep Learning with PyTorch

It provides self-study tutorials with hundreds of working examples to turn you from a novice into an expert. It covers tensor operations, training, evaluation, hyperparameter optimization, and much more...

### 5 Responses to Manipulating Tensors in PyTorch

1. John William O'Meara June 17, 2023 at 5:41 am #

The line:
“z = torch.sqrt(1 – xx**2 – (yy/2)**2)”
returns a tensor with NaNs

• James Carmichael June 17, 2023 at 10:54 am #

Hi John…We will look into this; however, it would be helpful to know if you typed the code or copied and pasted it?

2. John William O'Meara June 17, 2023 at 9:11 pm #

Hi James,
I typed it. I played around with it a bit, as it appears that getting the square root of a negative number for the value of Z is the primary issue.
I inserted abs() inside torch.sqrt() and it gave a plot mostly similar to that shown, except with some laminar peaks at the four corners.

• James Carmichael June 18, 2023 at 8:14 am #

Hi John…Thank you for the update! That will likely help others as well.

3. John William O'Meara June 20, 2023 at 3:11 am #

Hi,
Were you able to find a solution to this?