The post Calculating Derivatives in PyTorch appeared first on Machine Learning Mastery.

Backpropagation, the algorithm known as the backbone of a neural network, optimizes the parameters of the network to minimize the error and achieve higher classification accuracy. Derivatives are at the heart of that process, and the concepts covered in this article will be used in later posts on deep learning for image processing and other computer vision problems.

After going through this tutorial, you’ll learn:

- How to calculate derivatives in PyTorch.
- How to use autograd in PyTorch to perform auto differentiation on tensors.
- About the computation graph that involves different nodes and leaves, allowing you to calculate the gradients in the simplest possible manner (using the chain rule).
- How to calculate partial derivatives in PyTorch.
- How to implement the derivative of functions with respect to multiple values.

Let’s get started.

Autograd, the automatic differentiation module in PyTorch, is used to calculate the derivatives and optimize the parameters in neural networks. It is intended primarily for gradient computations.

Before we start, let’s load up some necessary libraries we’ll use in this tutorial.

```python
import matplotlib.pyplot as plt
import torch
```

Now, let’s create a simple tensor and set its `requires_grad` parameter to `True`. This enables automatic differentiation and lets PyTorch evaluate the derivatives at the given value which, in this case, is 3.0.

```python
x = torch.tensor(3.0, requires_grad = True)
print("creating a tensor x: ", x)
```

```
creating a tensor x:  tensor(3., requires_grad=True)
```

We’ll use a simple equation $y=3x^2$ as an example and take the derivative with respect to the variable `x`. So, let’s create another tensor according to the given equation. Also, we’ll apply the `.backward()` method on the variable `y`, which forms an acyclic graph storing the computation history, and evaluate the result with `.grad` for the given value.

```python
y = 3 * x ** 2
print("Result of the equation is: ", y)
y.backward()
print("Derivative of the equation at x = 3 is: ", x.grad)
```

```
Result of the equation is:  tensor(27., grad_fn=<MulBackward0>)
Derivative of the equation at x = 3 is:  tensor(18.)
```

As you can see, we have obtained a value of 18, which is correct.

PyTorch computes derivatives by building a backward graph behind the scenes, with tensors and backward functions as the graph’s nodes. In this graph, whether PyTorch populates the derivative of a tensor depends on whether it is a leaf: by default, only leaf tensors with `requires_grad` set to `True` have their `.grad` attribute filled in after a backward pass. We won’t go into much detail about how the backward graph is created and utilized, because the goal here is to give you a high-level understanding of how PyTorch makes use of the graph to calculate derivatives.
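To make the leaf distinction concrete, here is a small sketch (the variable names are illustrative, not from the original post): a non-leaf tensor’s `.grad` stays empty unless you explicitly call `retain_grad()` on it.

```python
import torch

x = torch.tensor(2.0, requires_grad=True)  # leaf tensor
z = x * x                                  # non-leaf: the result of an operation
z.retain_grad()                            # ask PyTorch to keep z's gradient too
w = 2 * z                                  # w = 2x^2
w.backward()

print(x.is_leaf, z.is_leaf)  # True False
print(x.grad)                # tensor(8.) -> dw/dx = 4x = 8 at x = 2
print(z.grad)                # tensor(2.) -> dw/dz = 2
```

Without the `retain_grad()` call, `z.grad` would remain `None` even though `z` participated in the backward pass.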

So, let’s check how the tensors `x` and `y` look internally once they are created. For `x`:

```python
print('data attribute of the tensor:', x.data)
print('grad attribute of the tensor:', x.grad)
print('grad_fn attribute of the tensor:', x.grad_fn)
print("is_leaf attribute of the tensor:", x.is_leaf)
print("requires_grad attribute of the tensor:", x.requires_grad)
```

```
data attribute of the tensor: tensor(3.)
grad attribute of the tensor: tensor(18.)
grad_fn attribute of the tensor: None
is_leaf attribute of the tensor: True
requires_grad attribute of the tensor: True
```

and for `y`:

```python
print('data attribute of the tensor:', y.data)
print('grad attribute of the tensor:', y.grad)
print('grad_fn attribute of the tensor:', y.grad_fn)
print("is_leaf attribute of the tensor:", y.is_leaf)
print("requires_grad attribute of the tensor:", y.requires_grad)
```


As you can see, each tensor has been assigned a particular set of attributes.

The `data` attribute stores the tensor’s data, the `grad_fn` attribute tells about the node in the graph, and the `.grad` attribute holds the result of the derivative. Now that you have learned some basics of autograd and the computation graph in PyTorch, let’s take a slightly more complicated equation $y=6x^2+2x+4$ and calculate the derivative. The derivative of the equation is given by:

$$\frac{dy}{dx} = 12x+2$$

Evaluating the derivative at $x = 3$,

$$\left.\frac{dy}{dx}\right\vert_{x=3} = 12\times 3+2 = 38$$

Now, let’s see how PyTorch does that:

```python
x = torch.tensor(3.0, requires_grad = True)
y = 6 * x ** 2 + 2 * x + 4
print("Result of the equation is: ", y)
y.backward()
print("Derivative of the equation at x = 3 is: ", x.grad)
```

```
Result of the equation is:  tensor(64., grad_fn=<AddBackward0>)
Derivative of the equation at x = 3 is:  tensor(38.)
```

The derivative of the equation is 38, which is correct.
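One caveat worth a quick sketch (this example is illustrative, not from the original post): `.backward()` accumulates into `.grad` rather than overwriting it, so calling it repeatedly on the same leaf tensor adds the gradients up. The in-place `zero_()` method resets the accumulated value.

```python
import torch

x = torch.tensor(3.0, requires_grad=True)

for _ in range(2):
    y = 6 * x ** 2 + 2 * x + 4
    y.backward()
    print(x.grad)  # tensor(38.) after the first pass, tensor(76.) after the second

# reset the accumulated gradient in place before reusing the tensor
x.grad.zero_()
print(x.grad)  # tensor(0.)
```

This accumulation behavior is why training loops typically zero the gradients before each backward pass.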

PyTorch also allows us to calculate partial derivatives of functions. For example, suppose we have to apply partial differentiation to the following function,

$$f(u,v) = u^3+v^2+4uv$$

Its derivative with respect to $u$ is,

$$\frac{\partial f}{\partial u} = 3u^2 + 4v$$

Similarly, the derivative with respect to $v$ will be,

$$\frac{\partial f}{\partial v} = 2v + 4u$$

Now, let’s do it the PyTorch way, where $u = 3$ and $v = 4$.

We’ll create the `u`, `v`, and `f` tensors and apply the `.backward()` method on `f` in order to compute the derivative. Finally, we’ll evaluate the derivative using `.grad` with respect to the values of `u` and `v`.

```python
u = torch.tensor(3., requires_grad=True)
v = torch.tensor(4., requires_grad=True)
f = u**3 + v**2 + 4*u*v
print(u)
print(v)
print(f)
f.backward()
print("Partial derivative with respect to u: ", u.grad)
print("Partial derivative with respect to v: ", v.grad)
```

```
tensor(3., requires_grad=True)
tensor(4., requires_grad=True)
tensor(91., grad_fn=<AddBackward0>)
Partial derivative with respect to u:  tensor(43.)
Partial derivative with respect to v:  tensor(20.)
```

What if we have a function with multiple values and we need to calculate the derivative with respect to its multiple values? For this, we’ll make use of the `sum()` method to (1) produce a scalar-valued function, and then (2) take the derivative. This is how we can see the ‘function vs. derivative’ plot:

```python
# compute the derivative of the function with multiple values
x = torch.linspace(-20, 20, 20, requires_grad = True)
Y = x ** 2
y = torch.sum(Y)
y.backward()

# plotting the function and derivative
function_line, = plt.plot(x.detach().numpy(), Y.detach().numpy(), label = 'Function')
function_line.set_color("red")
derivative_line, = plt.plot(x.detach().numpy(), x.grad.detach().numpy(), label = 'Derivative')
derivative_line.set_color("green")
plt.xlabel('x')
plt.legend()
plt.show()
```

In the two `plot()` calls above, we extract the values from the PyTorch tensors so we can visualize them. The `.detach()` method prevents the graph from tracking further operations on the tensor, which makes it easy to convert a tensor to a NumPy array.
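A minimal sketch of this behavior (the variables here are illustrative, not part of the tutorial’s plotting code): the detached copy holds the same value but has `requires_grad=False`, which is why it can be handed to NumPy directly, while a tensor still attached to the graph cannot.

```python
import torch

x = torch.tensor(2.0, requires_grad=True)
y = x ** 2

y_detached = y.detach()          # same value, but cut off from the graph
print(y.requires_grad)           # True
print(y_detached.requires_grad)  # False

# a detached CPU tensor can be converted to NumPy directly
print(y_detached.numpy())        # 4.0
```

Calling `y.numpy()` without detaching would raise a `RuntimeError`, since `y` still requires grad.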

In this tutorial, you learned how to implement derivatives on various functions in PyTorch.

Particularly, you learned:

- How to calculate derivatives in PyTorch.
- How to use autograd in PyTorch to perform auto differentiation on tensors.
- About the computation graph that involves different nodes and leaves, allowing you to calculate the gradients in the simplest possible manner (using the chain rule).
- How to calculate partial derivatives in PyTorch.
- How to implement the derivative of functions with respect to multiple values.


The post Two-Dimensional Tensors in Pytorch appeared first on Machine Learning Mastery.

Let’s take a gray-scale image as an example, which is a two-dimensional matrix of numeric values, commonly known as pixels. Ranging from ‘0’ to ‘255’, each number represents a pixel intensity value. Here, the lowest intensity number (which is ‘0’) represents black regions in the image while the highest intensity number (which is ‘255’) represents white regions in the image. Using the PyTorch framework, this two-dimensional image or matrix can be converted to a two-dimensional tensor.

In the previous post, we learned about one-dimensional tensors in PyTorch and applied some useful tensor operations. In this tutorial, we’ll apply those operations to two-dimensional tensors using the PyTorch library. Specifically, we’ll learn:

- How to create two-dimensional tensors in PyTorch and explore their types and shapes.
- About slicing and indexing operations on two-dimensional tensors in detail.
- To apply a number of methods to tensors, such as tensor addition, multiplication, and more.

Let’s get started.

This tutorial is divided into five parts; they are:

- Types and shapes of two-dimensional tensors
- Converting two-dimensional tensors into NumPy arrays
- Converting pandas series to two-dimensional tensors
- Indexing and slicing operations on two-dimensional tensors
- Operations on two-dimensional tensors

Let’s first import a few necessary libraries we’ll use in this tutorial.

```python
import torch
import numpy as np
import pandas as pd
```

To check the types and shapes of the two-dimensional tensors, we’ll use the same methods from PyTorch, introduced previously for one-dimensional tensors. But, should it work the same way it did for the one-dimensional tensors?

Let’s demonstrate by converting a 2D list of integers to a 2D tensor object. As an example, we’ll create a 2D list and apply `torch.tensor()` for conversion.

```python
example_2D_list = [[5, 10, 15, 20],
                   [25, 30, 35, 40],
                   [45, 50, 55, 60]]
list_to_tensor = torch.tensor(example_2D_list)
print("Our New 2D Tensor from 2D List is: ", list_to_tensor)
```

```
Our New 2D Tensor from 2D List is:  tensor([[ 5, 10, 15, 20],
        [25, 30, 35, 40],
        [45, 50, 55, 60]])
```

As you can see, the `torch.tensor()` method also works well for two-dimensional tensors. Now, let’s use the `shape` attribute and the `size()` and `ndimension()` methods to return the shape, size, and dimensions of a tensor object.

```python
print("Getting the shape of tensor object: ", list_to_tensor.shape)
print("Getting the size of tensor object: ", list_to_tensor.size())
print("Getting the dimensions of tensor object: ", list_to_tensor.ndimension())
```

```
Getting the shape of tensor object:  torch.Size([3, 4])
Getting the size of tensor object:  torch.Size([3, 4])
Getting the dimensions of tensor object:  2
```

PyTorch allows us to convert a two-dimensional tensor to a NumPy array and then back to a tensor. Let’s find out how.

```python
# Converting two_D tensor to numpy array
twoD_tensor_to_numpy = list_to_tensor.numpy()
print("Converting two_Dimensional tensor to numpy array:")
print("Numpy array after conversion: ", twoD_tensor_to_numpy)
print("Data type after conversion: ", twoD_tensor_to_numpy.dtype)

print("***************************************************************")

# Converting numpy array back to a tensor
back_to_tensor = torch.from_numpy(twoD_tensor_to_numpy)
print("Converting numpy array back to two_Dimensional tensor:")
print("Tensor after conversion:", back_to_tensor)
print("Data type after conversion: ", back_to_tensor.dtype)
```

```
Converting two_Dimensional tensor to numpy array:
Numpy array after conversion:  [[ 5 10 15 20]
 [25 30 35 40]
 [45 50 55 60]]
Data type after conversion:  int64
***************************************************************
Converting numpy array back to two_Dimensional tensor:
Tensor after conversion: tensor([[ 5, 10, 15, 20],
        [25, 30, 35, 40],
        [45, 50, 55, 60]])
Data type after conversion:  torch.int64
```
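One detail worth knowing about `torch.from_numpy()`: it shares memory with the source array rather than copying it, so modifying the array also modifies the tensor. A quick sketch (illustrative variables, not from the original examples):

```python
import numpy as np
import torch

arr = np.array([1, 2, 3])
t = torch.from_numpy(arr)   # t shares memory with arr

arr[0] = 99                 # changing the array...
print(t)                    # tensor([99,  2,  3]) ...also changes the tensor
```

If you need an independent copy, `torch.tensor(arr)` copies the data instead of sharing it.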

Similarly, we can also convert a pandas DataFrame to a tensor. As with the one-dimensional tensors, we’ll use the same steps for the conversion: get the NumPy array from the `values` attribute, and then apply `torch.from_numpy()` to convert the DataFrame to a tensor. Here is how we’ll do it.

```python
# Converting Pandas Dataframe to a Tensor
dataframe = pd.DataFrame({'x':[22,24,26],'y':[42,52,62]})

print("Pandas to numpy conversion: ", dataframe.values)
print("Data type before tensor conversion: ", dataframe.values.dtype)

print("***********************************************")

pandas_to_tensor = torch.from_numpy(dataframe.values)
print("Getting new tensor: ", pandas_to_tensor)
print("Data type after conversion to tensor: ", pandas_to_tensor.dtype)
```

```
Pandas to numpy conversion:  [[22 42]
 [24 52]
 [26 62]]
Data type before tensor conversion:  int64
***********************************************
Getting new tensor:  tensor([[22, 42],
        [24, 52],
        [26, 62]])
Data type after conversion to tensor:  torch.int64
```

For indexing operations, different elements in a tensor object can be accessed using square brackets. You can simply put the corresponding indices in square brackets to access the desired elements in a tensor.

In the example below, we’ll create a tensor and access certain elements using two different methods. Note that indices start at zero, so the index value is always one less than the element’s position in a row or column of a two-dimensional tensor.

```python
example_tensor = torch.tensor([[10, 20, 30, 40],
                               [50, 60, 70, 80],
                               [90, 100, 110, 120]])
print("Accessing element in 2nd row and 2nd column: ", example_tensor[1, 1])
print("Accessing element in 2nd row and 2nd column: ", example_tensor[1][1])
print("********************************************************")
print("Accessing element in 3rd row and 4th column: ", example_tensor[2, 3])
print("Accessing element in 3rd row and 4th column: ", example_tensor[2][3])
```

```
Accessing element in 2nd row and 2nd column:  tensor(60)
Accessing element in 2nd row and 2nd column:  tensor(60)
********************************************************
Accessing element in 3rd row and 4th column:  tensor(120)
Accessing element in 3rd row and 4th column:  tensor(120)
```

What if we need to access two or more elements at the same time? That’s where tensor slicing comes into play. Let’s use the previous example to access the first two elements of the second row and the first three elements of the third row.

```python
example_tensor = torch.tensor([[10, 20, 30, 40],
                               [50, 60, 70, 80],
                               [90, 100, 110, 120]])
print("Accessing first two elements of the second row: ", example_tensor[1, 0:2])
print("Accessing first two elements of the second row: ", example_tensor[1][0:2])
print("********************************************************")
print("Accessing first three elements of the third row: ", example_tensor[2, 0:3])
print("Accessing first three elements of the third row: ", example_tensor[2][0:3])
```

```
Accessing first two elements of the second row:  tensor([50, 60])
Accessing first two elements of the second row:  tensor([50, 60])
********************************************************
Accessing first three elements of the third row:  tensor([ 90, 100, 110])
Accessing first three elements of the third row:  tensor([ 90, 100, 110])
```

While there are a lot of operations you can apply on two-dimensional tensors using the PyTorch framework, here, we’ll introduce you to tensor addition, and scalar and matrix multiplication.

Adding two tensors is similar to matrix addition. It’s quite a straightforward process, as you simply need the addition (+) operator to perform the operation. Let’s add two tensors in the example below.

```python
A = torch.tensor([[5, 10], [50, 60], [100, 200]])
B = torch.tensor([[10, 20], [60, 70], [200, 300]])
add = A + B
print("Adding A and B to get: ", add)
```

```
Adding A and B to get:  tensor([[ 15,  30],
        [110, 130],
        [300, 500]])
```

Scalar multiplication in two-dimensional tensors is also identical to scalar multiplication in matrices. For instance, by multiplying a tensor with a scalar, say a scalar 4, you’ll be multiplying every element in a tensor by 4.

```python
new_tensor = torch.tensor([[1, 2, 3], [4, 5, 6]])
mul_scalar = 4 * new_tensor
print("result of scalar multiplication: ", mul_scalar)
```

```
result of scalar multiplication:  tensor([[ 4,  8, 12],
        [16, 20, 24]])
```

Coming to the multiplication of two two-dimensional tensors, `torch.mm()` in PyTorch makes things easier for us. As with matrix multiplication in linear algebra, the number of columns of tensor A (a 2×3 tensor, so 3 columns) must equal the number of rows of tensor B (a 3×2 tensor, so 3 rows).

```python
A = torch.tensor([[3, 2, 1], [1, 2, 1]])
B = torch.tensor([[3, 2], [1, 1], [2, 1]])
A_mult_B = torch.mm(A, B)
print("multiplying A with B: ", A_mult_B)
```

```
multiplying A with B:  tensor([[13,  9],
        [ 7,  5]])
```
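If the inner dimensions do not line up, `torch.mm()` raises a `RuntimeError` rather than returning a result. A short sketch (the tensor `C` here is illustrative, not from the original examples):

```python
import torch

A = torch.tensor([[3, 2, 1], [1, 2, 1]])  # shape (2, 3)
C = torch.tensor([[1, 2], [3, 4]])        # shape (2, 2) -- only 2 rows

try:
    torch.mm(A, C)                        # 3 columns vs. 2 rows: incompatible
except RuntimeError as err:
    print("shape mismatch:", err)
```

Checking `tensor.shape` before multiplying is an easy way to avoid this error.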

Developed at the same time as TensorFlow, PyTorch used to have a simpler syntax until TensorFlow adopted Keras in its 2.x version. To learn the basics of PyTorch, you may want to read the PyTorch tutorials:

In particular, the basics of PyTorch tensors can be found in the Tensor tutorial page:

There are also quite a few books on PyTorch that are suitable for beginners. A more recently published book should be recommended as the tools and syntax are actively evolving. One example is

- Deep Learning with PyTorch by Eli Stevens, Luca Antiga, and Thomas Viehmann, 2020.

https://www.manning.com/books/deep-learning-with-pytorch

In this tutorial, you learned about two-dimensional tensors in PyTorch.

Specifically, you learned:

- How to create two-dimensional tensors in PyTorch and explore their types and shapes.
- About slicing and indexing operations on two-dimensional tensors in detail.
- To apply a number of methods to tensors, such as tensor addition, multiplication, and more.


The post One-Dimensional Tensors in Pytorch appeared first on Machine Learning Mastery.

PyTorch is primarily focused on tensor operations, where a tensor can be a number, a matrix, or a multi-dimensional array.

In this tutorial, we will perform some basic operations on one-dimensional tensors. Tensors are complex mathematical objects and an essential part of the PyTorch library, so before going into more detailed and advanced concepts, one should know the basics.

After going through this tutorial, you will:

- Understand the basics of one-dimensional tensor operations in PyTorch.
- Know about tensor types and shapes and perform tensor slicing and indexing operations.
- Be able to apply some methods on tensor objects, such as mean, standard deviation, addition, multiplication, and more.

Let’s get started.

First off, let’s import a few libraries we’ll use in this tutorial.

```python
import torch
import numpy as np
import pandas as pd
```

If you have experience in other programming languages, the easiest way to understand a tensor is to consider it as a multidimensional array. A one-dimensional tensor, then, is simply a one-dimensional array, or a vector. To convert a list of integers to a tensor, apply the `torch.tensor()` constructor. For instance, we’ll take a list of integers and convert it to various tensor objects.

```python
int_to_tensor = torch.tensor([10, 11, 12, 13])
print("Tensor object type after conversion: ", int_to_tensor.dtype)
print("Tensor object type after conversion: ", int_to_tensor.type())
```

```
Tensor object type after conversion:  torch.int64
Tensor object type after conversion:  torch.LongTensor
```

Also, you can apply the same `torch.tensor()` method to convert a float list to a float tensor.

```python
float_to_tensor = torch.tensor([10.0, 11.0, 12.0, 13.0])
print("Tensor object type after conversion: ", float_to_tensor.dtype)
print("Tensor object type after conversion: ", float_to_tensor.type())
```

```
Tensor object type after conversion:  torch.float32
Tensor object type after conversion:  torch.FloatTensor
```

Note that the elements of a list that need to be converted into a tensor should share a compatible type. Moreover, if you want to convert a list to a certain tensor type, torch also allows you to do that. The code lines below, for example, will convert a list of integers to a float tensor.

```python
int_list_to_float_tensor = torch.FloatTensor([10, 11, 12, 13])
int_list_to_float_tensor.type()
print("Tensor type after conversion: ", int_list_to_float_tensor.type())
```

```
Tensor type after conversion:  torch.FloatTensor
```
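As a side note on types (a small sketch, not from the original post): in recent PyTorch versions, a list that mixes integers and floats is promoted to a floating-point tensor, and an explicit `dtype` argument overrides the inference entirely.

```python
import torch

mixed = torch.tensor([1, 2.0, 3])   # ints are promoted to floats
print(mixed.dtype)                  # torch.float32

# an explicit dtype always wins over type inference
explicit = torch.tensor([1, 2, 3], dtype=torch.float64)
print(explicit.type())              # torch.DoubleTensor
```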

Similarly, the `size()` and `ndimension()` methods allow you to find the size and dimensions of a tensor object.

```python
print("Size of the int_list_to_float_tensor: ", int_list_to_float_tensor.size())
print("Dimensions of the int_list_to_float_tensor: ", int_list_to_float_tensor.ndimension())
```

```
Size of the int_list_to_float_tensor:  torch.Size([4])
Dimensions of the int_list_to_float_tensor:  1
```

For reshaping a tensor object, the `view()` method can be applied. It takes `rows` and `columns` as arguments. As an example, let’s use this method to reshape `int_list_to_float_tensor`.

```python
reshaped_tensor = int_list_to_float_tensor.view(4, 1)
print("Original tensor: ", int_list_to_float_tensor)
print("New size of the tensor: ", reshaped_tensor)
```

```
Original tensor:  tensor([10., 11., 12., 13.])
New size of the tensor:  tensor([[10.],
        [11.],
        [12.],
        [13.]])
```

As you can see, the `view()` method has changed the size of the tensor to `torch.Size([4, 1])`, with 4 rows and 1 column.

While the number of elements in a tensor object must remain constant after the `view()` method is applied, you can use `-1` (such as `reshaped_tensor.view(-1, 1)`) to let PyTorch infer one dimension of a dynamically sized tensor.
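A quick sketch of the inferred dimension (the tensor `t` here is illustrative): passing `-1` lets `view()` work out that axis from the total number of elements.

```python
import torch

t = torch.tensor([10.0, 11.0, 12.0, 13.0])

# -1 tells PyTorch to infer that dimension from the number of elements
col = t.view(-1, 1)
print(col.shape)   # torch.Size([4, 1])

row = t.view(1, -1)
print(row.shape)   # torch.Size([1, 4])
```

Only one dimension may be `-1` per call; two unknowns would be ambiguous.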

PyTorch also allows you to convert NumPy arrays to tensors. You can use `torch.from_numpy()` for this operation. Let’s take a NumPy array and apply the operation.

```python
numpy_arr = np.array([10.0, 11.0, 12.0, 13.0])
from_numpy_to_tensor = torch.from_numpy(numpy_arr)
print("dtype of the tensor: ", from_numpy_to_tensor.dtype)
print("type of the tensor: ", from_numpy_to_tensor.type())
```

```
dtype of the tensor:  torch.float64
type of the tensor:  torch.DoubleTensor
```

Similarly, you can convert the tensor object back to a NumPy array. Let’s use the previous example to show how it’s done.

```python
tensor_to_numpy = from_numpy_to_tensor.numpy()
print("back to numpy from tensor: ", tensor_to_numpy)
print("dtype of converted numpy array: ", tensor_to_numpy.dtype)
```

```
back to numpy from tensor:  [10. 11. 12. 13.]
dtype of converted numpy array:  float64
```

You can also convert a pandas series to a tensor. To do so, first get a NumPy array from the series via its `values` attribute, and then convert that array with `torch.from_numpy()`.

```python
pandas_series = pd.Series([1, 0.2, 3, 13.1])
store_with_numpy = torch.from_numpy(pandas_series.values)
print("Stored tensor in numpy array: ", store_with_numpy)
print("dtype of stored tensor: ", store_with_numpy.dtype)
print("type of stored tensor: ", store_with_numpy.type())
```

```
Stored tensor in numpy array:  tensor([ 1.0000,  0.2000,  3.0000, 13.1000], dtype=torch.float64)
dtype of stored tensor:  torch.float64
type of stored tensor:  torch.DoubleTensor
```

Furthermore, the PyTorch framework allows us to do a lot with tensors. For example, the `item()` method returns a Python number from a tensor, and the `tolist()` method returns a list.

```python
new_tensor = torch.tensor([10, 11, 12, 13])
print("the second item is", new_tensor[1].item())
tensor_to_list = new_tensor.tolist()
print('tensor:', new_tensor, "\nlist:", tensor_to_list)
```

```
the second item is 11
tensor: tensor([10, 11, 12, 13])
list: [10, 11, 12, 13]
```

Indexing and slicing operations are almost the same in PyTorch as in Python. Therefore, the first index always starts at 0, and the last index is less than the total length of the tensor. Use square brackets to access any number in a tensor.

```python
tensor_index = torch.tensor([0, 1, 2, 3])
print("Check value at index 0:", tensor_index[0])
print("Check value at index 3:", tensor_index[3])
```

```
Check value at index 0: tensor(0)
Check value at index 3: tensor(3)
```

Like a list in Python, you can also perform slicing operations on the values in a tensor. Moreover, the PyTorch library allows you to change certain values in a tensor as well.

Let’s take an example to check how these operations can be applied.

```python
example_tensor = torch.tensor([50, 11, 22, 33, 44])
slicing_tensor = example_tensor[1:4]
print("example tensor : ", example_tensor)
print("subset of example tensor:", slicing_tensor)
```

```
example tensor :  tensor([50, 11, 22, 33, 44])
subset of example tensor: tensor([11, 22, 33])
```

Now, let’s change the value at index 3 of `example_tensor`:

```python
print("value at index 3 of example tensor:", example_tensor[3])
example_tensor[3] = 0
print("new tensor:", example_tensor)
```

```
value at index 3 of example tensor: tensor(33)
new tensor: tensor([50, 11, 22,  0, 44])
```

In this section, we’ll review some statistical methods that can be applied on tensor objects.

The `min()` and `max()` methods are employed to find the minimum and maximum values in a tensor. Here is how they work: we’ll use a `sample_tensor` as an example to apply these methods.

```python
sample_tensor = torch.tensor([5, 4, 3, 2, 1])
min_value = sample_tensor.min()
max_value = sample_tensor.max()
print("check minimum value in the tensor: ", min_value)
print("check maximum value in the tensor: ", max_value)
```

```
check minimum value in the tensor:  tensor(1)
check maximum value in the tensor:  tensor(5)
```

Mean and standard deviation are often used while doing statistical operations on tensors. You can apply these two metrics using the `.mean()` and `.std()` methods in PyTorch. Let’s use an example to see how these two metrics are calculated.

Let’s use an example to see how these two metrics are calculated.

```python
mean_std_tensor = torch.tensor([-1.0, 2.0, 1, -2])
Mean = mean_std_tensor.mean()
print("mean of mean_std_tensor: ", Mean)
std_dev = mean_std_tensor.std()
print("standard deviation of mean_std_tensor: ", std_dev)
```

```
mean of mean_std_tensor:  tensor(0.)
standard deviation of mean_std_tensor:  tensor(1.8257)
```
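As a sanity check (a small sketch, not from the original post): PyTorch’s `std()` applies Bessel’s correction, dividing the squared deviations by $n-1$ rather than $n$, and we can reproduce the value by hand.

```python
import torch

t = torch.tensor([-1.0, 2.0, 1.0, -2.0])

# PyTorch's std() uses Bessel's correction: divide by (n - 1), not n
n = t.numel()
manual_std = torch.sqrt(((t - t.mean()) ** 2).sum() / (n - 1))

print(t.std())      # tensor(1.8257)
print(manual_std)   # tensor(1.8257)
```

Here the mean is 0, the sum of squared deviations is 10, and $\sqrt{10/3} \approx 1.8257$.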

Addition and Multiplication operations can be easily applied on tensors in Pytorch. In this section, we’ll create two one-dimensional tensors to demonstrate how these operations can be used.

```python
a = torch.tensor([1, 1])
b = torch.tensor([2, 2])
add = a + b
multiply = a * b
print("addition of two tensors: ", add)
print("multiplication of two tensors: ", multiply)
```

```
addition of two tensors:  tensor([3, 3])
multiplication of two tensors:  tensor([2, 2])
```

For your convenience, below are all the examples above tied together so you can try them in one shot:

```python
import torch
import numpy as np
import pandas as pd

int_to_tensor = torch.tensor([10, 11, 12, 13])
print("Tensor object type after conversion: ", int_to_tensor.dtype)
print("Tensor object type after conversion: ", int_to_tensor.type())

float_to_tensor = torch.tensor([10.0, 11.0, 12.0, 13.0])
print("Tensor object type after conversion: ", float_to_tensor.dtype)
print("Tensor object type after conversion: ", float_to_tensor.type())

int_list_to_float_tensor = torch.FloatTensor([10, 11, 12, 13])
int_list_to_float_tensor.type()
print("Tensor type after conversion: ", int_list_to_float_tensor.type())

print("Size of the int_list_to_float_tensor: ", int_list_to_float_tensor.size())
print("Dimensions of the int_list_to_float_tensor: ", int_list_to_float_tensor.ndimension())

reshaped_tensor = int_list_to_float_tensor.view(4, 1)
print("Original tensor: ", int_list_to_float_tensor)
print("New size of the tensor: ", reshaped_tensor)

numpy_arr = np.array([10.0, 11.0, 12.0, 13.0])
from_numpy_to_tensor = torch.from_numpy(numpy_arr)
print("dtype of the tensor: ", from_numpy_to_tensor.dtype)
print("type of the tensor: ", from_numpy_to_tensor.type())

tensor_to_numpy = from_numpy_to_tensor.numpy()
print("back to numpy from tensor: ", tensor_to_numpy)
print("dtype of converted numpy array: ", tensor_to_numpy.dtype)

pandas_series = pd.Series([1, 0.2, 3, 13.1])
store_with_numpy = torch.from_numpy(pandas_series.values)
print("Stored tensor in numpy array: ", store_with_numpy)
print("dtype of stored tensor: ", store_with_numpy.dtype)
print("type of stored tensor: ", store_with_numpy.type())

new_tensor = torch.tensor([10, 11, 12, 13])
print("the second item is", new_tensor[1].item())
tensor_to_list = new_tensor.tolist()
print('tensor:', new_tensor, "\nlist:", tensor_to_list)

tensor_index = torch.tensor([0, 1, 2, 3])
print("Check value at index 0:", tensor_index[0])
print("Check value at index 3:", tensor_index[3])

example_tensor = torch.tensor([50, 11, 22, 33, 44])
slicing_tensor = example_tensor[1:4]
print("example tensor : ", example_tensor)
print("subset of example tensor:", slicing_tensor)

print("value at index 3 of example tensor:", example_tensor[3])
example_tensor[3] = 0
print("new tensor:", example_tensor)

sample_tensor = torch.tensor([5, 4, 3, 2, 1])
min_value = sample_tensor.min()
max_value = sample_tensor.max()
print("check minimum value in the tensor: ", min_value)
print("check maximum value in the tensor: ", max_value)

mean_std_tensor = torch.tensor([-1.0, 2.0, 1, -2])
Mean = mean_std_tensor.mean()
print("mean of mean_std_tensor: ", Mean)
std_dev = mean_std_tensor.std()
print("standard deviation of mean_std_tensor: ", std_dev)

a = torch.tensor([1, 1])
b = torch.tensor([2, 2])
add = a + b
multiply = a * b
print("addition of two tensors: ", add)
print("multiplication of two tensors: ", multiply)
```

Developed at the same time as TensorFlow, PyTorch used to have a simpler syntax until TensorFlow adopted Keras in its 2.x version. To learn the basics of PyTorch, you may want to read the PyTorch tutorials:

In particular, the basics of PyTorch tensors can be found in the Tensor tutorial page:

There are also quite a few books on PyTorch that are suitable for beginners. A more recently published book should be recommended as the tools and syntax are actively evolving. One example is

- Deep Learning with PyTorch by Eli Stevens, Luca Antiga, and Thomas Viehmann, 2020.

https://www.manning.com/books/deep-learning-with-pytorch

In this tutorial, you’ve discovered how to use one-dimensional tensors in PyTorch.

Specifically, you learned:

- The basics of one-dimensional tensor operations in PyTorch
- About tensor types and shapes and how to perform tensor slicing and indexing operations
- How to apply some methods on tensor objects, such as mean, standard deviation, addition, and multiplication
