Last Updated on December 6, 2019
In deep learning it is common to see a lot of discussion around tensors as the cornerstone data structure.
Tensor even appears in the name of Google’s flagship machine learning library: “TensorFlow”.
Tensors are a type of data structure used in linear algebra, and like vectors and matrices, you can calculate arithmetic operations with tensors.
In this tutorial, you will discover what tensors are and how to manipulate them in Python with NumPy.
After completing this tutorial, you will know:
- That tensors are a generalization of matrices and are represented using n-dimensional arrays.
- How to implement element-wise operations with tensors.
- How to perform the tensor product.
Kick-start your project with my new book Linear Algebra for Machine Learning, including step-by-step tutorials and the Python source code files for all examples.
Let’s get started.
- Update Oct/2019: Fixed typo in the names of array indexes (thanks Henry Chan).

A Gentle Introduction to Tensors for Machine Learning with NumPy
Photo by Daniel Lombraña González, some rights reserved.
Tutorial Overview
This tutorial is divided into 4 parts; they are:
- What are Tensors?
- Tensors in Python
- Element-Wise Tensor Operations
- Tensor Product
Need help with Linear Algebra for Machine Learning?
Take my free 7-day email crash course now (with sample code).
Click to sign-up and also get a free PDF Ebook version of the course.
What are Tensors?
A tensor is a generalization of vectors and matrices and is easily understood as a multidimensional array.
In the general case, an array of numbers arranged on a regular grid with a variable number of axes is known as a tensor.
— Page 33, Deep Learning, 2016.
A vector is a one-dimensional or first order tensor and a matrix is a two-dimensional or second order tensor.
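The idea of order can be made concrete with a quick check in NumPy; this is a small sketch, and the variable names are illustrative:

```python
from numpy import array

v = array([1, 2, 3])                  # a vector: order-1 tensor
m = array([[1, 2], [3, 4]])           # a matrix: order-2 tensor
t = array([[[1], [2]], [[3], [4]]])   # an order-3 tensor

# ndim reports the order (number of axes) of each tensor
print(v.ndim)  # 1
print(m.ndim)  # 2
print(t.ndim)  # 3
```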
Tensor notation is much like matrix notation with a capital letter representing a tensor and lowercase letters with subscript integers representing scalar values within the tensor.
     t111, t121, t131     t112, t122, t132     t113, t123, t133
T = (t211, t221, t231), (t212, t222, t232), (t213, t223, t233)
     t311, t321, t331     t312, t322, t332     t313, t323, t333
Many of the operations that can be performed with scalars, vectors, and matrices can be reformulated to be performed with tensors.
As a tool, tensors and tensor algebra are widely used in the fields of physics and engineering. The term and set of techniques are also known in machine learning because the training and operation of deep learning models can be described in terms of tensors.
Tensors in Python
Like vectors and matrices, tensors can be represented in Python using the N-dimensional array (ndarray).
A tensor can be defined in-line to the constructor of array() as a list of lists.
The example below defines a 3x3x3 tensor as a NumPy ndarray. Three dimensions is easier to wrap your head around. Here, we first define rows, then a list of rows stacked as columns, then a list of columns stacked as levels in a cube.
# create tensor
from numpy import array
T = array([
  [[1,2,3],    [4,5,6],    [7,8,9]],
  [[11,12,13], [14,15,16], [17,18,19]],
  [[21,22,23], [24,25,26], [27,28,29]],
  ])
print(T.shape)
print(T)
Running the example first prints the shape of the tensor, then the values of the tensor itself.
You can see that, at least in three-dimensions, the tensor is printed as a series of matrices, one for each layer. For this 3D tensor, axis 0 specifies the level, axis 1 specifies the row, and axis 2 specifies the column.
(3, 3, 3)

[[[ 1  2  3]
  [ 4  5  6]
  [ 7  8  9]]

 [[11 12 13]
  [14 15 16]
  [17 18 19]]

 [[21 22 23]
  [24 25 26]
  [27 28 29]]]
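The axis interpretation above can be confirmed by indexing into the tensor; a small sketch, using zero-based indices:

```python
from numpy import array

T = array([
  [[1,2,3],    [4,5,6],    [7,8,9]],
  [[11,12,13], [14,15,16], [17,18,19]],
  [[21,22,23], [24,25,26], [27,28,29]],
  ])
# index as T[level, row, column]
print(T[0, 0, 0])  # 1: level 0, row 0, column 0
print(T[1, 0, 0])  # 11: level 1, row 0, column 0
print(T[0, 1, 0])  # 4: level 0, row 1, column 0
print(T[0, 0, 1])  # 2: level 0, row 0, column 1
```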
Element-Wise Tensor Operations
As with matrices, we can perform element-wise arithmetic between tensors.
In this section, we will work through the four main arithmetic operations.
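Note that element-wise arithmetic requires the tensors to have compatible shapes; two tensors of the same shape always qualify, whereas mismatched shapes raise an error. A minimal sketch with illustrative shapes:

```python
from numpy import array

A = array([[[1, 2], [3, 4]]])  # shape (1, 2, 2)
B = array([[[1, 2, 3]]])       # shape (1, 1, 3)
# adding tensors whose shapes do not line up raises a ValueError
try:
    C = A + B
except ValueError:
    print("shapes (1, 2, 2) and (1, 1, 3) are incompatible")
```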
Tensor Addition
The element-wise addition of two tensors with the same dimensions results in a new tensor with the same dimensions where each scalar value is the element-wise addition of the scalars in the parent tensors.
     a111, a121, a131     a112, a122, a132
A = (a211, a221, a231), (a212, a222, a232)

     b111, b121, b131     b112, b122, b132
B = (b211, b221, b231), (b212, b222, b232)

C = A + B

     a111 + b111, a121 + b121, a131 + b131     a112 + b112, a122 + b122, a132 + b132
C = (a211 + b211, a221 + b221, a231 + b231), (a212 + b212, a222 + b222, a232 + b232)
In NumPy, we can add tensors directly by adding arrays.
# tensor addition
from numpy import array
A = array([
  [[1,2,3],    [4,5,6],    [7,8,9]],
  [[11,12,13], [14,15,16], [17,18,19]],
  [[21,22,23], [24,25,26], [27,28,29]],
  ])
B = array([
  [[1,2,3],    [4,5,6],    [7,8,9]],
  [[11,12,13], [14,15,16], [17,18,19]],
  [[21,22,23], [24,25,26], [27,28,29]],
  ])
C = A + B
print(C)
Running the example prints the addition of the two parent tensors.
[[[ 2  4  6]
  [ 8 10 12]
  [14 16 18]]

 [[22 24 26]
  [28 30 32]
  [34 36 38]]

 [[42 44 46]
  [48 50 52]
  [54 56 58]]]
Tensor Subtraction
The element-wise subtraction of one tensor from another tensor with the same dimensions results in a new tensor with the same dimensions where each scalar value is the element-wise subtraction of the scalars in the parent tensors.
     a111, a121, a131     a112, a122, a132
A = (a211, a221, a231), (a212, a222, a232)

     b111, b121, b131     b112, b122, b132
B = (b211, b221, b231), (b212, b222, b232)

C = A - B

     a111 - b111, a121 - b121, a131 - b131     a112 - b112, a122 - b122, a132 - b132
C = (a211 - b211, a221 - b221, a231 - b231), (a212 - b212, a222 - b222, a232 - b232)
In NumPy, we can subtract tensors directly by subtracting arrays.
# tensor subtraction
from numpy import array
A = array([
  [[1,2,3],    [4,5,6],    [7,8,9]],
  [[11,12,13], [14,15,16], [17,18,19]],
  [[21,22,23], [24,25,26], [27,28,29]],
  ])
B = array([
  [[1,2,3],    [4,5,6],    [7,8,9]],
  [[11,12,13], [14,15,16], [17,18,19]],
  [[21,22,23], [24,25,26], [27,28,29]],
  ])
C = A - B
print(C)
Running the example prints the result of subtracting the second tensor from the first.
[[[0 0 0]
  [0 0 0]
  [0 0 0]]

 [[0 0 0]
  [0 0 0]
  [0 0 0]]

 [[0 0 0]
  [0 0 0]
  [0 0 0]]]
Tensor Hadamard Product
The element-wise multiplication of one tensor with another tensor with the same dimensions results in a new tensor with the same dimensions where each scalar value is the element-wise multiplication of the scalars in the parent tensors.
As with matrices, the operation is referred to as the Hadamard Product to differentiate it from tensor multiplication. Here, we will use the “o” operator to indicate the Hadamard product operation between tensors.
     a111, a121, a131     a112, a122, a132
A = (a211, a221, a231), (a212, a222, a232)

     b111, b121, b131     b112, b122, b132
B = (b211, b221, b231), (b212, b222, b232)

C = A o B

     a111 * b111, a121 * b121, a131 * b131     a112 * b112, a122 * b122, a132 * b132
C = (a211 * b211, a221 * b221, a231 * b231), (a212 * b212, a222 * b222, a232 * b232)
In NumPy, we can multiply tensors directly by multiplying arrays.
# tensor Hadamard product
from numpy import array
A = array([
  [[1,2,3],    [4,5,6],    [7,8,9]],
  [[11,12,13], [14,15,16], [17,18,19]],
  [[21,22,23], [24,25,26], [27,28,29]],
  ])
B = array([
  [[1,2,3],    [4,5,6],    [7,8,9]],
  [[11,12,13], [14,15,16], [17,18,19]],
  [[21,22,23], [24,25,26], [27,28,29]],
  ])
C = A * B
print(C)
Running the example prints the result of multiplying the tensors.
[[[  1   4   9]
  [ 16  25  36]
  [ 49  64  81]]

 [[121 144 169]
  [196 225 256]
  [289 324 361]]

 [[441 484 529]
  [576 625 676]
  [729 784 841]]]
Tensor Division
The element-wise division of one tensor by another tensor with the same dimensions results in a new tensor with the same dimensions where each scalar value is the element-wise division of the scalars in the parent tensors.
     a111, a121, a131     a112, a122, a132
A = (a211, a221, a231), (a212, a222, a232)

     b111, b121, b131     b112, b122, b132
B = (b211, b221, b231), (b212, b222, b232)

C = A / B

     a111 / b111, a121 / b121, a131 / b131     a112 / b112, a122 / b122, a132 / b132
C = (a211 / b211, a221 / b221, a231 / b231), (a212 / b212, a222 / b222, a232 / b232)
In NumPy, we can divide tensors directly by dividing arrays.
# tensor division
from numpy import array
A = array([
  [[1,2,3],    [4,5,6],    [7,8,9]],
  [[11,12,13], [14,15,16], [17,18,19]],
  [[21,22,23], [24,25,26], [27,28,29]],
  ])
B = array([
  [[1,2,3],    [4,5,6],    [7,8,9]],
  [[11,12,13], [14,15,16], [17,18,19]],
  [[21,22,23], [24,25,26], [27,28,29]],
  ])
C = A / B
print(C)
Running the example prints the result of dividing the tensors.
[[[ 1.  1.  1.]
  [ 1.  1.  1.]
  [ 1.  1.  1.]]

 [[ 1.  1.  1.]
  [ 1.  1.  1.]
  [ 1.  1.  1.]]

 [[ 1.  1.  1.]
  [ 1.  1.  1.]
  [ 1.  1.  1.]]]
Tensor Product
The tensor product operator is often denoted as a circle with a small x in the middle. We will denote it here as “(x)”.
Given a tensor A with q dimensions and tensor B with r dimensions, the product of these tensors will be a new tensor with the order of q + r or, said another way, q + r dimensions.
The tensor product is not limited to tensors, but can also be performed on matrices and vectors, which can be a good place to practice in order to develop the intuition for higher dimensions.
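The order rule can be checked directly in NumPy: the tensor product of an order-q tensor and an order-r tensor has order q + r. A quick sketch with illustrative values:

```python
from numpy import array, tensordot

A = array([[1, 2], [3, 4]])   # order 2, shape (2, 2)
B = array([1, 2, 3])          # order 1, shape (3,)
C = tensordot(A, B, axes=0)   # tensor product
print(C.ndim)   # 3, i.e. q + r = 2 + 1
print(C.shape)  # (2, 2, 3)
```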
Let’s take a look at the tensor product for vectors.
a = (a1, a2)

b = (b1, b2)

c = a (x) b

     a1 * [b1, b2]
c = (a2 * [b1, b2])
Or, unrolled:
     a1 * b1, a1 * b2
c = (a2 * b1, a2 * b2)
Let’s take a look at the tensor product for matrices.
     a11, a12
A = (a21, a22)

     b11, b12
B = (b21, b22)

C = A (x) B

           b11, b12          b11, b12
     a11 * (b21, b22), a12 * (b21, b22)
C = [                                  ]
           b11, b12          b11, b12
     a21 * (b21, b22), a22 * (b21, b22)
Or, unrolled:
     a11 * b11, a11 * b12, a12 * b11, a12 * b12
     a11 * b21, a11 * b22, a12 * b21, a12 * b22
C = (a21 * b11, a21 * b12, a22 * b11, a22 * b12)
     a21 * b21, a21 * b22, a22 * b21, a22 * b22
The tensor product can be implemented in NumPy using the tensordot() function.
The function takes as arguments the two tensors to be multiplied and the axes over which to sum the products, called the sum reduction. To calculate the tensor product, also called the tensor dot product in NumPy, the axes argument must be set to 0.
In the example below, we define two order-1 tensors (vectors) and calculate the tensor product.
# tensor product
from numpy import array
from numpy import tensordot
A = array([1,2])
B = array([3,4])
C = tensordot(A, B, axes=0)
print(C)
Running the example prints the result of the tensor product.
The result is an order-2 tensor (matrix) with shape 2×2.
[[3 4]
 [6 8]]
The tensor product is the most common form of tensor multiplication that you may encounter, but many other types of tensor multiplication exist, such as the tensor dot product and the tensor contraction.
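For contrast, setting the axes argument of tensordot() to 1 sums over one pair of axes (a contraction), which for two matrices recovers the ordinary matrix product. A small sketch with illustrative values:

```python
from numpy import array, tensordot

A = array([[1, 2], [3, 4]])
B = array([[5, 6], [7, 8]])
# axes=1 contracts the last axis of A with the first axis of B
C = tensordot(A, B, axes=1)
print(C)  # same result as A.dot(B)
```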
Extensions
This section lists some ideas for extending the tutorial that you may wish to explore.
- Update each example using your own small contrived tensor data.
- Implement three other types of tensor multiplication not covered in this tutorial with small vector or matrix data.
- Write your own functions to implement each tensor operation.
If you explore any of these extensions, I’d love to know.
Further Reading
This section provides more resources on the topic if you are looking to go deeper.
Books
- A Student’s Guide to Vectors and Tensors, 2011.
- Chapter 12, Special Topics, Matrix Computations, 2012.
- Tensor Algebra and Tensor Analysis for Engineers, 2015.
Articles
- Tensor algebra on Wikipedia
- Tensor on Wikipedia
- Tensor product on Wikipedia
- Outer product on Wikipedia
Other
- Fundamental Tensor Operations for Large-Scale Data Analysis in Tensor Train Formats, 2016.
- Tensor product, direct sum, Quantum Mechanics I, 2006.
- Tensorphobia and the Outer Product, 2016.
- The Tensor Product, 2011.
Summary
In this tutorial, you discovered what tensors are and how to manipulate them in Python with NumPy.
Specifically, you learned:
- That tensors are a generalization of matrices and are represented using n-dimensional arrays.
- How to implement element-wise operations with tensors.
- How to perform the tensor product.
Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.
Hi Jason!
Very nice, simple and well detailed introduction to one of the key mathematical tools for deep learning. I think any amateur in tensor could easily take over from here.
Thanks.
Hello Jason
This is a fantastic introduction to tensors. Very quick read-through for beginners like me. Very Helpful.
I’m glad it helped.
Thank you for your blog, which is very helpful. But I have a general question. Why do we need tensors in deep learning. Why not just use Numpy arrays?
You can develop your own library using numpy arrays.
Tensors are simply a generalisation of matrices.
“Given a tensor A with q dimensions and tensor B with r dimensions, the product of these tensors will be a new tensor with the order of q + r or, said another way, q + r dimensions.”
Maybe… q*r instead?
Good tutorial, with a very clear definition. I think the tensor dot product is probably the most tricky of the operators as you provide a few examples for low dimensions but don’t really provide the general formula for order n by order m. I think it would also be helpful to relate what tensor are used for when representing concepts for deep learning.
Thanks Victor.
Thank you, well-summarized!
I’m glad it helped.
Nice stuff but I wish you had decompositions and other things as well. Thanks, it is well-written.
What do you mean by decompositions?
Do you mean matrix factorization?
https://machinelearningmastery.com/introduction-to-matrix-decompositions-for-machine-learning/
This is totally different from matrix multiplication. In a matrix, the dimension is defined as A m×n, where the matrix A has m rows and n columns.
Hi Jason!
I have one question about tensor conversion. I am using an attention mechanism, and I must do my operations in a for loop so that I store my results in a list. At the end, I cannot convert the list into a tensor in order to connect the results with dense layers. Can you suggest anything to overcome this problem?
A list or a numpy array can represent a tensor.
I think you might mean a Tensor data type for a given library? Perhaps check the library API on how to convert lists and arrays to that type?
Very nice tutorial.
I am no expert in math, but isn’t a vector a special type of tensor, not the other way around?
Thanks.
Not really, but it could be framed that way.
Well explained. And easy to understand!
Thanks, I’m glad it helped.
Hi Jason,
You said that “For this 3D tensor, axis 0 specifies the level, axis 1 specifies the column, and axis 2 specifies the row.”
But I think I should be:
For this 3D tensor, axis 0 specifies the level, axis 1 specifies the row, and axis 2 specifies the column.
A = array([
[[1,2,3], [4,5,6], [7,8,9]],
[[11,12,13], [14,15,16], [17,18,19]],
[[21,22,23], [24,25,26], [27,28,29]]
])
With zero-based indexing, we will have:
print(A[0,0,0]) –> 1: Level 0, Row 0, Column 0
print(A[0,0,1]) –> 2: Level 0, Row 0, Column 1
print(A[0,1,0]) –> 4: Level 0, Row 1, Column 0
Correct me if I wrong.
Thanks
Thanks.
Also try this:
Which prints:
[1 2 3]
[1 4 7]
In all the addition, subtraction, product, and division examples, I see this:
b111, b121, t131
B = (b211, t221, t231)
Should the “t” be “b”? I am totally new in tensor and this is the first time I am learning it.
Looks like a typo, thanks.
Fixed.
Thanks for this. I’m still confused, as other explanations mention that tensors have extra properties that are not captured by the idea that it’s just a generalization of matrices:
“But [the generalized matrix] description misses the most important property of a tensor!
A tensor is a mathematical entity that lives in a structure and interacts with other mathematical entities. If one transforms the other entities in the structure in a regular way, then the tensor must obey a related transformation rule.”
https://medium.com/@quantumsteinke/whats-the-difference-between-a-matrix-and-a-tensor-4505fbdc576c
Not sure about that…
Perhaps talk to the author about their ideas?
It seems computer scientists have borrowed this term from physicists / mathematicians and redefined it to mean a “multidimensional array”. Jason Brownlee points this out by even quoting from the “Deep Learning” book. But your confusion is warranted because this is not the definition that physicists use.
Physicists use the term tensor to mean a geometric object that remains invariant (i.e., it retains properties like length, direction, etc.) when a coordinate system changes.
It can be helpful to understand what is NOT a tensor. Suppose we focus on a single component in a vector. This component (a rank 0 tensor) will change when the underlying coordinate system changes. So a single component cannot be a tensor, even though it satisfies the definition of a multidimensional array.
For an understanding of tensors, I would suggest checking out eigenchris videos: https://www.youtube.com/watch?v=8ptMTLzV4-I&t=321s
Great note, thanks!
Sir, how do I do that sum using a for loop? Please explain.
What sum? What loop?
Thanks Jason! Very interesting. This tutorial helped me to understand the concepts. Very straightforward, great use of codes and charts. Well done!
Thanks!
Useful article, but it doesn’t describe what tensors represent in the machine learning domain. Do they represent training data, model itself, both, and / or other?
Thanks.
They can be used to represent data or model coefficients, e.g. weights in a neural net.
tensor product, use this ⊗
Thanks.
“In the example below, we define two order-1 tensors (vectors) with and calculate the tensor product.”
can you please explain how ” -1 ” came here ?
Read it as “order-one”, not negative one. E.g. one dimensional.
Why tensors?
Because it is the way we can hold an array of numbers together.
Thanks a million for this tutorial.
Recently I’m working on a problem in which each sample is a matrix, for example 10*10 (so the data set is a tensor with dimensions 10*10*1000).
I want to classify this data set, but the classification is not discrete.
So it may have to be a regression problem. I’m not sure, and I want to predict a matrix (the output of the classifier must be a matrix).
Is there any source, any book, etc. that could help me to solve this problem?
Your guidance on this matter would be appreciated.
Thanks a lot.
Hello Roz…while I cannot speak directly to your application, you may find the following of interest:
https://machinelearningmastery.com/softmax-activation-function-with-python/