Last Updated on August 9, 2019

Matrix decompositions are a useful tool for reducing a matrix to its constituent parts in order to simplify a range of more complex operations.

Perhaps the most used type of matrix decomposition is the eigendecomposition, which decomposes a matrix into eigenvectors and eigenvalues. This decomposition also plays a role in methods used in machine learning, such as the Principal Component Analysis method, or PCA.

In this tutorial, you will discover the eigendecomposition, eigenvectors, and eigenvalues in linear algebra.

After completing this tutorial, you will know:

- What an eigendecomposition is and the role of eigenvectors and eigenvalues.
- How to calculate an eigendecomposition in Python with NumPy.
- How to confirm a vector is an eigenvector and how to reconstruct a matrix from eigenvectors and eigenvalues.

**Kick-start your project** with my new book Linear Algebra for Machine Learning, including *step-by-step tutorials* and the *Python source code* files for all examples.

Let’s get started.

## Tutorial Overview

This tutorial is divided into 5 parts; they are:

- Eigendecomposition of a Matrix
- Eigenvectors and Eigenvalues
- Calculation of Eigendecomposition
- Confirm an Eigenvector and Eigenvalue
- Reconstruct Original Matrix

### Need help with Linear Algebra for Machine Learning?

Take my free 7-day email crash course now (with sample code).

Click to sign-up and also get a free PDF Ebook version of the course.

## Eigendecomposition of a Matrix

Eigendecomposition of a matrix is a type of decomposition that involves decomposing a square matrix into a set of eigenvectors and eigenvalues.

One of the most widely used kinds of matrix decomposition is called eigendecomposition, in which we decompose a matrix into a set of eigenvectors and eigenvalues.

— Page 42, Deep Learning, 2016.

A vector is an eigenvector of a matrix if it satisfies the following equation.

```
A . v = lambda . v
```

This is called the eigenvalue equation, where A is the parent square matrix that we are decomposing, v is an eigenvector of the matrix, and lambda is a lowercase Greek letter representing the eigenvalue, a scalar.

Or without the dot notation.

```
Av = lambda v
```

A square matrix can have one eigenvector and one eigenvalue for each of its dimensions. Not all square matrices can be decomposed into eigenvectors and eigenvalues, and some can only be decomposed in a way that requires complex numbers. The parent matrix can be shown to be a product of the eigenvectors and eigenvalues.

```
A = Q . diag(V) . Q^-1
```

Or, without the dot notation.

```
A = Q diag(V) Q^-1
```

Where Q is a matrix comprised of the eigenvectors, diag(V) is a diagonal matrix comprised of the eigenvalues along the diagonal (sometimes represented with a capital lambda), and Q^-1 is the inverse of the matrix comprised of the eigenvectors.

However, we often want to decompose matrices into their eigenvalues and eigenvectors. Doing so can help us to analyze certain properties of the matrix, much as decomposing an integer into its prime factors can help us understand the behavior of that integer.

— Page 43, Deep Learning, 2016.

Eigen is not a name, e.g. the method is not named after “Eigen”; eigen (pronounced eye-gan) is a German word that means “own” or “innate”, as in belonging to the parent matrix.

A decomposition operation does not result in a compression of the matrix; instead, it breaks the matrix down into constituent parts to make certain operations on it easier to perform. Like other matrix decomposition methods, eigendecomposition is used as an element to simplify the calculation of other, more complex matrix operations.

Almost all vectors change direction, when they are multiplied by A. Certain exceptional vectors x are in the same direction as Ax. Those are the “eigenvectors”. Multiply an eigenvector by A, and the vector Ax is the number lambda times the original x. […] The eigenvalue lambda tells whether the special vector x is stretched or shrunk or reversed or left unchanged – when it is multiplied by A.

— Page 289, Introduction to Linear Algebra, Fifth Edition, 2016.

Eigendecomposition can also be used to calculate the principal components of a matrix in the Principal Component Analysis method, or PCA, which can be used to reduce the dimensionality of data in machine learning.
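As a minimal sketch of this connection, we can center a small data matrix, form the covariance matrix of its features, and take the eigendecomposition of that covariance matrix; the eigenvectors are the principal directions. The data matrix X below is an illustrative choice, not from the tutorial.

```python
# sketch: principal components via eigendecomposition of the covariance matrix
from numpy import array, cov, mean
from numpy.linalg import eig

# small illustrative data matrix: 3 samples, 2 features
X = array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
# center the columns of the data
C = X - mean(X, axis=0)
# covariance matrix of the features
V = cov(C.T)
# the eigenvectors of V are the principal directions
values, vectors = eig(V)
# project the centered data onto the principal directions
P = C.dot(vectors)
print(values)
print(P)
```

The eigenvalues report how much variance each direction carries, which is what PCA uses to decide which components to keep.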

## Eigenvectors and Eigenvalues

Eigenvectors are unit vectors, which means that their length or magnitude is equal to 1.0. They are often referred to as right vectors, which simply means column vectors (as opposed to row vectors or left vectors). A right vector is a vector as we usually understand it.

Eigenvalues are coefficients applied to eigenvectors that give the vectors their length or magnitude. For example, a negative eigenvalue may reverse the direction of the eigenvector as part of scaling it.
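A small sketch of this reversal, using an illustrative 2×2 matrix that has one negative eigenvalue:

```python
# sketch: a negative eigenvalue reverses the direction of its eigenvector
from numpy import array, argmin
from numpy.linalg import eig

A = array([[0, 1], [1, 0]])
values, vectors = eig(A)
print(values)
# pick the eigenvector paired with the negative eigenvalue
i = argmin(values)
v = vectors[:, i]
print(A.dot(v))  # equal to -1 * v: the direction is reversed
```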

A matrix that has only positive eigenvalues is referred to as a positive definite matrix, whereas if the eigenvalues are all negative, it is referred to as a negative definite matrix.
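This test can be sketched directly from the definition; the helper name and the two diagonal matrices below are illustrative choices of my own, not from the tutorial:

```python
# sketch: classify a matrix by the signs of its eigenvalues
from numpy import array
from numpy.linalg import eigvals

def is_positive_definite(M):
    # all eigenvalues strictly positive
    return all(v > 0 for v in eigvals(M))

A = array([[2, 0], [0, 3]])    # eigenvalues 2 and 3
B = array([[-2, 0], [0, -3]])  # eigenvalues -2 and -3
print(is_positive_definite(A))  # True
print(is_positive_definite(B))  # False
```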

Decomposing a matrix in terms of its eigenvalues and its eigenvectors gives valuable insights into the properties of the matrix. Certain matrix calculations, like computing the power of the matrix, become much easier when we use the eigendecomposition of the matrix.

— Page 262, No Bullshit Guide To Linear Algebra, 2017
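For example, powers of a matrix become simple with the decomposition, because A^n = Q . diag(V)^n . Q^-1 and only the diagonal entries need to be raised to the power. A sketch with an illustrative 2×2 matrix:

```python
# sketch: matrix power via the decomposition, A^3 = Q . diag(V)^3 . Q^-1
from numpy import array, diag
from numpy.linalg import eig, inv, matrix_power

A = array([[2, 0], [1, 3]])
values, vectors = eig(A)
Q = vectors
L = diag(values)
# only the diagonal entries need to be raised to the power
A_cubed = Q.dot(L ** 3).dot(inv(Q))
print(A_cubed)
print(matrix_power(A, 3))  # the same result computed directly
```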

## Calculation of Eigendecomposition

An eigendecomposition is calculated on a square matrix using an efficient iterative algorithm, the details of which are beyond the scope of this tutorial.

Often an eigenvalue is found first, then an eigenvector is found by solving the eigenvalue equation for its coefficients.

The eigendecomposition can be calculated in NumPy using the eig() function.

The example below first defines a 3×3 square matrix. The eigendecomposition is calculated on the matrix returning the eigenvalues and eigenvectors.

```python
# eigendecomposition
from numpy import array
from numpy.linalg import eig
# define matrix
A = array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print(A)
# calculate eigendecomposition
values, vectors = eig(A)
print(values)
print(vectors)
```

Running the example first prints the defined matrix, followed by the eigenvalues and the eigenvectors. More specifically, the returned eigenvectors are the right eigenvectors and are normalized to unit length.

```
[[1 2 3]
 [4 5 6]
 [7 8 9]]

[ 1.61168440e+01  -1.11684397e+00  -9.75918483e-16]

[[-0.23197069 -0.78583024  0.40824829]
 [-0.52532209 -0.08675134 -0.81649658]
 [-0.8186735   0.61232756  0.40824829]]
```
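The unit length of the returned eigenvectors can be checked directly with the norm() NumPy function; a small sketch using the same matrix:

```python
# sketch: check that each returned eigenvector has unit length
from numpy import array
from numpy.linalg import eig, norm

A = array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
values, vectors = eig(A)
# each column of vectors is one eigenvector; its magnitude should be 1.0
for i in range(vectors.shape[1]):
    print(norm(vectors[:, i]))
```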

## Confirm an Eigenvector and Eigenvalue

We can confirm that a vector is indeed an eigenvector of a matrix.

We do this by multiplying the candidate eigenvector by the matrix and comparing the result with the same eigenvector multiplied by the eigenvalue.

First, we will define a matrix, then calculate the eigenvalues and eigenvectors. We will then test whether the first vector and value are in fact an eigenvalue and eigenvector for the matrix. We know they are, but it is a good exercise.

The eigenvectors are returned as a matrix with the same dimensions as the parent matrix, where each column is an eigenvector, e.g. the first eigenvector is vectors[:, 0]. Eigenvalues are returned as an array, where indices into the array are paired with eigenvectors by column index, e.g. the first eigenvalue at values[0] is paired with the first eigenvector at vectors[:, 0].

```python
# confirm eigenvector
from numpy import array
from numpy.linalg import eig
# define matrix
A = array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
# calculate eigendecomposition
values, vectors = eig(A)
# confirm first eigenvector
B = A.dot(vectors[:, 0])
print(B)
C = vectors[:, 0] * values[0]
print(C)
```

The example multiplies the original matrix with the first eigenvector and compares it to the first eigenvector multiplied by the first eigenvalue.

Running the example prints the results of these two multiplications that show the same resulting vector, as we would expect.

```
[ -3.73863537  -8.46653421 -13.19443305]
[ -3.73863537  -8.46653421 -13.19443305]
```
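Rather than comparing the printed vectors by eye, the check can be automated with the allclose() NumPy function, which compares arrays within a small floating point tolerance; a sketch on the same matrix:

```python
# sketch: confirm the eigenpair numerically with allclose()
from numpy import array, allclose
from numpy.linalg import eig

A = array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
values, vectors = eig(A)
# A . v should equal lambda . v within floating point tolerance
print(allclose(A.dot(vectors[:, 0]), values[0] * vectors[:, 0]))  # True
```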

## Reconstruct Original Matrix

We can reverse the process and reconstruct the original matrix given only the eigenvectors and eigenvalues.

First, the eigenvectors must be arranged into a matrix, where each vector becomes a column; this is exactly the format in which eig() returns them. The eigenvalues need to be arranged into a diagonal matrix. The NumPy diag() function can be used for this.

Next, we need to calculate the inverse of the eigenvector matrix, which we can achieve with the inv() NumPy function. Finally, these elements need to be multiplied together with the dot() function.

```python
# reconstruct matrix
from numpy import diag
from numpy import dot
from numpy.linalg import inv
from numpy import array
from numpy.linalg import eig
# define matrix
A = array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print(A)
# calculate eigenvectors and eigenvalues
values, vectors = eig(A)
# create matrix from eigenvectors
Q = vectors
# create inverse of eigenvectors matrix
R = inv(Q)
# create diagonal matrix from eigenvalues
L = diag(values)
# reconstruct the original matrix
B = Q.dot(L).dot(R)
print(B)
```

The example calculates the eigenvalues and eigenvectors again and uses them to reconstruct the original matrix.

Running the example first prints the original matrix, then the matrix reconstructed from eigenvalues and eigenvectors matching the original matrix.

```
[[1 2 3]
 [4 5 6]
 [7 8 9]]

[[ 1.  2.  3.]
 [ 4.  5.  6.]
 [ 7.  8.  9.]]
```

## Extensions

This section lists some ideas for extending the tutorial that you may wish to explore.

- Create 5 examples using each operation with your own data.
- Implement each matrix operation manually for matrices defined as lists of lists.
- Search machine learning papers and find 1 example of each operation being used.

If you explore any of these extensions, I’d love to know.

## Further Reading

This section provides more resources on the topic if you are looking to go deeper.

### Books

- Section 6.1 Eigenvalues and eigenvectors. No Bullshit Guide To Linear Algebra, 2017.
- Chapter 6 Eigenvalues and Eigenvectors, Introduction to Linear Algebra, Fifth Edition, 2016.
- Section 2.7 Eigendecomposition, Deep Learning, 2016.
- Chapter 5 Eigenvalues, Eigenvectors, and Invariant Subspaces, Linear Algebra Done Right, Third Edition, 2015.
- Lecture 24, Eigenvalue Problems, Numerical Linear Algebra, 1997.

### Articles

- eigen on Wiktionary
- Eigenvalues and eigenvectors
- Eigendecomposition of a matrix
- Eigenvalue algorithm
- Matrix decomposition

## Summary

In this tutorial, you discovered the eigendecomposition, eigenvectors, and eigenvalues in linear algebra.

Specifically, you learned:

- What an eigendecomposition is and the role of eigenvectors and eigenvalues.
- How to calculate an eigendecomposition in Python with NumPy.
- How to confirm a vector is an eigenvector and how to reconstruct a matrix from eigenvectors and eigenvalues.

Do you have any questions?

Ask your questions in the comments below and I will do my best to answer.
