An important machine learning method for dimensionality reduction is called Principal Component Analysis.
It is a method that uses simple matrix operations from linear algebra and statistics to calculate a projection of the original data into the same number or fewer dimensions.
In this tutorial, you will discover the Principal Component Analysis machine learning method for dimensionality reduction and how to implement it from scratch in Python.
After completing this tutorial, you will know:
- The procedure for calculating the Principal Component Analysis and how to choose principal components.
- How to calculate the Principal Component Analysis from scratch in NumPy.
- How to calculate the Principal Component Analysis for reuse on more data in scikit-learn.
Kick-start your project with my new book Linear Algebra for Machine Learning, including step-by-step tutorials and the Python source code files for all examples.
Let’s get started.
- Update Apr/2018: Fixed typo in the explanation of the sklearn PCA attributes. Thanks kris.

How to Calculate the Principal Component Analysis from Scratch in Python
Photo by mickey, some rights reserved.
Tutorial Overview
This tutorial is divided into 3 parts; they are:
- Principal Component Analysis
- Manually Calculate Principal Component Analysis
- Reusable Principal Component Analysis
Need help with Linear Algebra for Machine Learning?
Take my free 7-day email crash course now (with sample code).
Click to sign-up and also get a free PDF Ebook version of the course.
Principal Component Analysis
Principal Component Analysis, or PCA for short, is a method for reducing the dimensionality of data.
It can be thought of as a projection method where data with m columns (features) is projected into a subspace with m or fewer columns, whilst retaining the essence of the original data.
The PCA method can be described and implemented using the tools of linear algebra.
PCA is an operation applied to a dataset, represented by an n x m matrix A that results in a projection of A which we will call B. Let’s walk through the steps of this operation.
     a11, a12
A = (a21, a22)
     a31, a32

B = PCA(A)
The first step is to calculate the mean values of each column.
M = mean(A)
or
M = (m11, m12)
m11 = (a11 + a21 + a31) / 3
m12 = (a12 + a22 + a32) / 3
Next, we need to center the values in each column by subtracting the mean column value.
C = A - M
The next step is to calculate the covariance matrix of the centered matrix C.
Covariance is an unnormalized measure of the amount and direction (positive or negative) in which two columns change together; correlation is its normalized counterpart. A covariance matrix contains the covariance score of every column with every other column, including itself.
V = cov(C)
Finally, we calculate the eigendecomposition of the covariance matrix V. This results in a list of eigenvalues and a list of eigenvectors.
values, vectors = eig(V)
The eigenvectors represent the directions or components for the reduced subspace of B, whereas the eigenvalues represent the magnitudes for those directions.
The eigenvectors can be sorted by the eigenvalues in descending order to provide a ranking of the components or axes of the new subspace for A.
If all eigenvalues have a similar value, then we know that the existing representation may already be reasonably compressed or dense and that the projection may offer little. If there are eigenvalues close to zero, they represent components or axes of B that may be discarded.
A total of m or fewer components must be selected to comprise the chosen subspace. Ideally, we would select k eigenvectors, called principal components, that have the k largest eigenvalues.
B = select(values, vectors)
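As a concrete illustration of the selection step, here is a minimal NumPy sketch that sorts the eigenvalues in descending order and keeps the top k eigenvectors as the columns of B. The helper name select_components and the value of k are illustrative only, not part of NumPy.

from numpy import argsort

def select_components(values, vectors, k):
    # indices that sort the eigenvalues from largest to smallest
    order = argsort(values)[::-1]
    # eigenvectors are stored as columns; reorder them and keep the first k
    return vectors[:, order[:k]]

# e.g. keep only the single largest component
# B = select_components(values, vectors, 1)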
Other matrix decomposition methods, such as the Singular-Value Decomposition (SVD), can be used instead; for this reason, the values are generally referred to as singular values and the vectors of the subspace as principal components.
Once chosen, data can be projected into the subspace via matrix multiplication.
P = B^T . A
Where A is the original data that we wish to project, B^T is the transpose of the chosen principal components and P is the projection of A.
This is called the covariance method for calculating the PCA, although there are alternative ways to calculate it.
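As a rough sketch of the SVD route, assuming a data matrix with samples in rows that is centered first, the rows of V^T from the decomposition play the role of the principal directions; the relationship between the singular values and the eigenvalues is noted in the comments.

from numpy import array, mean
from numpy.linalg import svd

# define and center a small matrix
A = array([[1, 2], [3, 4], [5, 6]])
C = A - mean(A, axis=0)
# singular value decomposition of the centered data
U, s, Vt = svd(C)
# rows of Vt are the principal directions (components)
print(Vt)
# eigenvalues of the covariance matrix are s**2 / (n - 1)
print(s**2 / (C.shape[0] - 1))
# project the centered data onto the components
P = C.dot(Vt.T)
print(P)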
Manually Calculate Principal Component Analysis
There is no pca() function in NumPy, but we can easily calculate the Principal Component Analysis step-by-step using NumPy functions.
The example below defines a small 3×2 matrix, centers the data in the matrix, calculates the covariance matrix of the centered data, and then the eigendecomposition of the covariance matrix. The eigenvectors and eigenvalues are taken as the principal components and singular values and used to project the original data.
from numpy import array
from numpy import mean
from numpy import cov
from numpy.linalg import eig
# define a matrix
A = array([[1, 2], [3, 4], [5, 6]])
print(A)
# calculate the mean of each column
M = mean(A.T, axis=1)
print(M)
# center columns by subtracting column means
C = A - M
print(C)
# calculate covariance matrix of centered matrix
V = cov(C.T)
print(V)
# eigendecomposition of covariance matrix
values, vectors = eig(V)
print(vectors)
print(values)
# project data
P = vectors.T.dot(C.T)
print(P.T)
Running the example first prints the original matrix, then the eigenvectors and eigenvalues of the centered covariance matrix, followed finally by the projection of the original matrix.
Interestingly, we can see that only the first eigenvector is required, suggesting that we could project our 3×2 matrix onto a 3×1 matrix with little loss.
[[1 2]
 [3 4]
 [5 6]]

[[ 0.70710678 -0.70710678]
 [ 0.70710678  0.70710678]]

[ 8.  0.]

[[-2.82842712  0.        ]
 [ 0.          0.        ]
 [ 2.82842712  0.        ]]
Reusable Principal Component Analysis
We can calculate a Principal Component Analysis on a dataset using the PCA() class in the scikit-learn library. The benefit of this approach is that once the projection is calculated, it can be applied to new data again and again quite easily.
When creating the class, the number of components can be specified as a parameter.
The class is first fit on a dataset by calling the fit() function, and then the original dataset or other data can be projected into a subspace with the chosen number of dimensions by calling the transform() function.
Once fit, the eigenvalues and principal components can be accessed on the PCA class via the explained_variance_ and components_ attributes.
The example below demonstrates using this class by first creating an instance, fitting it on a 3×2 matrix, accessing the values and vectors of the projection, and transforming the original data.
# Principal Component Analysis
from numpy import array
from sklearn.decomposition import PCA
# define a matrix
A = array([[1, 2], [3, 4], [5, 6]])
print(A)
# create the PCA instance
pca = PCA(2)
# fit on data
pca.fit(A)
# access values and vectors
print(pca.components_)
print(pca.explained_variance_)
# transform data
B = pca.transform(A)
print(B)
Running the example first prints the 3×2 data matrix, then the principal components and values, followed by the projection of the original matrix.
We can see that, with some very minor floating-point rounding, we achieve the same principal components, eigenvalues, and projection as in the previous example.
[[1 2]
 [3 4]
 [5 6]]

[[ 0.70710678  0.70710678]
 [ 0.70710678 -0.70710678]]

[  8.00000000e+00   2.25080839e-33]

[[ -2.82842712e+00   2.22044605e-16]
 [  0.00000000e+00   0.00000000e+00]
 [  2.82842712e+00  -2.22044605e-16]]
Extensions
This section lists some ideas for extending the tutorial that you may wish to explore.
- Re-run the examples with your own small contrived matrix values.
- Load a dataset and calculate the PCA on it and compare the results from the two methods.
- Search for and locate 10 examples where PCA has been used in machine learning papers.
If you explore any of these extensions, I’d love to know.
Further Reading
This section provides more resources on the topic if you are looking to go deeper.
Books
- Section 7.3 Principal Component Analysis (PCA by the SVD), Introduction to Linear Algebra, Fifth Edition, 2016.
- Section 2.12 Example: Principal Components Analysis, Deep Learning, 2016.
Tutorials
- Principal Component Analysis with numpy, 2011.
- PCA and image compression with numpy, 2011.
- Implementing a Principal Component Analysis (PCA), 2014.
Summary
In this tutorial, you discovered the Principal Component Analysis machine learning method for dimensionality reduction.
Specifically, you learned:
- The procedure for calculating the Principal Component Analysis and how to choose principal components.
- How to calculate the Principal Component Analysis from scratch in NumPy.
- How to calculate the Principal Component Analysis for reuse on more data in scikit-learn.
Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.
Great article! I have been more of an R programmer in the past but have started to mess with Python. Python is a very versatile language and has started to draw my attention over the last few months.
Thanks John. I’m a big fan of Python myself these days.
Hi Jason,
This was fantastic explanation, thank you!
You’re welcome!
Hello Jason, it’s very nice you are doing great work and I request you to make such a post on ISOMAP Dimensionality Reduction too..
Thanks for the suggestion.
Hello
Could you make a post on the Scree plot ?
Thank you
Thanks for the suggestion John.
Is there any direct relation between SVD and PCA since both perform dimensionality reduction?
Yes, they both can be used for dimensionality reduction.
Can we apply this for loaded file .csv format?
Yes.
Hi, I have one doubt. What happens if we set n_components=d, where d is the number of dimensions? Does it denoise the data? Because it can’t reduce the dimensions.
It will do something, likely something not useful.
Hi Jason, thanks for the great work you are doing with your blog!
I think the attribute “explained_variance_” of the PCA class from scikit-learn returns the eigenvalues and not the singular values as you mention in the section “Reusable Principal Component Analysis”. For the singular values there is another attribute which is “singular_values_”. Correct?
Also, “single values” should read “eigenvalues” in the sentence “…that we achieve the same principal components, singular values, and projection as in…”. Correct?
Correct, fixed.
Thanks for pointing out the typo!
Hello teacher. Can you help me? I want to know how to implement a CPA?
What is CPA?
I´m sorry. I mean PCA
I think he has explained that in the tutorial.
Hi Jason,
Is there similar support for R or Matlab users? I’m trying to find a workshop / training in this area, if you could recommend anything that may help.
I don’t know sorry.
Great post!
I found a typo: In the initial explanation, it’s said:
P = B^T . A
In the manual calculation:
P = vectors.T.dot(C.T)
Which one is correct? The original A or the mean-centered C?
No typo, perhaps confusing explanation.
B == vectors (components)
A == C (centered data to project)
When I copy the code from section “Reusable Principal Component Analysis” and run in a Jupyter notebook with a Python3.6 kernel, I get a different output to what is shown on site.
The values for the Eigenvectors and Matrix B are the same but the polarity is not the same.
Any idea what is causing the mismatch?
[[1 2]
[3 4]
[5 6]]
[[ 0.70710678 0.70710678]
[-0.70710678 0.70710678]]
[8. 0.]
[[-2.82842712e+00 -2.22044605e-16]
[ 0.00000000e+00 0.00000000e+00]
[ 2.82842712e+00 2.22044605e-16]]
Yes, I address this in the post.
Minor differences and differences in sign can occur due to differences across platforms from multiple runs of the solver (used under the covers).
These matrix operations require converging on a solution; they are not entirely deterministic like simple arithmetic, so we are approximating.
Hi Jason,
Is there any way to get PCs with same polarity and order?
Sort them by magnitude and ignore sign.
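For example, here is a hedged sketch of one such convention, applied to the values and vectors returned by eig() in the manual example above: sort the components by eigenvalue and flip each eigenvector so that its largest-magnitude entry is positive. The sign convention is only an illustrative choice, not something the solver guarantees.

from numpy import abs, argmax, argsort, sign

# sort components by eigenvalue, largest first
order = argsort(values)[::-1]
values, vectors = values[order], vectors[:, order]
# flip each eigenvector so its largest-magnitude entry is positive
for i in range(vectors.shape[1]):
    j = argmax(abs(vectors[:, i]))
    vectors[:, i] *= sign(vectors[j, i])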
Is there a way to store the PCA model after fit() during training and reuse that model later (by loading from saved file) on live data ?
Yes, you can save the elements to file in plain text or as pickled python objects.
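For example, a minimal sketch using Python's standard pickle module; the filename pca.pkl and the toy data are illustrative only:

import pickle
from numpy import array
from sklearn.decomposition import PCA

# fit the transform on training data
pca = PCA(2).fit(array([[1, 2], [3, 4], [5, 6]]))

# save the fitted transform to file
with open('pca.pkl', 'wb') as f:
    pickle.dump(pca, f)

# later, load it back and apply it to new data
with open('pca.pkl', 'rb') as f:
    pca = pickle.load(f)
print(pca.transform(array([[2, 3]])))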
Hi Jason
While computing the mean, shouldn’t the axis be equal to 0 rather than 1, since each dimension or feature must be averaged rather than each data point?
I believe 0 would be row-wise and 1 column-wise.
This is not from scratch at all. Calculating the covariance matrix and its eigenvalue decomposition is an important part, which this tutorial skips totally.
Thanks for the note, more on covar here:
https://machinelearningmastery.com/introduction-to-expected-value-variance-and-covariance/
More on eigendecomposition here:
https://machinelearningmastery.com/introduction-to-eigendecomposition-eigenvalues-and-eigenvectors/
Dude this is still not from scratch. You just explain what eigenvectors and eigenvalues are then use a toolbox to do the dirty work for you. Can you please explain the details of finding the eigenvectors?
Sure, see this post:
https://machinelearningmastery.com/introduction-to-eigendecomposition-eigenvalues-and-eigenvectors/
HI Jason,
I have a doubt: are you saying that PCA with eigenvectors and PCA with SVD are different? Or did I understand it wrong?
Secondly, can we use them together?
PCA and SVD are different.
Hi Jason,
Can you extend PCA and Hotelling’s T^2 for confidence interval in python.
Thanks,
Venkat
Sorry, what are you referring to exactly?
Hi Jason, I found that extracting the top PCA components explaining 90% of the variance boosts my h2o.deeplearning model to a large degree, reaching +99% overall accuracy, AUC, tpr and npr. It is so good once the model is applied to my test set that it looks unreal (basically only one misprediction out of 1k+ observations in my confusion matrix). I am not versant with the orthogonal transformations underlying PCA, but I was wondering: would PCA be the cause of overfitting on my data set? How is it possible to get such an amazing result? How reliable would my model be over future and unseen observations?
Thanks
Yes, the transform must be calculated on the train dataset only, then applied to train and test sets.
I see what you mean. Thanks!
Could you please explain more about pca.fit() and pca.transform what exactly is happening when we call these two ?
Great question, fit is converging on a solution, e.g. finding the eigenvectors and eigenvalues.
It might help to check the API documentation.
What is the difference between Split Zone design and Split Plot design?
I have not heard these terms before, sorry.
What is the content?
Amazing description Sir, but in the manual computation of PCA I am working with a different dataset that has 1140 eigenvectors, and I want only 100 of them, corresponding to their eigenvalues. So, how do I choose the components and form the feature vector?
Perhaps choose the 100 largest?
I am still confused by this. Could you give me an explanation of what ".T" does in this code?
V = cov(C.T)
Transpose.
https://en.wikipedia.org/wiki/Transpose
Hi Jason,
have you ever tried PCA on existing data sets?
Like UC Merced LandUse or AID?
I want to calculate PCA on features of these data sets extracted with some pretrained CNN (the dimensions of the feature vectors are 100,000+).
Do you recommend it, and how?
I have, and I believe I have tutorials on it:
https://machinelearningmastery.com/feature-selection-machine-learning-python/
Thank you very much for your answer
You’re welcome.
Hi Jason,
Really nice Blog.
But I don’t understand why you’d need to transpose the centered matrix to calculate the covariance matrix.
# calculate covariance matrix of centered matrix
V = cov(C.T)
Yes, it could be simpler, thanks.
Hi Jason,
One more thing which I don’t understand is why the sign & order of principal components are different than PCs that obtained from scikit-learn PCA?
[[ 0.70710678 0.70710678]
[ 0.70710678 -0.70710678]]
[[ 0.70710678 -0.70710678]
[ 0.70710678 0.70710678]]
Different numerical solvers used under the covers – you can ignore the sign/order.
Dear Jason,
Thank you very much for this useful article.
A small note about the centering:
”
# center columns by subtracting column means
C = A – M
print(C)
# calculate covariance matrix of centered matrix
V = cov(C.T)
”
I guess that there is no need to center A, when we calculate the covariance.
cov(C.T) = cov(A.T)
However, it could be helpful for the readers to calculate the covariance from C:
V = np.matmul(C.T, C) / (C.shape[0] - 1)
Nice, thanks!
Hi Jason,
Very useful article.
How do I make a prediction for a single row with a model trained on data after a PCA transformation?
Do I have to apply the PCA transformation to this new row as well, which seems senseless?
Thanks in advance
Use a pipeline that has the pca and model in it, fit on all data, then call predict.
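For example, a minimal sketch of that idea; the LogisticRegression model and the toy data are illustrative choices only:

from numpy import array
from sklearn.pipeline import Pipeline
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

X = array([[1, 2], [3, 4], [5, 6], [7, 8]])
y = array([0, 0, 1, 1])

# the pipeline applies the PCA transform before the model, for both fit and predict
pipeline = Pipeline([('pca', PCA(n_components=1)), ('model', LogisticRegression())])
pipeline.fit(X, y)

# a single new row is transformed and predicted in one call
print(pipeline.predict(array([[2, 3]])))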
Thank you very much!
In the discussion, you said we need to use B = select(values, vectors) to select the k largest values and vectors, but how can I set the select value? How can I define the code for something like k = 10?
Perhaps test different values for your dataset.
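One hedged way to guide that choice with the scikit-learn PCA class is to inspect the cumulative explained variance ratio and pick the smallest k that retains, say, 95% of the variance; the 0.95 threshold and the toy data are illustrative only:

from numpy import array, cumsum
from sklearn.decomposition import PCA

# illustrative data; replace with your own training matrix
X = array([[1, 2, 3], [3, 4, 7], [5, 6, 1], [7, 8, 2]])

pca = PCA().fit(X)
cumulative = cumsum(pca.explained_variance_ratio_)
# smallest number of components explaining at least 95% of the variance
k = int((cumulative >= 0.95).argmax()) + 1
print(k, cumulative)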
Thanks for the great tutorial, but I have a question regarding how to get the new data from PCA1 and PCA2 to use in another machine learning algorithm.
Sorry, I don’t understand. What do you mean exactly?
When calculating the mean, axis=1 calculates the mean row-wise. I believe it should be axis=0.
Note we calculate the mean on A.T not A.
Hi Jason,
Great post as usual!
If I train a model using the complete train data set, then I test it on unseen test data set, I get to some accuracy and recall results.
If I do the same training on the 3 principal components version of the same train data set, then I test it on the 3 principal components version of the same unseen test data set, then I get to different accuracy and recall results (these are better results).
Despite the temptation of having better accuracy results, I suppose that this improvement was circumstantial, so I guess we should use the complete data set (not the reduced PC version) because it represents the complete data variability, while PCA is a projection of the same data. I guess that in the long run we will have more consistent results in the complete data set.
What do you think?
Thanks!
You cannot “test” a model on new data where you do not have the target values.
You can train a model on all data and make predictions on new data, but you cannot calculate a score, as you will not have the targets. You will already know the score of the model from your test harness (e.g. cross-validation, etc.)
Yes , I agree.
I should’ve said hold-out data rather than new unseen data, because I do have the target values for those hold-out data.
In the concrete case I am working, I have 22 days of data. I used the first 21 days for training (80%) and validation (20%) with great results in accuracy & recall in all the cross-validations.
Then I used hold-out data from day 22 for testing, and that’s where I got a terrible accuracy and recall.
After that is when I tried using PCAs data rather than regular data. Since I got (apparently) better accuracy results with PCs, I felt somehow that it wasn’t correct, so that’s why I sent my previous post.
Thank you for your patience in answering all these comments.
Jose
Right.
For PCA, you can prepare or fit the transform on the train set then apply it to the train and test sets, just like scaling and other transforms. That would be the appropriate way to use it to avoid data leakage.
Also, for time series, consider using walk-forward validation.
Does that help?
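To make the PCA part concrete, here is a minimal sketch of fitting the transform on the training split only and then applying it to both splits; the arrays are illustrative only:

from numpy import array
from sklearn.decomposition import PCA

X_train = array([[1, 2], [3, 4], [5, 6]])
X_test = array([[2, 3], [4, 5]])

# fit the transform on the training data only
pca = PCA(n_components=1).fit(X_train)

# apply the same fitted transform to both splits, avoiding data leakage
X_train_pca = pca.transform(X_train)
X_test_pca = pca.transform(X_test)
print(X_train_pca)
print(X_test_pca)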
Yes Jason! Thank you!
I was already following your suggestion on PCA about fit transform on training set and apply it to test set to keep data transformations consistent.
Thank you also for your post https://machinelearningmastery.com/backtest-machine-learning-models-time-series-forecasting/ on walk-forward validation. I always learn something more with your posts.
I understand that PCA is often used to make data easy to explore and visualize. The concern in my initial question was about the convenience or correctness of using PC data (small number of features) for training and predicting instead of using the original data set (large number of features). If you have a comment on this last point I appreciate.
Thank you again
If it results in better performance, use it.
Thank you so much. I spent a whole day learning PCA (matrices, null space, correlation, covariance, eigenvectors, etc.) and finally got here. This is the best; it connected the abstract theory to concrete reality. Without this practice, I think I could never really understand it.
Thanks, well done on your progress!
Hey Jason! Thanks for this tutorial. I applied PCA on the iris dataset and chose 2 components; I did it manually and also using the sklearn library. But the signs of my 2nd component values are flipped from positive to negative and vice versa compared to the sklearn result. Is that an issue?
Yes, the signs can change, this is to be expected.
I believe this is discussed in the above tutorial.
Sir!
Can you please explain PCA with some example like iris or another? I mean loading the file from csv, then splitting the vectors and labels, doing PCA on the vectors, then concatenating the PCA vectors and labels, and storing back to excel.
regards
Perhaps this example will help:
https://machinelearningmastery.com/principal-components-analysis-for-dimensionality-reduction-in-python/
If you need help loading a dataset, see this:
https://machinelearningmastery.com/load-machine-learning-data-python/
sir is we include components which have higher value of correlation for classification or which have lesser value of co-relation components
Sorry, I don’t understand your question, perhaps you could elaborate?
How can I know the features selected by PCA?
PCA does not select features, it creates new features from the data.
Dear Dr. Jason,
Thank you a Lot for all your work.
I have a different case where I want to use a dimensionality reduction model.
In fact, I have a dataset with 40 features where 25 are categorical (nominal) features.
The space created if we one-hot encode them is giant: it needs 900 GiB to allocate.
Is there any method to do dimensionality reduction for categorical variables "before" one-hot encoding them?
Best Regards,
Good question, I’m not sure off the cuff. I recommend checking the literature. I bet there is a version of PCA that supports categorical inputs!
The problem is that I want to do the reduction before transforming with the one-hot encoder.
I found the hash encoder in the literature. Do you advise working with it?
Best Regards,
No, sorry. Perhaps try it and compare results to other methods.
Thank you so Much…….. Great
This is a great tutorial, and I will share it with my students. But I don’t think you need to subtract the mean to compute the covariance; the covariance calculation does that automatically.
Thank you for the feedback Chris!
Sorry, I see now you centered the data so that you could project the centered data vectors onto the eigenvectors. It might be clearer if you do the centering at the end, so you don’t leave the impression that centering is necessary to compute PCA.
Thank you for the feedback Chris!