You must understand your data in order to get the best results from machine learning algorithms.

The fastest way to learn more about your data is to use data visualization.

In this post you will discover exactly how you can visualize your machine learning data in Python using Pandas.

Let’s get started.

## About The Recipes

Each recipe in this post is complete and standalone so that you can copy-and-paste it into your own project and use it immediately.

The Pima Indians dataset is used to demonstrate each plot. This dataset describes the medical records for Pima Indians and whether or not each patient will have an onset of diabetes within five years. As such it is a classification problem.

It is a good dataset for demonstration because all of the input attributes are numeric and the output variable to be predicted is binary (0 or 1).

The data is freely available from the UCI Machine Learning Repository and is downloaded directly as part of each recipe.


## Univariate Plots

In this section we will look at techniques that you can use to understand each attribute independently.

### Histograms

A fast way to get an idea of the distribution of each attribute is to look at histograms.

Histograms group data into bins and provide you with a count of the number of observations in each bin. From the shape of the bins you can quickly get a feeling for whether an attribute is Gaussian, skewed or even has an exponential distribution. It can also help you see possible outliers.

```python
# Univariate Histograms
import matplotlib.pyplot as plt
import pandas
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/pima-indians-diabetes/pima-indians-diabetes.data"
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
data = pandas.read_csv(url, names=names)
data.hist()
plt.show()
```

We can see that perhaps the *age*, *pedi* and *test* attributes may have an exponential distribution. We can also see that perhaps the *mass*, *pres* and *plas* attributes may have a Gaussian or nearly Gaussian distribution. This is interesting because many machine learning techniques assume a Gaussian univariate distribution on the input variables.
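As a quick numeric complement to eyeballing histogram shapes, pandas can report the skewness of each attribute. This is a sketch on made-up columns (the names and values are illustrative, not the Pima data): values near 0 suggest a symmetric, Gaussian-like shape, while large positive values suggest a right-skewed or exponential-looking distribution.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
# illustrative columns standing in for the Pima attributes
data = pd.DataFrame({
    'gaussian_like': rng.normal(100, 15, 1000),      # symmetric bell shape
    'exponential_like': rng.exponential(2.0, 1000),  # strong right skew
})
# skewness near 0 => roughly symmetric; large positive => right-skewed
print(data.skew())
```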

### Density Plots

Density plots are another way of getting a quick idea of the distribution of each attribute. The plots look like an abstracted histogram with a smooth curve drawn through the top of each bin, much like your eye tries to do with the histograms.

```python
# Univariate Density Plots
import matplotlib.pyplot as plt
import pandas
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/pima-indians-diabetes/pima-indians-diabetes.data"
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
data = pandas.read_csv(url, names=names)
data.plot(kind='density', subplots=True, layout=(3,3), sharex=False)
plt.show()
```

We can see that the distribution for each attribute is clearer than with the histograms.

### Box and Whisker Plots

Another useful way to review the distribution of each attribute is to use Box and Whisker Plots or boxplots for short.

Boxplots summarize the distribution of each attribute, drawing a line for the median (middle value) and a box around the 25th and 75th percentiles (the middle 50% of the data). The whiskers give an idea of the spread of the data, and dots outside of the whiskers show candidate outlier values (values more than 1.5 times the interquartile range beyond the edges of the box).

```python
# Box and Whisker Plots
import matplotlib.pyplot as plt
import pandas
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/pima-indians-diabetes/pima-indians-diabetes.data"
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
data = pandas.read_csv(url, names=names)
data.plot(kind='box', subplots=True, layout=(3,3), sharex=False, sharey=False)
plt.show()
```

We can see that the spread of attributes is quite different. Some like *age*, *test* and *skin* appear quite skewed towards smaller values.

## Multivariate Plots

This section shows examples of plots with interactions between multiple variables.

### Correlation Matrix Plot

Correlation gives an indication of how related the changes in two variables are. If two variables change in the same direction they are positively correlated. If they change in opposite directions (one goes up as the other goes down), then they are negatively correlated.

You can calculate the correlation between each pair of attributes. This is called a correlation matrix. You can then plot the correlation matrix and get an idea of which variables have a high correlation with each other.

This is useful to know, because some machine learning algorithms like linear and logistic regression can have poor performance if there are highly correlated input variables in your data.

```python
# Correlation Matrix Plot
import matplotlib.pyplot as plt
import pandas
import numpy
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/pima-indians-diabetes/pima-indians-diabetes.data"
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
data = pandas.read_csv(url, names=names)
correlations = data.corr()
# plot correlation matrix
fig = plt.figure()
ax = fig.add_subplot(111)
cax = ax.matshow(correlations, vmin=-1, vmax=1)
fig.colorbar(cax)
ticks = numpy.arange(0, 9, 1)
ax.set_xticks(ticks)
ax.set_yticks(ticks)
ax.set_xticklabels(names)
ax.set_yticklabels(names)
plt.show()
```

We can see that the matrix is symmetrical, i.e. the bottom left of the matrix is the same as the top right. This is useful as we can see two different views of the same data in one plot. We can also see that each variable is perfectly positively correlated with itself (as you would expect) along the diagonal from top left to bottom right.
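Both properties are easy to confirm numerically. A minimal sketch on a small made-up DataFrame (the columns here are illustrative, not the Pima attributes):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
x = rng.normal(size=200)
data = pd.DataFrame({
    'x': x,
    'y': 2 * x + rng.normal(size=200),  # strongly related to x
    'z': rng.normal(size=200),          # unrelated noise
})
corr = data.corr()
# the matrix is symmetric and its diagonal is exactly 1
print(corr.round(2))
```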

### Scatterplot Matrix

A scatterplot shows the relationship between two variables as dots in two dimensions, one axis for each attribute. You can create a scatterplot for each pair of attributes in your data. Drawing all these scatterplots together is called a scatterplot matrix.

Scatter plots are useful for spotting structured relationships between variables, like whether you could summarize the relationship between two variables with a line. Attributes with structured relationships may also be correlated and good candidates for removal from your dataset.

```python
# Scatterplot Matrix
import matplotlib.pyplot as plt
import pandas
from pandas.plotting import scatter_matrix
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/pima-indians-diabetes/pima-indians-diabetes.data"
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
data = pandas.read_csv(url, names=names)
scatter_matrix(data)
plt.show()
```

Like the Correlation Matrix Plot, the scatterplot matrix is symmetrical. This is useful for looking at the pair-wise relationships from different perspectives. Because there is little point in drawing a scatterplot of each variable with itself, the diagonal shows histograms of each attribute.
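If you spot such structured, near-linear relationships, you can follow up numerically. A hedged sketch (on made-up data, with an arbitrary 0.9 threshold rather than any rule from this post) that lists attribute pairs with high absolute correlation as candidates to consider removing:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
a = rng.normal(size=300)
data = pd.DataFrame({
    'a': a,
    'b': a + 0.1 * rng.normal(size=300),  # nearly a copy of 'a'
    'c': rng.normal(size=300),            # independent noise
})

corr = data.corr().abs()
# collect upper-triangle pairs above the (arbitrary) threshold
pairs = [(i, j, round(corr.loc[i, j], 3))
         for k, i in enumerate(corr.columns)
         for j in corr.columns[k + 1:]
         if corr.loc[i, j] > 0.9]
print(pairs)
```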

## Summary

In this post you discovered a number of ways that you can better understand your machine learning data in Python using Pandas.

Specifically, you learned how to plot your data using:

- Histograms
- Density Plots
- Box and Whisker Plots
- Correlation Matrix Plot
- Scatterplot Matrix

Open your Python interactive environment and try out each recipe.

Do you have any questions about Pandas or the recipes in this post? Ask in the comments and I will do my best to answer.

Hi Jason Brownlee,

Thanks for this post. Until now I was using different Python visualization libraries like matplotlib, plotly or seaborn to get more out of the data I had loaded into a pandas DataFrame. I was not aware that pandas itself could be used for visualization.

From now onwards I am gonna use your recipe for visualization.

I’m glad the post was useful saimadhu.

Hello Jason,

What can we deduce from the class variable box plot? Why and when do we get this kind of plot?

Great question naresh.

Box plots are great for getting a snapshot of the spread of the data and where the meat of the data is on the scale. They also quickly help you spot outliers (points outside the whiskers, i.e. more than 1.5 x IQR beyond the quartiles).
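That 1.5 x IQR rule is easy to sketch numerically on a made-up series (it is the same rule the boxplot whiskers use):

```python
import pandas as pd

s = pd.Series([1, 2, 2, 3, 3, 3, 4, 4, 5, 100])
q1, q3 = s.quantile(0.25), s.quantile(0.75)
iqr = q3 - q1
low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr
# values outside [low, high] are the candidate outliers the dots mark
outliers = s[(s < low) | (s > high)]
print(outliers.tolist())  # → [100]
```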

Hello Jason,

While I try to create a correlation matrix for my own dataset with 12 variables, only 7 variables get a colored matrix and the remaining 5 are white. I just changed "ticks=np.arange(0,12,1)" from 9 to 12:

```python
import numpy as np

names = ['PassId', 'Sur', 'Pclas', 'Name', 'Sex', 'Age', 'SibSp', 'Parch', 'Ticket', 'Fare', 'Cabin', 'Emb']
correlation = train.corr()
# create a correlation matrix
fig = plt.figure()
ax = fig.add_subplot(111)
cax = ax.matshow(correlation, vmin=-1, vmax=1)
fig.colorbar(cax)
ticks = np.arange(0, 12, 1)
ax.set_xticks(ticks)
ax.set_yticks(ticks)
ax.set_xticklabels(names)
ax.set_yticklabels(names)
plt.show()
```

The case is similar with the scatter plot. Could you please let me know where the issue is?

And one more thing: how do we decide which scatter plot is most valuable?

Great question naresh, I don’t know off the top of my head.

I would suggest looking into how to specify your own color lists to the function. Perhaps the limit is 6-7 defaults.

Hi Jason,

I am curious about how to make a plot for the probability from a multivariate logistic regression. Do you have any ideas or examples of doing that?

Thank you.

Not off hand sorry.

Consider trying a few different approaches and see what conveys the best understanding.

Also consider copying an approach used in an existing analysis.

Forgive my ignorance, but isn’t this using matplotlib to visualise data? Not Pandas?

Yes, matplotlib via pandas wrappers.
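A minimal sketch of what that means in practice (the DataFrame here is made up): the pandas plotting calls return ordinary matplotlib Axes objects, so anything matplotlib can do is still available.

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend, so no window is required
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

data = pd.DataFrame(np.random.rand(100, 2), columns=['a', 'b'])

# data.hist() is a thin wrapper over matplotlib: it returns an array of Axes
axes = data.hist()
axes.flat[0].set_title('custom title')  # customize with plain matplotlib calls
plt.close('all')
```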

Hi Jason,

Do you have a blog post that explains binary classification (and visualization) using categorical data? It would help me a lot.

I have a few in Python. Try the search.

Are you looking for something specific?

I have a feature set with categorical and continuous variables. After prediction (binary classification) I want to visualize how each feature, and combinations of features, resulted in the prediction. For example, I have categorical features like income range, gender and occupation, and age as a continuous feature. How did these features influence the prediction?

Often we give up the understandability of the predictions for better predictive skill in applied machine learning.

Generally, for deeper understanding of why predictions are being made, you can use linear models and give up some predictive skill.
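As an illustrative sketch of that trade (synthetic data, plain NumPy rather than any particular library): fit a logistic regression by gradient descent and read each learned coefficient's sign and size as that feature's direction and strength of influence on the prediction.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000
X = rng.normal(size=(n, 2))
# ground truth: feature 0 pushes the positive class up, feature 1 pushes it down
logits = 2.0 * X[:, 0] - 1.0 * X[:, 1]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(float)

# plain gradient descent on the logistic loss
w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= lr * (X.T @ (p - y)) / n
    b -= lr * np.mean(p - y)

# coefficient signs recover each feature's direction of influence
print(w)
```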