Better Understand Your Data in R Using Descriptive Statistics

You must become intimate with your data.

Any machine learning models that you build are only as good as the data that you provide them. The first step in understanding your data is to actually look at some raw values and calculate some basic statistics.

In this post, you will discover how you can quickly get a handle on your dataset with descriptive statistics examples and recipes in R.

These recipes are perfect for you if you are a developer just getting started using R for machine learning.

Kick-start your project with my new book Machine Learning Mastery With R, including step-by-step tutorials and the R source code files for all examples.

Let’s get started.

  • Update Nov/2016: This tutorial assumes you have the mlbench and e1071 R packages installed. They can be installed by typing: install.packages(c("mlbench", "e1071"))

Understand Your Data in R Using Descriptive Statistics
Photo by Enamur Reza, some rights reserved.

You Must Understand Your Data

Understanding the data that you have is critically important.

You can run techniques and algorithms on your data, but it is not until you take the time to truly understand your dataset that you can fully understand the context of the results you achieve.

Better Understanding Equals Better Results

A deeper understanding of your data will give you better results.

Taking the time to study the data you have will help you in ways that are less obvious. You build an intuition for the data and for the entities that individual records or observations represent. This intuition can bias you towards specific techniques (for better or worse), but it can also inspire you.

For example, examining your data in detail may trigger ideas for specific techniques to investigate:

  • Data Cleaning. You may discover missing or corrupt data and think of various data cleaning operations to perform such as marking or removing bad data and imputing missing data.
  • Data Transforms. You may discover that some attributes have familiar distributions such as Gaussian or exponential giving you ideas of scaling or log or other transforms you could apply.
  • Data Modeling. You may notice properties of the data such as distributions or data types that suggest the use (or to not use) specific machine learning algorithms.

Use Descriptive Statistics

You need to look at your data. And you need to look at your data from different perspectives.

Inspecting your data will help you to build up your intuition and prompt you to start asking questions about the data that you have.

Looking at the data from multiple perspectives will challenge you to think about it differently, helping you to ask more and better questions.

Two methods for looking at your data are:

  1. Descriptive Statistics
  2. Data Visualization

The first and best place to start is to calculate basic descriptive statistics on your data.

You need to learn the shape, size, type and general layout of the data that you have.

Let’s look at some ways that you can summarize your data using R.

Need more Help with R for Machine Learning?

Take my free 14-day email course and discover how to use R on your project (with sample code).

Click to sign-up and also get a free PDF Ebook version of the course.

Summarize Data in R With Descriptive Statistics

In this section, you will discover 8 quick and simple ways to summarize your dataset.

Each method is briefly described and includes a recipe in R that you can run yourself or copy and adapt to your own needs.

1. Peek At Your Data

The very first thing to do is to just look at some raw data from your dataset.

If your dataset is small you might be able to display it all on the screen. Often it is not, so you can take a small sample and review that.
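
For example, here is a minimal recipe using the PimaIndiansDiabetes dataset from the mlbench package as an example; any loaded data frame will do:

# load the mlbench package and the example dataset
library(mlbench)
data(PimaIndiansDiabetes)
# display the first 20 rows of the dataset
head(PimaIndiansDiabetes, n=20)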

The head() function will display the first 20 rows of data for you to review and think about.

2. Dimensions of Your Data

How much data do you have? You may have a general idea, but it is much better to have a precise figure.

If you have a lot of instances, you may need to work with a smaller sample of the data so that model training and evaluation is computationally tractable. If you have a vast number of attributes, you may need to select those that are most relevant. If you have more attributes than instances you may need to select specific modeling techniques.
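
A minimal recipe, assuming the PimaIndiansDiabetes data frame from the first recipe is loaded:

# display the dimensions of the dataset (number of rows and columns)
dim(PimaIndiansDiabetes)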

This shows the rows and columns of your loaded dataset.

3. Data Types

You need to know the types of the attributes in your data.

This is invaluable. The types will indicate the types of further analysis, types of visualization and even the types of machine learning algorithms that you can use.

Additionally, perhaps some attributes were loaded as one type (e.g. integer) and could in fact be represented as another type (e.g. a categorical factor). Inspecting the types helps expose these issues and spark ideas early.
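
One simple way to do this is to apply the class() function to each column, for example:

# list the data type (class) of each attribute in the dataset
sapply(PimaIndiansDiabetes, class)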

This lists the data type of each attribute in your dataset.

4. Class Distribution

In a classification problem, you must know the proportion of instances that belong to each class value.

This is important because it may highlight an imbalance in the data that, if severe, may need to be addressed with rebalancing techniques. In the case of a multi-class classification problem, it may expose classes with few or zero instances that may be candidates for removal from the dataset.
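
A sketch of such a recipe, assuming the class attribute is the diabetes column of PimaIndiansDiabetes:

# summarize the class distribution as counts and percentages
y <- PimaIndiansDiabetes$diabetes
cbind(freq=table(y), percentage=prop.table(table(y))*100)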

This recipe creates a useful table showing the number of instances that belong to each class as well as the percentage that this represents from the entire dataset.

5. Data Summary

There is a most valuable function called summary() that summarizes each attribute in your dataset in turn.

The function creates a table for each attribute and lists a breakdown of values. Factors are described as counts next to each class label. Numerical attributes are described as:

  • Min
  • 25th percentile
  • Median
  • Mean
  • 75th percentile
  • Max

The breakdown also includes an indication of the number of missing values for an attribute (marked NA's).
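
For example:

# summarize each attribute in the dataset
summary(PimaIndiansDiabetes)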

You can see that this recipe produces a lot of information for you to review. Take your time and work through each attribute in turn.

6. Standard Deviations

One thing missing from the summary() function above is the standard deviation.

The standard deviation, along with the mean, is useful for knowing whether the data has a Gaussian (or nearly Gaussian) distribution. For example, it can be useful as a quick-and-dirty outlier removal tool, where any values that are more than three standard deviations from the mean lie outside of 99.7% of the data.
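
A minimal recipe, assuming the first 8 columns of PimaIndiansDiabetes hold the numeric attributes:

# calculate the standard deviation of each numeric attribute
sapply(PimaIndiansDiabetes[,1:8], sd)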

This calculates the standard deviation for each numeric attribute in the dataset.

7. Skewness

If a distribution looks kind-of-Gaussian but is pushed far left or right it is useful to know the skew.

Getting a feeling for the skew is much easier with plots of the data, such as a histogram or density plot. It is harder to tell from looking at means, standard deviations and quartiles.

Nevertheless, calculating the skew up front gives you a reference that you can use later if you decide to correct the skew for an attribute.
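
The skewness() function from the e1071 package can be applied to each numeric column, for example:

# calculate the skewness of each numeric attribute
library(e1071)
skew <- apply(PimaIndiansDiabetes[,1:8], 2, skewness)
print(skew)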

The further the skew value is from zero, the larger the skew to the left (negative skew value) or right (positive skew value).

8. Correlations

It is important to observe and think about how attributes relate to each other.

For numeric attributes, an excellent way to think about attribute-to-attribute interactions is to calculate correlations for each pair of attributes.
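
A minimal recipe using the cor() function on the numeric attributes:

# calculate pairwise correlations between the numeric attributes
correlations <- cor(PimaIndiansDiabetes[,1:8])
print(correlations)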

This creates a symmetrical table of all pairs of attribute correlations for numerical data. Deviations from zero show more positive or negative correlation. Values above 0.75 or below -0.75 are perhaps more interesting as they show a high correlation. Values of 1 and -1 show full positive or negative correlation.

More Recipes

This list of data summarization methods is by no means complete, but it is enough to quickly give you a strong initial understanding of your dataset.

One data summarization approach you could investigate beyond the recipes above is to calculate statistics for subsets of your data. Consider looking into the aggregate() function in R, sketched below.
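
For example, a minimal sketch (assuming PimaIndiansDiabetes is still loaded) that reports the mean of the mass attribute grouped by the diabetes class:

# mean body mass index (mass) for each level of the diabetes class
aggregate(mass ~ diabetes, data=PimaIndiansDiabetes, FUN=mean)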

Is there a data summarization recipe that you use that was not listed? Leave a comment below, I’d love to hear about it.

Tips To Remember

This section gives you some tips to remember when reviewing your data using summary statistics.

  • Review the numbers. Generating the summary statistics is not enough. Take a moment to pause, read and really think about the numbers you are seeing.
  • Ask why. Review your numbers and ask a lot of questions. How and why are you seeing specific numbers? Think about how the numbers relate to the problem domain in general and specific entities that observations relate to.
  • Write down ideas. Write down your observations and ideas. Keep a small text file or notepad and jot down all of the ideas for how variables may relate, for what numbers mean, and ideas for techniques to try later. The things you write down now while the data is fresh will be very valuable later when you are trying to think up new things to try.

You Can Summarize Your Data in R

You do not need to be an R programmer. Data summarization in R is very simple, as the recipes above can attest. If you are just getting started, you can copy and paste the recipes above and start learning how they work using the built-in help in R (for example: ?FunctionName).

You do not need to be good at statistics. The statistics used in this post are very simple, but you may have forgotten some of the basics. You can quickly browse Wikipedia for topics like Mean, Standard Deviation and Quartiles to refresh your knowledge.

For a related post, see: Crash Course in Statistics for Machine Learning.

You do not need your own datasets. Each example above uses a built-in dataset or a dataset provided by an R package. There are many interesting datasets in the datasets R package that you can investigate and play with. See the documentation for the datasets R package for more information.

Summary

In this post, you discovered the importance of describing your dataset before you start work on your machine learning project.

You discovered 8 different ways to summarize your dataset using R:

  1. Peek At Your Data
  2. Dimensions of Your Data
  3. Data Types
  4. Class Distribution
  5. Data Summary
  6. Standard Deviations
  7. Skewness
  8. Correlations

You also now have recipes that you can copy and paste into your project.

Action Step

Do you want to improve your skills with R or practice machine learning in R?

Work through each example above.

  1. Open the R interactive environment.
  2. Type or copy-paste each recipe and understand how it works.
  3. Dive deeper and use ?FunctionName to learn more about the specific functions used.

Report back and leave a comment, I’d love to hear how you went.

Do you have a question? Leave a comment and ask.

Discover Faster Machine Learning in R!

Master Machine Learning With R

Develop Your Own Models in Minutes

...with just a few lines of R code

Discover how in my new Ebook:
Machine Learning Mastery With R

Covers self-study tutorials and end-to-end projects like:
Loading data, visualization, building models, tuning, and much more...

Finally Bring Machine Learning To Your Own Projects

Skip the Academics. Just Results.

See What's Inside
