Classification And Regression Trees for Machine Learning

Decision Trees are an important type of algorithm for predictive modeling in machine learning.

The classical decision tree algorithms have been around for decades and modern variations like random forest are among the most powerful techniques available.

In this post you will discover the humble decision tree algorithm known by its more modern name, CART, which stands for Classification And Regression Trees. After reading this post, you will know:

  • The many names used to describe the CART algorithm for machine learning.
  • The representation used by learned CART models, which is what actually gets stored to disk.
  • How a CART model can be learned from training data.
  • How a learned CART model can be used to make predictions on unseen data.
  • Additional resources that you can use to learn more about CART and related algorithms.

If you have taken an algorithms and data structures course, it might be hard to hold you back from implementing this simple and powerful algorithm. And from there, you’re a small step away from your own implementation of Random Forests.

Let’s get started.

Classification And Regression Trees for Machine Learning
Photo by Wonderlane, some rights reserved.

Decision Trees

Classification and Regression Trees, or CART for short, is a term introduced by Leo Breiman to refer to decision tree algorithms that can be used for classification or regression predictive modeling problems.

Classically, these algorithms are referred to as “decision trees”, but on some platforms, like R, they are referred to by the more modern term CART.

The CART algorithm provides a foundation for important algorithms like bagged decision trees, random forest and boosted decision trees.


CART Model Representation

The representation for the CART model is a binary tree.

This is your binary tree from algorithms and data structures, nothing too fancy. The root node and each internal node represent a single input variable (x) and a split point on that variable (assuming the variable is numeric).

The leaf nodes of the tree contain an output variable (y) which is used to make a prediction.
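
As a rough sketch, a node in such a tree could be represented in code like this (the class and field names below are my own for illustration, not from any particular library):

# A minimal sketch of a CART tree node. Names are illustrative only.
class Node:
    def __init__(self, feature=None, threshold=None,
                 left=None, right=None, value=None):
        self.feature = feature      # index of the input variable (x) tested here
        self.threshold = threshold  # split point on that variable
        self.left = left            # subtree for inputs <= threshold
        self.right = right          # subtree for inputs > threshold
        self.value = value          # output (y) if this is a leaf node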

Given a dataset with two inputs (x) of height in centimeters and weight in kilograms and an output of sex as male or female, below is a crude example of a binary decision tree (completely fictitious, for demonstration purposes only).

Example Decision Tree

The tree can be stored to a file as a graph or as a set of rules. For example, below is the above decision tree as a set of rules.
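
As a sketch, the fictitious tree above might reduce to rules like these, expressed as code (the thresholds of 180 cm and 80 kg are invented purely for this example):

# Hypothetical rule set for the fictitious height/weight tree above.
# The thresholds (180 cm, 80 kg) are made up for illustration only.
def predict_sex(height_cm, weight_kg):
    if height_cm > 180:
        return "male"
    if weight_kg > 80:
        return "male"
    return "female"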

With the binary tree representation of the CART model described above, making predictions is relatively straightforward.

Given a new input, the tree is traversed by evaluating the specific input, starting at the root node of the tree.

A learned binary tree is actually a partitioning of the input space. You can think of each input variable as a dimension of a p-dimensional space. The decision tree splits this space up into rectangles (when p=2 input variables) or hyper-rectangles (when there are more inputs).

New data is filtered through the tree and lands in one of the rectangles and the output value for that rectangle is the prediction made by the model. This gives you some feeling for the type of decisions that a CART model is capable of making, e.g. boxy decision boundaries.

For example, given the input of [height = 160 cm, weight = 65 kg], we would traverse the above tree as follows:
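
Using the invented thresholds from the rule sketch above (180 cm, then 80 kg), the traversal would be:

1. Height > 180 cm? No, so take the branch for shorter heights.
2. Weight > 80 kg? No, so take the branch for lower weights.
3. We arrive at a leaf node and output its value: female.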

Learn a CART Model From Data

Creating a CART model involves selecting input variables and split points on those variables until a suitable tree is constructed.

The input variable to use and the specific split or cut-point are chosen using a greedy algorithm that minimizes a cost function. Tree construction ends when a predefined stopping criterion is met, such as a minimum number of training instances assigned to each leaf node of the tree.

Greedy Splitting

Creating a binary decision tree is actually a process of dividing up the input space. A greedy approach called recursive binary splitting is used to divide the space.

This is a numerical procedure in which all the values of an input variable are lined up and different split points are tried and tested using a cost function. The split with the best cost (the lowest, since we minimize cost) is selected.

All input variables and all possible split points are evaluated and chosen in a greedy manner (e.g. the very best split point is chosen each time).

For regression predictive modeling problems the cost function that is minimized to choose split points is the sum of squared errors across all training samples that fall within the rectangle:

sum((y - prediction)^2)

Where y is the output for the training sample and prediction is the predicted output for the rectangle.
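
To make this concrete, below is a small sketch of how one candidate split might be scored for regression. This is plain illustrative Python, not the implementation of any particular library; the group prediction is taken as the mean output of the group:

# Score one candidate split by summing squared errors in the two groups.
# The prediction for each group (rectangle) is the mean of its outputs.
def regression_split_cost(x_column, y, threshold):
    left = [yi for xi, yi in zip(x_column, y) if xi <= threshold]
    right = [yi for xi, yi in zip(x_column, y) if xi > threshold]
    cost = 0.0
    for group in (left, right):
        if not group:
            continue
        prediction = sum(group) / len(group)
        cost += sum((yi - prediction) ** 2 for yi in group)
    return cost

A greedy learner would call a function like this for every input variable and every candidate threshold, keeping the split with the lowest cost.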

For classification the Gini cost function is used which provides an indication of how “pure” the leaf nodes are (how mixed the training data assigned to each node is).

G = sum(pk * (1 - pk))

Where G is the Gini cost over all classes and pk is the proportion of training instances with class k in the rectangle of interest. A node that contains instances of only one class (perfect class purity) will have G = 0, whereas a node with a 50-50 split of classes for a binary classification problem (worst purity) will have G = 0.5.
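
A sketch of that calculation in code (again illustrative Python, not a library implementation):

# Gini index for one group of class labels: sum of pk * (1 - pk).
def gini(labels):
    n = len(labels)
    if n == 0:
        return 0.0
    score = 0.0
    for k in set(labels):
        pk = labels.count(k) / n  # proportion of instances with class k
        score += pk * (1.0 - pk)
    return score

print(gini(["male"] * 6))            # 0.0 (perfect purity)
print(gini(["male", "female"] * 3))  # 0.5 (worst purity for two classes)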

Stopping Criterion

The recursive binary splitting procedure described above needs to know when to stop splitting as it works its way down the tree with the training data.

The most common stopping procedure is to use a minimum count on the number of training instances assigned to each leaf node. If the count is less than some minimum then the split is not accepted and the node is taken as a final leaf node.

The count of training members is tuned to the dataset, e.g. 5 or 10. It defines how specific to the training data the tree will be. Too specific (e.g. a count of 1) and the tree will overfit the training data and likely have poor performance on the test set.
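
If you use a library rather than implementing the tree yourself, this stopping criterion is typically exposed as a parameter. For example, here is a sketch using scikit-learn's decision tree, with a made-up toy dataset and an illustrative minimum leaf count:

from sklearn.tree import DecisionTreeClassifier

# Toy data: [height_cm, weight_kg] -> sex. Purely illustrative values.
X = [[160, 65], [185, 90], [170, 70], [190, 95], [155, 50], [180, 85]]
y = ["female", "male", "female", "male", "female", "male"]

# Require at least 2 training instances per leaf node. For real datasets
# a value like 5 or 10 is more typical, tuned to the data as noted above.
model = DecisionTreeClassifier(min_samples_leaf=2)
model.fit(X, y)
print(model.predict([[160, 65]]))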

Pruning The Tree

The stopping criterion is important as it strongly influences the performance of your tree. You can use pruning after learning your tree to further lift performance.

The complexity of a decision tree is defined as the number of splits in the tree. Simpler trees are preferred. They are easy to understand (you can print them out and show them to subject matter experts), and they are less likely to overfit your data.

The fastest and simplest pruning method is to work through each leaf node in the tree and evaluate the effect of removing it using a hold-out test set. A leaf node is removed only if doing so results in a drop in the overall cost function on the entire test set. You stop removing nodes when no further improvements can be made.

More sophisticated pruning methods can be used such as cost complexity pruning (also called weakest link pruning) where a learning parameter (alpha) is used to weigh whether nodes can be removed based on the size of the sub-tree.
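
In scikit-learn, for example, cost complexity pruning is exposed through the ccp_alpha parameter. Below is a sketch of how it might be used; the iris dataset stands in for your own data, and the best alpha is picked with a simple hold-out set:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Compute the effective alphas at which sub-trees would be pruned away.
path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(
    X_train, y_train)

# Fit one candidate tree per alpha and keep the best on the hold-out set.
best = max(
    (DecisionTreeClassifier(random_state=0, ccp_alpha=a).fit(X_train, y_train)
     for a in path.ccp_alphas),
    key=lambda tree: tree.score(X_test, y_test),
)
print(best.get_n_leaves(), best.score(X_test, y_test))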

Recursive Binary Splitting for Decision Trees
Photo by Paul L Dineen, some rights reserved.

Data Preparation for CART

CART does not require any special data preparation other than a good representation of the problem.

Further Reading

This section lists some resources that you can refer to if you are looking to go deeper with CART.

Some good machine learning texts that describe the CART algorithm from a machine learning perspective include Breiman, Friedman, Olshen and Stone's original book Classification and Regression Trees (1984), Chapter 8 of An Introduction to Statistical Learning and Chapter 9 of The Elements of Statistical Learning.

Summary

In this post you have discovered the Classification And Regression Trees (CART) for machine learning. You learned:

  • The classical name Decision Tree and the more modern name CART for the algorithm.
  • The representation used for CART is a binary tree.
  • Predictions are made with CART by traversing the binary tree given a new input record.
  • The tree is learned using a greedy algorithm on the training data to pick splits in the tree.
  • Stopping criteria define how much of the tree is learned, and pruning can be used to improve a learned tree.

Do you have any questions about CART or this post?
Ask in the comments and I will do my best to answer.


17 Responses to Classification And Regression Trees for Machine Learning

  1. Audio Alief Kautsar Hartama April 27, 2016 at 10:39 pm #

    How about C4.5 (which is called J48 in Weka) and C5.0? Please make a tutorial for that sir, I need it.

  2. Yogesh June 10, 2016 at 3:52 pm #

    I have one sentence and its polarity, -ve or +ve. I want to use CART for accuracy, but I am not able to understand how.

  3. joe August 29, 2016 at 5:46 pm #

    is it possible to infuse CART in GA?

  4. Mynose September 8, 2016 at 5:20 pm #

    Sir, I want to compare CART and GA.

    • Jason Brownlee September 9, 2016 at 7:18 am #

      Hi Mynose, they are very different. CART is a function approximation method and a GA is a function optimization method.

      I guess you could use the GA to optimize a set of rules or a tree and compare that to the CART. Sounds fun.

  5. Lee November 2, 2016 at 4:48 pm #

    Hi, Jason! How can I avoid the over-fitting problem when using a CART model? When I used a CART tree to classify different fault types of data, cp was the only parameter for obtaining an optimal CART model. But the tree structure of the trained model is obviously over-fitted based on my domain knowledge. So what should I do to avoid overfitting?

    • Jason Brownlee November 3, 2016 at 7:54 am #

      Hi Lee, great question.

      The main approach is to constrain the depth of the tree.

      You can do this when growing the tree, but the preferred method is to prune a deep tree after it is constructed.

  6. Rohit November 10, 2016 at 4:36 am #

    Hello Sir,

    How is the selection of input variables done while implementing the greedy algorithm? To calculate the minimum cost function you need the predicted values, but how does the algorithm select the input variable for the first split?

    Regards,
    Rohit

    • Jason Brownlee November 10, 2016 at 7:43 am #

      Hi Rohit, we have the predicted values at the time we calculate splits – it’s all in the training data.

      Splits are fixed after training, then we just use them to make predictions on new data.

  7. madhav January 31, 2017 at 4:31 pm #

    Sir, I have the following questions. It would be of great help if you could answer them for me.

    1) Is CART the modern name for the decision tree approach in the data science field?
    2) What are the scenarios where CART can be used?
    3) What are the advantages of using CART over other prediction techniques?

    • Jason Brownlee February 1, 2017 at 10:45 am #

      Hi madhav,

      1. Yes, CART or classification and regression trees is the modern name for the standard decision tree.
      2. Very widely on classification and regression predictive modeling problems. Try it and see.
      3. Fast to train, easy to understand result and generally quite effective.

  8. Farina March 29, 2017 at 4:17 pm #

    Hi Jason

    I am wondering why my CART produced only one node when I exclude one variable, for example ID. I tried to change the cp but it still gives the same results. Can you assist me on this?

  9. Winston Jade Molit April 23, 2017 at 5:10 pm #

    Is the CART algorithm appropriate for decision-making projects?

    • Jason Brownlee April 24, 2017 at 5:33 am #

      That depends if the decision can be framed as a classification or regression type problem.

  10. Luc May 12, 2017 at 10:41 pm #

    Hi Jason,

    I am new to machine learning. For an internship I have been asked whether it is possible to make a classification method. The input variables are a small number of words (varying from 1 to 6), and the output variable is 0 or 1. Is it possible to apply CART to this problem? I am having difficulty finding out what kinds of problems different algorithms are used on. Do you have tips?
