
Non-Linear Regression in R with Decision Trees

In this post, you will discover 8 recipes for non-linear regression with decision trees in R.

Each example in this post uses the longley dataset provided in the datasets package that comes with R.

The longley dataset describes 7 economic variables observed from 1947 to 1962, used to predict the number of people employed each year.

Kick-start your project with my new book Machine Learning Mastery With R, including step-by-step tutorials and the R source code files for all examples.

Let’s get started.

Decision Tree
Photo by Katie Walker, some rights reserved

Classification and Regression Trees

Classification and Regression Trees (CART) split attributes based on the values that minimize a loss function, such as the sum of squared errors.

The following recipe demonstrates the recursive partitioning decision tree method on the longley dataset.
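
A minimal sketch of the recipe might look like the following; the minsplit setting is an illustrative choice for such a small dataset, and accuracy is summarized with the root mean squared error (RMSE).

# load the package
library(rpart)
# load the data
data(longley)
# fit the model (minsplit is lowered because longley has only 16 rows)
fit <- rpart(Employed~., data=longley, control=rpart.control(minsplit=5))
# summarize the fit
summary(fit)
# make predictions on the training data
predictions <- predict(fit, longley)
# summarize accuracy as root mean squared error
rmse <- sqrt(mean((longley$Employed - predictions)^2))
print(rmse)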

Learn more about the rpart function and the rpart package.

Conditional Decision Trees

Conditional Decision Trees are created using statistical tests to select split points on attributes, rather than a loss function.

The following recipe demonstrates the conditional inference trees method on the longley dataset.
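
A minimal sketch using default settings might look like the following; note that with only 16 rows, the significance tests used to choose split points may produce a very small tree.

# load the package
library(party)
# load the data
data(longley)
# fit the model using conditional inference to choose split points
fit <- ctree(Employed~., data=longley)
# summarize the fit
print(fit)
# make predictions on the training data
predictions <- predict(fit, longley)
# summarize accuracy as root mean squared error
rmse <- sqrt(mean((longley$Employed - predictions)^2))
print(rmse)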

Learn more about the ctree function and the party package.

Need more Help with R for Machine Learning?

Take my free 14-day email course and discover how to use R on your project (with sample code).

Click to sign up and also get a free PDF Ebook version of the course.

Model Trees

Model Trees create a decision tree and use a linear model at each leaf to make a prediction, rather than using an average value.

The following recipe demonstrates the M5P Model Tree method on the longley dataset.
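
A minimal sketch might look like the following; note that the RWeka package requires a working Java installation.

# load the package (requires Java via Weka)
library(RWeka)
# load the data
data(longley)
# fit the model tree
fit <- M5P(Employed~., data=longley)
# summarize the fit
summary(fit)
# make predictions on the training data
predictions <- predict(fit, longley)
# summarize accuracy as root mean squared error
rmse <- sqrt(mean((longley$Employed - predictions)^2))
print(rmse)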

Learn more about the M5P function and the RWeka package.

Rule System

Rule Systems can be created by extracting and simplifying the rules from a decision tree.

The following recipe demonstrates the M5Rules Rule System on the longley dataset.
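
A minimal sketch might look like the following, using the same pattern as the model tree recipe.

# load the package
library(RWeka)
# load the data
data(longley)
# fit the rule system
fit <- M5Rules(Employed~., data=longley)
# summarize the fit
summary(fit)
# make predictions on the training data
predictions <- predict(fit, longley)
# summarize accuracy as root mean squared error
rmse <- sqrt(mean((longley$Employed - predictions)^2))
print(rmse)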

Learn more about the M5Rules function and the RWeka package.

Bagging CART

Bootstrapped Aggregation (Bagging) is an ensemble method that creates multiple models of the same type from different sub-samples of the same dataset. The predictions from the separate models are combined to provide a superior result. This approach has proven particularly effective for high-variance methods such as decision trees.

The following recipe demonstrates bagging applied to the recursive partitioning decision tree.
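
A minimal sketch might look like the following, using the default of 25 bagged trees.

# load the package
library(ipred)
# load the data
data(longley)
# fit an ensemble of bagged rpart trees
fit <- bagging(Employed~., data=longley)
# summarize the fit
print(fit)
# make predictions on the training data
predictions <- predict(fit, longley)
# summarize accuracy as root mean squared error
rmse <- sqrt(mean((longley$Employed - predictions)^2))
print(rmse)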

Learn more about the bagging function and the ipred package.

Random Forest

Random Forest is a variation on bagging of decision trees that reduces the set of attributes available for making a split at each decision point to a random sub-sample. This further increases the variance of the individual trees, and more trees are required.
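
The following sketch demonstrates the Random Forest method on the longley dataset, using the default of 500 trees.

# load the package
library(randomForest)
# load the data
data(longley)
# fit the model (500 trees by default)
fit <- randomForest(Employed~., data=longley)
# summarize the fit
print(fit)
# make predictions on the training data
predictions <- predict(fit, longley)
# summarize accuracy as root mean squared error
rmse <- sqrt(mean((longley$Employed - predictions)^2))
print(rmse)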

Learn more about the randomForest function and the randomForest package.

Gradient Boosted Machine

Boosting is an ensemble method developed for classification to reduce bias, where models are added sequentially to learn the misclassification errors of existing models. It has been generalized and adapted in the form of Gradient Boosted Machines (GBM) for use with CART decision trees for classification and regression.
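
The following sketch demonstrates GBM on the longley dataset; distribution="gaussian" selects squared-error loss for regression, and n.minobsinnode is an illustrative adjustment for such a small dataset.

# load the package
library(gbm)
# load the data
data(longley)
# fit the model (n.minobsinnode is lowered because longley has only 16 rows)
fit <- gbm(Employed~., data=longley, distribution="gaussian", n.minobsinnode=1)
# summarize the fit
print(fit)
# make predictions using all of the fitted trees
predictions <- predict(fit, longley, n.trees=fit$n.trees)
# summarize accuracy as root mean squared error
rmse <- sqrt(mean((longley$Employed - predictions)^2))
print(rmse)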

Learn more about the gbm function and the gbm package.

Cubist

Cubist decision trees are another ensemble method. They are constructed like model trees but involve a boosting-like procedure called committees, in which successive rule-like models are created and their predictions combined.
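
The following sketch demonstrates Cubist on the longley dataset; the committees value is an illustrative setting, and cubist() takes the predictors and the outcome separately rather than a formula.

# load the package
library(Cubist)
# load the data
data(longley)
# fit the model (committees=3 enables the boosting-like committees procedure)
fit <- cubist(x=longley[,1:6], y=longley$Employed, committees=3)
# summarize the fit
summary(fit)
# make predictions on the training data
predictions <- predict(fit, longley[,1:6])
# summarize accuracy as root mean squared error
rmse <- sqrt(mean((longley$Employed - predictions)^2))
print(rmse)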

Learn more about the cubist function and the Cubist package.

Summary

In this post, you discovered 8 recipes for non-linear regression with decision trees in R. Each recipe is ready for you to copy and paste into your own workspace and modify for your needs.

For more information, see Chapter 8 of Applied Predictive Modeling by Kuhn and Johnson, which provides an excellent introduction to non-linear regression with decision trees in R for beginners.

Discover Faster Machine Learning in R!

Master Machine Learning With R

Develop Your Own Models in Minutes

...with just a few lines of R code

Discover how in my new Ebook:
Machine Learning Mastery With R

Covers self-study tutorials and end-to-end projects like:
Loading data, visualization, building models, tuning, and much more...

Finally Bring Machine Learning To Your Own Projects

Skip the Academics. Just Results.

See What's Inside

16 Responses to Non-Linear Regression in R with Decision Trees

  1. Sonika November 20, 2014 at 4:07 am #

Please tell me about the genetic algorithm code in R, as you mention above.

  2. Ron March 2, 2015 at 1:19 am #

Thanks a lot for this guide.

  3. Will May 15, 2015 at 11:33 am #

    Should RMSE here be sqrt(mean((actual-predicted)^2))?

  4. Matthew August 19, 2015 at 12:00 pm #

Hi! I would just like to ask what decision tree is best to use when the data is highly quantitative? For example, a weather data set. Thanks!

  5. Carlos Aguayo September 14, 2015 at 11:44 am #

    Great guide & website!

    There’s a tiny typo in this sentence (crated -> created):
    “Rule Systems can be crated by extracting and simplifying the rules from a decision tree.”

  6. Bala October 13, 2015 at 8:48 pm #

I need some clarification on ranking decision trees. I have features X1, X2, X3 which can be labelled to Y. After building the model, I have to predict the ranking of trees based on feature X1. Can you please suggest some good methods for this?

  7. Arash December 1, 2015 at 2:43 pm #

    Thank you so much 🙂
    Short, but very useful and comprehensive

  8. R-lover December 14, 2015 at 3:51 am #

    But this is not nonlinear regression – if I’m not mistaken, isn’t this just multiple linear modelling?

What is the connection between tree-methods (including RF, boosting, etc.) and actual nonlinear regression such as the Michaelis-Menten model used in enzyme kinetics, models used in PK/PD modelling, and nonlinear synergy models?

    Look forward to hearing your thoughts.

  9. SFer November 26, 2016 at 11:53 am #

    “Will May 15, 2015 at 11:33 am #

    Shouldn’t RMSE here be:
    sqrt(mean((actual-predicted)^2))?”

Will's comment (above this one) is absolutely correct!

Too bad the author (of this otherwise great article) has not answered and corrected this mistake in the RMSE calculation...

Using the correct RMSE formula (above) returns a completely different RMSE value...

Be aware...

  10. Gauthier March 28, 2018 at 8:11 pm #

Hey Jason, many thanks for your examples. How would you deal with the fact that most of these models do not generalize well to new validation data (using the subset = option in your examples)? Only rule systems and model trees seem to generalize correctly to my data:

    data <- cbind(1:10000, -1:-10000,c(2:10001)+runif(10000,min=0,max=0.1))
    data <- cbind(1:10000, c(-1:-10000)+runif(10000,min=0,max=0.1))
    data <- cbind(data, 1000*c(1:10000)/10*sin(data[,1])+(data[,2]^2)/10+runif(10000,min=0,max=0.1)/100 ) #
    colnames(data) <- c("x1","x2","y")

    • Jason Brownlee March 29, 2018 at 6:34 am #

      It is really data dependent.

      If you find a subset of methods that work well on your data, then double down on them.

  11. Jakes May 22, 2018 at 5:26 pm #

    Hey Jason,

    Great Guide and thanks for making us more awesome.
    Quick question: How do I extract rules/path from random forest tree in R for predicted rows?

    Thanks

    • Jason Brownlee May 23, 2018 at 6:24 am #

      Thanks.

      I’m not sure that this would be tractable given the vast number of individual decisions.

  12. Ibrahim chaoudi August 6, 2019 at 2:24 am #

    thanks a lot
