A Gentle Introduction to the Gradient Boosting Algorithm for Machine Learning

Gradient boosting is one of the most powerful techniques for building predictive models.

In this post you will discover the gradient boosting machine learning algorithm and get a gentle introduction into where it came from and how it works.

After reading this post, you will know:

  • The origin of boosting from learning theory and AdaBoost.
  • How gradient boosting works including the loss function, weak learners and the additive model.
  • How to improve performance over the base algorithm with various regularization schemes.

Let’s get started.

Photo by brando.n, some rights reserved.

The Origin of Boosting

The idea of boosting came out of the question of whether a weak learner can be modified to become better.

Michael Kearns articulated the goal as the “Hypothesis Boosting Problem” stating the goal from a practical standpoint as:

… an efficient algorithm for converting relatively poor hypotheses into very good hypotheses

Thoughts on Hypothesis Boosting [PDF], 1988

A weak hypothesis or weak learner is defined as one whose performance is at least slightly better than random chance.

These ideas built upon Leslie Valiant’s work on distribution-free or Probably Approximately Correct (PAC) learning, a framework for investigating the complexity of machine learning problems.

Hypothesis boosting was the idea of filtering observations, leaving those observations that the weak learner can handle and focusing on developing new weak learners to handle the remaining difficult observations.

The idea is to use the weak learning method several times to get a succession of hypotheses, each one refocused on the examples that the previous ones found difficult and misclassified. … Note, however, it is not obvious at all how this can be done

Probably Approximately Correct: Nature’s Algorithms for Learning and Prospering in a Complex World, page 152, 2013

AdaBoost, the First Boosting Algorithm

The first realization of boosting that saw great success in application was Adaptive Boosting or AdaBoost for short.

Boosting refers to this general problem of producing a very accurate prediction rule by combining rough and moderately inaccurate rules-of-thumb.

A decision-theoretic generalization of on-line learning and an application to boosting [PDF], 1995

The weak learners in AdaBoost are decision trees with a single split, called decision stumps for their shortness.

AdaBoost works by weighting the observations, putting more weight on difficult to classify instances and less on those already handled well. New weak learners are added sequentially that focus their training on the more difficult patterns.

This means that samples that are difficult to classify receive increasingly larger weights until the algorithm identifies a model that correctly classifies these samples

Applied Predictive Modeling, 2013

Predictions are made by majority vote of the weak learners’ predictions, weighted by their individual accuracy. The most successful form of the AdaBoost algorithm was for binary classification problems and was called AdaBoost.M1.
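
As an illustration of this setup, below is a minimal sketch of AdaBoost with decision stumps using scikit-learn's AdaBoostClassifier, whose default weak learner is a depth-1 decision tree. The dataset and parameter values are illustrative, not taken from the post.

    # Minimal AdaBoost sketch: decision stumps combined by a weighted vote.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import cross_val_score

    # Synthetic binary classification problem (illustrative only).
    X, y = make_classification(n_samples=1000, n_features=20, random_state=7)

    # The default weak learner is a decision tree with max_depth=1 (a stump).
    model = AdaBoostClassifier(n_estimators=50, random_state=7)
    scores = cross_val_score(model, X, y, cv=5)
    print("Mean accuracy: %.3f" % scores.mean())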

Generalization of AdaBoost as Gradient Boosting

AdaBoost and related algorithms were first recast in a statistical framework by Breiman, who called them ARCing algorithms.

Arcing is an acronym for Adaptive Reweighting and Combining. Each step in an arcing algorithm consists of a weighted minimization followed by a recomputation of [the classifiers] and [weighted input].

Prediction Games and Arcing Algorithms [PDF], 1997

This framework was further developed by Friedman and called Gradient Boosting Machines, later known simply as gradient boosting or gradient tree boosting.

The statistical framework cast boosting as a numerical optimization problem where the objective is to minimize the loss of the model by adding weak learners using a gradient-descent-like procedure.

This class of algorithms was described as a stage-wise additive model. This is because one new weak learner is added at a time and existing weak learners in the model are frozen and left unchanged.

Note that this stagewise strategy is different from stepwise approaches that readjust previously entered terms when new ones are added.

Greedy Function Approximation: A Gradient Boosting Machine [PDF], 1999

The generalization allowed arbitrary differentiable loss functions to be used, expanding the technique beyond binary classification problems to support regression, multi-class classification and more.

How Gradient Boosting Works

Gradient boosting involves three elements:

  1. A loss function to be optimized.
  2. A weak learner to make predictions.
  3. An additive model to add weak learners to minimize the loss function.

1. Loss Function

The loss function used depends on the type of problem being solved.

It must be differentiable, but many standard loss functions are supported and you can define your own.

For example, regression may use squared error and classification may use logarithmic loss.

A benefit of the gradient boosting framework is that a new boosting algorithm does not have to be derived for each loss function you may want to use; instead, the framework is generic enough that any differentiable loss function can be used.
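
To make this concrete, here is a minimal sketch of two common differentiable losses and their negative gradients with respect to the current prediction; the negative gradient is the quantity the next weak learner is fit to. Function names and values are my own illustrations.

    import numpy as np

    def squared_error(y, f):
        # Squared error loss for regression.
        return 0.5 * (y - f) ** 2

    def squared_error_negative_gradient(y, f):
        # The negative gradient of squared error is simply the residual.
        return y - f

    def log_loss(y, f):
        # Logarithmic loss for binary classification; y in {0, 1}, f is a raw score.
        p = 1.0 / (1.0 + np.exp(-f))
        return -(y * np.log(p) + (1 - y) * np.log(1 - p))

    def log_loss_negative_gradient(y, f):
        # Again a residual-like quantity: observed label minus predicted probability.
        p = 1.0 / (1.0 + np.exp(-f))
        return y - p

    y_true = np.array([1.0, 0.0, 1.0])
    f_pred = np.array([0.8, 0.3, -0.2])
    print(squared_error_negative_gradient(y_true, f_pred))
    print(log_loss_negative_gradient(y_true, f_pred))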

2. Weak Learner

Decision trees are used as the weak learner in gradient boosting.

Specifically, regression trees are used that output real values for splits and whose outputs can be added together, allowing subsequent models’ outputs to be added to “correct” the residuals in the predictions.

Trees are constructed in a greedy manner, choosing the best split points based on purity scores such as Gini impurity or to minimize the loss directly.

Initially, as in the case of AdaBoost, very short decision trees were used that had only a single split, called a decision stump. Larger trees can be used, generally with 4-to-8 levels.

It is common to constrain the weak learners in specific ways, such as a maximum number of layers, nodes, splits or leaf nodes.

This is to ensure that the learners remain weak, but can still be constructed in a greedy manner.
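
For example, a constrained regression tree of the kind used as a weak learner might be configured as in the sketch below with scikit-learn; the parameter values are illustrative only.

    from sklearn.tree import DecisionTreeRegressor

    # A deliberately small regression tree to keep the learner weak.
    weak_learner = DecisionTreeRegressor(
        max_depth=4,           # keep the tree short (4-to-8 levels is typical)
        min_samples_split=20,  # observations required before a split is considered
        max_leaf_nodes=16,     # cap on the number of terminal nodes
    )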

3. Additive Model

Trees are added one at a time, and existing trees in the model are not changed.

A gradient descent procedure is used to minimize the loss when adding trees.

Traditionally, gradient descent is used to find a set of parameters that minimizes a loss, such as the coefficients in a regression equation or the weights in a neural network. After calculating the error or loss, the parameters are updated in the direction that reduces that error.

Instead of parameters, we have weak learner sub-models, or more specifically decision trees. After calculating the loss, to perform the gradient descent procedure we must add a tree to the model that reduces the loss (i.e. follows the gradient). We do this by parameterizing the tree, then modifying the parameters of the tree to move in the right direction by reducing the residual loss.

Generally this approach is called functional gradient descent or gradient descent with functions.

One way to produce a weighted combination of classifiers which optimizes [the cost] is by gradient descent in function space

Boosting Algorithms as Gradient Descent in Function Space [PDF], 1999

 

The output for the new tree is then added to the output of the existing sequence of trees in an effort to correct or improve the final output of the model.

A fixed number of trees is added, or training stops once the loss reaches an acceptable level or no longer improves on an external validation dataset.
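
For the squared error case, the whole additive procedure can be sketched in a few lines. The code below is a simplified from-scratch illustration (the function names and values are my own, not from the post): each new shallow regression tree is fit to the current residuals, which are the negative gradient of the squared error loss, and its shrunken output is added to the running prediction while earlier trees stay frozen.

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    def fit_gradient_boosting(X, y, n_trees=100, learning_rate=0.1, max_depth=3):
        # Start from a constant model: the mean of the target.
        f0 = np.mean(y)
        prediction = np.full(len(y), f0)
        trees = []
        for _ in range(n_trees):
            # Negative gradient of squared error is the residual.
            residuals = y - prediction
            # Fit the next weak learner to the residuals; existing trees are unchanged.
            tree = DecisionTreeRegressor(max_depth=max_depth)
            tree.fit(X, residuals)
            # Add a shrunken version of the new tree's output (a functional gradient step).
            prediction += learning_rate * tree.predict(X)
            trees.append(tree)
        return f0, trees

    def predict_gradient_boosting(f0, trees, X, learning_rate=0.1):
        # Sum the constant model and the shrunken outputs of all trees.
        return f0 + learning_rate * np.sum([tree.predict(X) for tree in trees], axis=0)

Library implementations such as scikit-learn's GradientBoostingRegressor and XGBoost add the regularization discussed in the next section, but the additive structure is the same.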

Improvements to Basic Gradient Boosting

Gradient boosting is a greedy algorithm and can overfit a training dataset quickly.

It can benefit from regularization methods that penalize various parts of the algorithm and generally improve the performance of the algorithm by reducing overfitting.

In this section we will look at 4 enhancements to basic gradient boosting:

  1. Tree Constraints
  2. Weighted Updates (Shrinkage)
  3. Stochastic Gradient Boosting (Random Sampling)
  4. Penalized Gradient Boosting

1. Tree Constraints

It is important that the weak learners have skill but remain weak.

There are a number of ways that the trees can be constrained.

A good general heuristic is that the more constrained tree creation is, the more trees you will need in the model; conversely, the less constrained the individual trees, the fewer trees will be required.

Below are some constraints that can be imposed on the construction of decision trees:

  • Number of trees: generally, adding more trees to the model causes overfitting only very slowly. The advice is to keep adding trees until no further improvement is observed.
  • Tree depth: deeper trees are more complex, and shorter trees are preferred. Generally, better results are seen with 4-8 levels.
  • Number of nodes or number of leaves: like depth, this can constrain the size of the tree, but the tree is not constrained to a symmetrical structure if other constraints are used.
  • Number of observations per split: imposes a minimum constraint on the amount of training data at a node before a split can be considered.
  • Minimum improvement to loss: a constraint on the improvement of any split added to a tree.
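
As a rough guide to how these constraints surface in practice, the sketch below maps them onto parameters of scikit-learn's GradientBoostingRegressor; the values shown are illustrative, not recommendations.

    from sklearn.ensemble import GradientBoostingRegressor

    model = GradientBoostingRegressor(
        n_estimators=500,           # number of trees
        max_depth=4,                # tree depth
        max_leaf_nodes=None,        # alternative cap on tree size via the leaf count
        min_samples_split=20,       # observations required before a split is considered
        min_impurity_decrease=0.0,  # minimum improvement required for a split
    )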

2. Weighted Updates

The predictions of each tree are added together sequentially.

The contribution of each tree to this sum can be weighted to slow down the learning by the algorithm. This weighting is called shrinkage or a learning rate.

Each update is simply scaled by the value of the “learning rate parameter v”

Greedy Function Approximation: A Gradient Boosting Machine [PDF], 1999

The effect is that learning is slowed down, which in turn requires more trees to be added to the model, which in turn takes longer to train, providing a configuration trade-off between the number of trees and the learning rate.

Decreasing the value of v [the learning rate] increases the best value for M [the number of trees].

Greedy Function Approximation: A Gradient Boosting Machine [PDF], 1999

It is common to have small values in the range of 0.1 to 0.3, as well as values less than 0.1.

Similar to a learning rate in stochastic optimization, shrinkage reduces the influence of each individual tree and leaves space for future trees to improve the model.

Stochastic Gradient Boosting [PDF], 1999
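
For instance, the trade-off might be explored with configurations like the sketch below using scikit-learn's GradientBoostingRegressor; the values are illustrative and would normally be tuned together.

    from sklearn.ensemble import GradientBoostingRegressor

    # A small learning rate slows learning, so many more trees are needed.
    slow_model = GradientBoostingRegressor(learning_rate=0.01, n_estimators=3000)

    # A larger learning rate learns faster with fewer trees, at greater risk of overfitting.
    fast_model = GradientBoostingRegressor(learning_rate=0.3, n_estimators=100)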

3. Stochastic Gradient Boosting

A big insight into bagging ensembles and random forest was allowing trees to be greedily created from subsamples of the training dataset.

This same benefit can be used to reduce the correlation between the trees in the sequence in gradient boosting models.

This variation of boosting is called stochastic gradient boosting.

at each iteration a subsample of the training data is drawn at random (without replacement) from the full training dataset. The randomly selected subsample is then used, instead of the full sample, to fit the base learner.

Stochastic Gradient Boosting [PDF], 1999

A few variants of stochastic boosting can be used:

  • Subsample rows before creating each tree.
  • Subsample columns before creating each tree.
  • Subsample columns before considering each split.

Generally, aggressive sub-sampling such as selecting only 50% of the data has been shown to be beneficial.

According to user feedback, using column sub-sampling prevents over-fitting even more so than the traditional row sub-sampling

XGBoost: A Scalable Tree Boosting System, 2016
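
These three variants map directly onto parameters in libraries such as XGBoost. The sketch below uses XGBRegressor from the xgboost package; the 0.5 values are illustrative, and colsample_bynode (per-split column sampling) requires a reasonably recent xgboost release.

    from xgboost import XGBRegressor

    model = XGBRegressor(
        subsample=0.5,         # subsample rows before creating each tree
        colsample_bytree=0.5,  # subsample columns before creating each tree
        colsample_bynode=0.5,  # subsample columns before considering each split
    )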

4. Penalized Gradient Boosting

Further constraints can be imposed on the parameterized trees in addition to their structure.

Classical decision trees like CART are not used as weak learners; instead, a modified form called a regression tree is used that has numeric values in the leaf nodes (also called terminal nodes). The values in the leaves of the trees can be called weights in some literature.

As such, the leaf weight values of the trees can be regularized using popular regularization functions, such as:

  • L1 regularization of weights.
  • L2 regularization of weights.

The additional regularization term helps to smooth the final learnt weights to avoid over-fitting. Intuitively, the regularized objective will tend to select a model employing simple and predictive functions.

XGBoost: A Scalable Tree Boosting System, 2016
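
In XGBoost these penalties are exposed directly as hyperparameters on the leaf weights; a minimal sketch (values illustrative):

    from xgboost import XGBRegressor

    model = XGBRegressor(
        reg_alpha=0.1,   # L1 regularization term on leaf weights
        reg_lambda=1.0,  # L2 regularization term on leaf weights
    )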

Summary

In this post you discovered the gradient boosting algorithm for predictive modeling in machine learning.

Specifically, you learned:

  • The history of boosting in learning theory and AdaBoost.
  • How the gradient boosting algorithm works with a loss function, weak learners and an additive model.
  • How to improve the performance of gradient boosting with regularization.

Do you have any questions about the gradient boosting algorithm or about this post? Ask your questions in the comments and I will do my best to answer.

88 Responses to A Gentle Introduction to the Gradient Boosting Algorithm for Machine Learning

  1. Machine Learner March 21, 2017 at 2:35 am #

    Great review!!

  2. Rishabh Deshmukh March 26, 2017 at 3:37 pm #

    An extremely intuitive introduction to Gradient Boosting.

    • Jason Brownlee March 27, 2017 at 7:51 am #

      Thanks Rishabh.

      • Steve T. P. October 3, 2021 at 7:40 am #

        Great thorough overview, Jason!

        • Adrian Tam October 6, 2021 at 7:31 am #

          Thanks! Glad you like it.

  3. Sasikanth June 13, 2017 at 7:32 pm #

    Simply Superb.

  4. Bin September 12, 2017 at 1:35 am #

    Thanks Jason!

  5. Rob October 19, 2017 at 9:32 pm #

    Hi Jason,

    Many thanks for this post, learned a lot. I still have one thing I don’t fully grasp though. Does Gradient Tree Boosting only fit a decision tree to the original data once? And then adds new trees to the residuals of the first tree?

    Or does it do both, fitting multiple trees to the original data (as random forest does) and then for each tree fit new trees to it’s residuals?

    • Jason Brownlee October 20, 2017 at 5:33 am #

      Great question.

Both. Trees use residual error to weight the data that new trees then fit.

      • Devendra October 27, 2017 at 9:02 pm #

        Hi Jason, Thanks for the really detailed post on Boosting.

        My question is mostly continuation of what Rob had asked. Each tree (Weak Learners) that is generated based on the sub samples of the learn data that we have considered? if not, is it only based on the residual error or log loss function (in case of Classification problem)? Would i like to understand the mechanism behind generating so many weak learners?
        A little elaborated answer will be of great of help in this regard.

        • Jason Brownlee October 28, 2017 at 5:13 am #

          Each subsequent weak learner is developed on the same data, rescaled by the errors made by the previous weak learner.

      • Deepu June 16, 2018 at 12:41 am #

        Hey Jason, i hava a question, for a thesis about Grandient Boosting i need to know:

        – consumption of resources and
        -rate/quickness

        But i havent found it. And i even not really understand it. Can you please help me?

        • Jason Brownlee June 16, 2018 at 7:28 am #

          You could design an experiment to evaluate these factors.

  6. Rida February 15, 2018 at 8:25 pm #

    Thank you very much for this excellent review

  7. ikoke March 19, 2018 at 6:19 am #

    Really great article, Jason!

    I am a bit confused about one thing-
    the Loss Function you mention in point 1 of the 3 components of Gradient Boosting, is that the loss function of each individual weak learner ? Like error = sum(w(i) * terror(i)) / sum(w), for AdaBoost ?

    Or is it is the loss function of the whole ensemble ?

  8. Programmer March 30, 2018 at 1:41 am #

    hi,
    I have fitted a linear model for my data.
    I want to take the residuals and initialize the Gradient Boosting for regression with thoes residuals. how do that theoretically and in code?
    thanks

    • Jason Brownlee March 30, 2018 at 6:41 am #

      Interesting ideas.

      You will have to code this yourself from scratch I’m afraid. Or pay someone to code it for you.

  9. Helen Z July 29, 2018 at 11:22 am #

    Hi Jason, for the penalized gradient boosting, L1 or L2 regularization, how do we do that? Which parameter to specify that?

  10. Mitchell August 2, 2018 at 10:37 am #

    Hi Jason,

    I’m curious if you have any experience with doing feature selection before running a Gradient Boosting Algorithm. I’ve read that doing prior feature selection can improve predictions but I don’t understand why. Wondering if you’re able to shed any light on this subject? No worries if not.

    • Jason Brownlee August 2, 2018 at 2:10 pm #

      Generally, boosted and bagged trees are good at picking out the features that are needed.

      Effort might be better spent on feature engineering instead. E.g. throw everything you can think of at the model and let it pick out what is predictive.

  11. Mitchell August 3, 2018 at 3:46 am #

    Thanks for the quick reply Jason! I agree completely! Why limit the amount of predictors the algorithm can choose from, Doesn’t make much sense to me!

    • Andrew August 16, 2020 at 3:46 pm #

      Hi Mitchell, Jason.
      I think there can be reasons not to throw everything at the algorithm (though I am an inexperienced user). In the case that you have a large number of features, and there’s a chance they are collinear, wouldn’t you be better to filter them through (e.g.) variance inflation factor analysis?
      Or would you call this feature engineering?
      Cheers,
      Andrew.

  12. Luke August 31, 2018 at 9:13 pm #

    Hi in Python, there is a function ‘sample_weight’ when calling the fit proceedure. Do you know if this is where the model is penalising a class or is it changing the data samples fed into the trees.
    Thanks

  13. Hunter Stevenson September 6, 2018 at 8:42 am #

    This was great. I’ve been searching for a decent Gradient Boosting Algorithm review and this was by far the most concise, easy to understand overview I’ve found.

  14. Satish October 25, 2018 at 5:16 am #

    Nice explntn

  15. Ujwal Gattupalli November 9, 2018 at 4:09 pm #

    Extremely helpful!

  16. Ashutosh November 20, 2018 at 4:03 pm #

    There’s a typo in the quote ” The idea is to used the weak learning method several times to get a succession of hypotheses, each one refocused on the examples that the previous ones found difficult and misclassified. … Note, however, it is not obvious at all how this can be done”

    I think it should be “use” instead of “used”

  17. Karan December 30, 2018 at 4:21 pm #

    How exactly does gradient boosting work in classification setting?

    • Jason Brownlee December 31, 2018 at 6:06 am #

      The algorithm creates an ensemble of boosted classification trees.

  18. Mithu January 9, 2019 at 3:00 am #

    Awesome review. Keep going!

  19. Julia January 18, 2019 at 8:25 pm #

    Hi, Jason,
    Thank you so much for this review.
    It really helps.
    Although I’ve read the whole text, all your questions and answers, I’m still confusing about the growth of decision trees in GBM.
    My understanding is that decision trees in the GBM use the same independent variable set but different training datasets (randomly subset of all training data).
    I want to know what is the relationship between the newly added tree and those already existent trees? What are the criteria of stopping decision tree adding?
    Thank you!
    Julia

    • Jason Brownlee January 19, 2019 at 5:38 am #

Not quite, trees are added sequentially to correct the predictions of prior trees. They are fit on the same data, only modified to focus attention on errors made by prior trees.

      A fixed number of trees is added and we specify this number as a hyperparameter.

      • Julia January 19, 2019 at 8:09 pm #

        Jason,

        Thank you so much!

        I still have a question about “the fixed number of trees”. If a fixed number of trees have been added, but the prediction residuals are still not satisfactory, what will be do? Add more trees?

        Julia

  20. pheau March 21, 2019 at 10:46 pm #

    Hi Jason,

    I’ve read the entire article, but I’m not quite sure that I grasp the difference between GB and SGB (Gradient Boosting vs Stochastic Gradient Boosting).
    My understanding is that for GB we use the entire training set to train a tree and for SGB we have 3 options to subsample it and train the tree.
    Basically for GB we train trees and for SGB we train Random Forests?

    Regards,

    Vlad

    • Jason Brownlee March 22, 2019 at 8:27 am #

      Forests of trees in both cases, just we use sampling to increase the variance of the trees in SGB.

      • Martin Rosellen April 22, 2020 at 1:04 am #

        Hi Jason,

        very good article. I try to get my head around how forest are created for the GB case. Since every tree of a GB forest is build on the entire data set/uses the same data, wouldn’t the trees not all be the same?

        cheers
        Martin

        • Jason Brownlee April 22, 2020 at 6:00 am #

          Thanks.

          No, they attempt to correct the predictions of trees before them in the sequence.

          • Martin Rosellen April 22, 2020 at 8:22 pm #

            Ahh, thanks. I actually thought that forests of forests are build. Now it is clear.

            Thanks Jason

          • Jason Brownlee April 23, 2020 at 6:04 am #

            You’re welcome.

  21. pheau March 22, 2019 at 6:51 pm #

    Got it Jason, it makes sense now.
    Thanks a lot.

    Regards,

    Vlad

  22. Purna April 2, 2019 at 6:59 pm #

    Hi,

    I have a small doubt. In Gradient Boosting algorithm for estimating interval targets, why does the first predicted value is initialized with mean(y) ?

    Please help

    • Jason Brownlee April 3, 2019 at 6:41 am #

      Sorry, I don’t have example of prediction intervals for gradient boosting.

  23. mrxc April 16, 2019 at 6:18 pm #

    Hi jason. Great article, Can you please explain the usability of this algortithm i.e Gradient Boosting for dealing with catogorical data

    many thanks 🙂

    • Jason Brownlee April 17, 2019 at 6:57 am #

      A good starting point would be to integer or one hot encode the categorical variable.

  24. Pcpea May 27, 2019 at 9:22 pm #

    Thanks a lot, this is exactly what I need to understand the conceipt of GBM.

  25. JJ July 2, 2019 at 7:36 pm #

    Thanks a lot!.

    Is there an way to predict the direction of each feature especially in GBM classification?
    Variable importance does not tell the direction, positive or negative.

  26. Radha Subramanian September 20, 2019 at 5:32 pm #

    What a brilliant article Jason. Crystal clear explanation. Thanks a lot.

    • Jason Brownlee September 21, 2019 at 6:45 am #

      Thanks, I’m happy that you found it useful.

  27. Sujit Jena September 26, 2019 at 11:27 pm #

    3. Stochastic Gradient Boosting
    A big insight into bagging ensembles and random forest was allowing trees to be greedily created from subsamples of the training dataset.

    This same benefit can be used to reduce the correlation between the trees in the sequence in gradient boosting models.

    This variation of boosting is called stochastic gradient boosting.

    at each iteration a subsample of the training data is drawn at random (without replacement) from the full training dataset. The randomly selected subsample is then used, instead of the full sample, to fit the base learner.

    — Stochastic Gradient Boosting [PDF], 1999

    A few variants of stochastic boosting that can be used:

    Subsample rows before creating each tree.
    Subsample columns before creating each tree
    Subsample columns before considering each split.
    Generally, aggressive sub-sampling such as selecting only 50% of the data has shown to be beneficial.

    According to user feedback, using column sub-sampling prevents over-fitting even more so than the traditional row sub-sampling

    this section can also be reffred as bagging ?
    please correct me if wrong

    • Jason Brownlee September 27, 2019 at 8:02 am #

      Fitting greedy trees on subsets of rows of data is bagging.

  28. Han January 16, 2020 at 8:32 pm #

    Great introduction, any plan to write a python code from scratch for gbdt

  29. benjamin January 28, 2020 at 4:32 pm #

    Hello Jason.
    I would like you could clarify if xgboost is a differentiable or non-differentiable model. Are they an end-to-end trainable, and as such backpropagation can be applied on them when joining them with deep learning models, as deep learning classifiers?

    • Jason Brownlee January 29, 2020 at 6:29 am #

      Yes, the loss function is differentiable – that is the great benefit of the gradient boosting method – it can be fit using any differentiable loss function.

      The model is end-to-end trainable.

      Not sure it makes sense combining it with a neural net.

  30. Nate Anderson February 6, 2020 at 1:29 pm #

    Hi Jason,

    Thanks for the article
    This sentence confused me:
    ” gradient descent is used to minimize a set of parameters, such as the coefficients in a regression equation or weights in a neural network”

    The sentence suggests: ” gradient descent … minimizes … coefficients in a regression”; I thought gradient descent tries to minimize the cost/loss function. Specifically, gradient descent finds the values for coefficients *which minimize the value of the loss function*? In other words, a coefficient’s value may increase, even if it decreases the loss function.

    Maybe I don’t understand

    Thanks again,

    • Jason Brownlee February 6, 2020 at 1:47 pm #

      Sorry. Yes, gradient descent can be used to find coefficients in linear regression or find weights in a neural net by minimizing loss.

  31. Aditya Amrutiya March 18, 2020 at 2:53 am #

    Why does Gradient Boosting and XGBoost don’t work when we are doing multivariate regression? That is, I have 2 values to be predicted from given values

    • Jason Brownlee March 18, 2020 at 6:12 am #

      Perhaps try using the sklearn implementation – I think it supports multiple output regression.

  32. Vaishali Deshwal June 6, 2020 at 10:42 pm #

    how to know that for how many days GBR model predicts values? please explain

  33. Hari August 12, 2020 at 9:51 pm #

    Nice write-up. There is a typo. Probably Approximately Correct is written as Probability Approximately Correct.

  34. Puneet Tiwari September 4, 2020 at 9:34 pm #

    This is a great explanation.Very helpful.

    Thanks.

  35. sonal September 15, 2020 at 4:35 pm #

    hello Jason,

    Can you please explain the algorithm of Light GBM also in the same way.

  36. Sachini Chandanayake January 22, 2021 at 5:42 pm #

    Really nice explanation. Thank you!

  37. Tanuj Mathur January 25, 2021 at 11:18 pm #

    Hey Jason, great article.
    I have a doubt regarding the test and validation set for early stopping. Can we use cross-validation without early stopping for hyperparameter optimization and then use the test set for early stopping with the best-known hyperparameters? Will that affect the generalizability of the model since the test set is involved somehow during the training?

  38. Radhika March 21, 2021 at 4:56 am #

    Fantastic article for a beginner to understand gradient boosting, Thank you !
