Gentle Introduction to the Bias-Variance Trade-Off in Machine Learning

Supervised machine learning algorithms can best be understood through the lens of the bias-variance trade-off.

In this post, you will discover the Bias-Variance Trade-Off and how to use it to better understand machine learning algorithms and get better performance on your data.

Kick-start your project with my new book Master Machine Learning Algorithms, including step-by-step tutorials and the Excel Spreadsheet files for all examples.

Let’s get started.

  • Update Oct/2019: Removed discussion of parametric/nonparametric models (thanks Alex).
Photo by Matt Biddulph, some rights reserved.

Overview of Bias and Variance

In supervised machine learning an algorithm learns a model from training data.

The goal of any supervised machine learning algorithm is to best estimate the mapping function (f) for the output variable (Y) given the input data (X). The mapping function is often called the target function because it is the function that a given supervised machine learning algorithm aims to approximate.

The prediction error for any machine learning algorithm can be broken down into three parts:

  • Bias Error
  • Variance Error
  • Irreducible Error

The irreducible error cannot be reduced regardless of what algorithm is used. It is the error introduced from the chosen framing of the problem and may be caused by factors like unknown variables that influence the mapping of the input variables to the output variable.

In this post, we will focus on the two parts we can influence with our machine learning algorithms. The bias error and the variance error.



Bias Error

Bias refers to the simplifying assumptions made by a model to make the target function easier to learn.

Generally, linear algorithms have a high bias, making them fast to learn and easy to understand, but typically less flexible. In turn, they have lower predictive performance on complex problems that fail to meet the simplifying assumptions of the algorithm's bias.

  • Low Bias: Suggests less assumptions about the form of the target function.
  • High Bias: Suggests more assumptions about the form of the target function.

Examples of low-bias machine learning algorithms include: Decision Trees, k-Nearest Neighbors and Support Vector Machines.

Examples of high-bias machine learning algorithms include: Linear Regression, Linear Discriminant Analysis and Logistic Regression.
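To make the contrast concrete, here is a small sketch of a high-bias linear model and a low-bias decision tree fit to the same nonlinear data (the sine dataset, scikit-learn models, and all settings are my illustrative choices, not part of any particular problem):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(0)
X = np.sort(rng.uniform(-3, 3, 200)).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=200)  # a nonlinear target

# The linear model assumes a straight line; the unpruned tree assumes very little.
mse_linear = mean_squared_error(y, LinearRegression().fit(X, y).predict(X))
mse_tree = mean_squared_error(y, DecisionTreeRegressor(random_state=0).fit(X, y).predict(X))
print(mse_linear, mse_tree)  # the high-bias line fits the curve far worse
```

The straight-line assumption is simply wrong for this data, so the linear model's error stays high no matter how much data it sees. That persistent error is the bias at work.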

Variance Error

Variance is the amount that the estimate of the target function will change if different training data is used.

The target function is estimated from the training data by a machine learning algorithm, so we should expect the algorithm to have some variance. Ideally, it should not change too much from one training dataset to the next, meaning that the algorithm is good at picking out the hidden underlying mapping between the inputs and the output variables.

Machine learning algorithms that have a high variance are strongly influenced by the specifics of the training data. This means that the specifics of the training data influence the number and types of parameters used to characterize the mapping function.

  • Low Variance: Suggests small changes to the estimate of the target function with changes to the training dataset.
  • High Variance: Suggests large changes to the estimate of the target function with changes to the training dataset.

Generally, nonlinear machine learning algorithms that have a lot of flexibility have a high variance. For example, decision trees have a high variance, which is even higher if the trees are not pruned before use.

Examples of low-variance machine learning algorithms include: Linear Regression, Linear Discriminant Analysis and Logistic Regression.

Examples of high-variance machine learning algorithms include: Decision Trees, k-Nearest Neighbors and Support Vector Machines.
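Variance can be seen directly by refitting a model on many different training samples and measuring how much its predictions move around. A rough sketch (the data-generating process and settings here are made up for illustration):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(1)
x_test = np.linspace(-3, 3, 50).reshape(-1, 1)

def prediction_sets(model_factory, n_repeats=50):
    preds = []
    for _ in range(n_repeats):
        X = rng.uniform(-3, 3, 50).reshape(-1, 1)  # a fresh training sample
        y = np.sin(X).ravel() + rng.normal(scale=0.3, size=50)
        preds.append(model_factory().fit(X, y).predict(x_test))
    return np.array(preds)

# Spread of predictions across training sets, averaged over the test points:
var_linear = prediction_sets(LinearRegression).var(axis=0).mean()
var_tree = prediction_sets(DecisionTreeRegressor).var(axis=0).mean()
print(var_linear, var_tree)  # the unpruned tree's predictions move around far more
```

The linear model produces nearly the same line every time, while the unpruned tree chases the noise in each particular sample.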

Bias-Variance Trade-Off

The goal of any supervised machine learning algorithm is to achieve low bias and low variance. In turn, the algorithm should achieve good prediction performance.

You can see a general trend in the examples above:

  • Linear machine learning algorithms often have a high bias but a low variance.
  • Nonlinear machine learning algorithms often have a low bias but a high variance.

The parameterization of machine learning algorithms is often a battle to balance out bias and variance.

Below are two examples of configuring the bias-variance trade-off for specific algorithms:

  • The k-nearest neighbors algorithm has low bias and high variance, but the trade-off can be changed by increasing the value of k, which increases the number of neighbors that contribute to the prediction and in turn increases the bias of the model.
  • The support vector machine algorithm has low bias and high variance, but the trade-off can be changed by increasing the C parameter that influences the number of violations of the margin allowed in the training data which increases the bias but decreases the variance.
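The k-nearest neighbors example above can be sketched in a few lines. Here, prediction spread across resampled training sets stands in for variance (the data, the two k values, and the use of scikit-learn are my illustrative assumptions):

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.RandomState(2)
x_test = np.linspace(0, 10, 40).reshape(-1, 1)

def prediction_spread(k, n_repeats=50):
    # Refit kNN on many training samples; measure how much predictions vary.
    preds = []
    for _ in range(n_repeats):
        X = rng.uniform(0, 10, 100).reshape(-1, 1)
        y = np.sin(X).ravel() + rng.normal(scale=0.5, size=100)
        preds.append(KNeighborsRegressor(n_neighbors=k).fit(X, y).predict(x_test))
    return np.array(preds).var(axis=0).mean()

spread_k1 = prediction_spread(1)    # k=1: low bias, high variance
spread_k25 = prediction_spread(25)  # larger k: smoother, higher bias, lower variance
print(spread_k1, spread_k25)
```

With k=1 the prediction is a single noisy neighbor, so it jumps around from sample to sample; averaging over 25 neighbors smooths that out at the cost of blurring the underlying curve.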

There is no escaping the relationship between bias and variance in machine learning.

  • Increasing the bias will decrease the variance.
  • Increasing the variance will decrease the bias.

There is a trade-off at play between these two concerns. The algorithms you choose, and the way you choose to configure them, find different balances in this trade-off for your problem.

In reality, we cannot calculate the real bias and variance error terms because we do not know the actual underlying target function. Nevertheless, as a framework, bias and variance provide the tools to understand the behavior of machine learning algorithms in the pursuit of predictive performance.
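On synthetic data, though, where we do know the true target function, the terms can be estimated directly, which makes the decomposition tangible. A minimal sketch (the true function, noise level, and model here are all illustrative choices of mine):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(3)
f = lambda x: np.sin(x)   # the (normally unknown) target function
noise = 0.3               # irreducible error: std of the added noise
x_test = np.linspace(-3, 3, 50).reshape(-1, 1)

# Fit the same algorithm on many independent training sets.
preds = []
for _ in range(200):
    X = rng.uniform(-3, 3, 50).reshape(-1, 1)
    y = f(X).ravel() + rng.normal(scale=noise, size=50)
    preds.append(DecisionTreeRegressor(max_depth=2).fit(X, y).predict(x_test))
preds = np.array(preds)

# Bias^2: how far the average prediction sits from the true function.
bias_sq = ((preds.mean(axis=0) - f(x_test).ravel()) ** 2).mean()
# Variance: how much predictions spread around that average.
variance = preds.var(axis=0).mean()
print(bias_sq, variance, noise ** 2)  # expected error ~ bias^2 + variance + noise^2
```

The shallow tree cannot follow the sine curve (nonzero bias), and its predictions shift with each training sample (nonzero variance); the noise term stays no matter what we fit.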

Further Reading

This section lists some recommended resources if you are looking to learn more about bias, variance and the bias-variance trade-off.


Summary

In this post, you discovered bias, variance and the bias-variance trade-off for machine learning algorithms.

You now know that:

  • Bias refers to the simplifying assumptions made by the model to make the target function easier to approximate.
  • Variance is the amount that the estimate of the target function will change given different training data.
  • The trade-off is the tension between the errors introduced by bias and variance.

Do you have any questions about bias, variance or the bias-variance trade-off? Leave a comment and ask your question and I will do my best to answer.

Discover How Machine Learning Algorithms Work!

Master Machine Learning Algorithms

See How Algorithms Work in Minutes

...with just arithmetic and simple examples

Discover how in my new Ebook:
Master Machine Learning Algorithms

It covers explanations and examples of 10 top algorithms, like:
Linear Regression, k-Nearest Neighbors, Support Vector Machines and much more...

Finally, Pull Back the Curtain on
Machine Learning Algorithms

Skip the Academics. Just Results.

See What's Inside

109 Responses to Gentle Introduction to the Bias-Variance Trade-Off in Machine Learning

  1. Avatar
    Akash Ansari March 28, 2016 at 9:21 pm #

    I have just read a nice and delicate blogpost about bias-variance tradeoff. Looking forward to learn more Machine Learning Algorithms in a simpler fashion.

    • Avatar
      Vishnu August 14, 2019 at 4:35 am #

      Model which overfits more likely to have high or low ____
      Bias pr varience

      • Avatar
        Akshay October 22, 2019 at 6:15 pm #

        High variance

      • Avatar
        satya May 22, 2020 at 10:37 pm #

        high variance

      • Avatar
        loco_ese July 26, 2022 at 9:49 pm #

        for overfitting models

        low biased and high variance

  2. Avatar
    R M Jain September 15, 2016 at 7:44 pm #

    Hi, I want to check bias-variance tradeoff for iris dataset.. Is anyone knows how to find it>?????????
    please tell the solution…

  3. Avatar
    R M Jain September 15, 2016 at 7:45 pm #

    Hi, I required to find using r programming functions.. please reply…

  4. Avatar
    R Karthik February 2, 2017 at 5:12 pm #

    There is a typo i guess…. High bias means more assumptions for the target function. Eg: Linear regression.

    But in the article it is specified as opposite,

    ” High-Bias: Suggests less assumptions about the form of the target function. “

  5. Avatar
    Fei Du February 25, 2017 at 12:31 pm #

    Hi, less assumptions probably mean less complex model, so I guess high-bias may suggest less complex model and less assumptions.

    • Avatar
      Jason Brownlee February 26, 2017 at 5:28 am #

      A high bias assumes a strong assumption or strong restrictions on the model.

      • Avatar
        Massi December 10, 2017 at 8:28 am #

        I agree with Fei Du

        • Avatar
          Arindam Paul April 25, 2019 at 12:28 pm #

          Fei Du and Massi are wrong. A simpler model is one with more assumptions. This is how a generalized linear model becomes linear when we remove non-linear terms. Less complex and fewer assumptions are opposite in this case. A linear model is a special case of a polynomial and thus puts more restrictions on the model.

          • Avatar
            Jason Brownlee April 25, 2019 at 2:42 pm #

            Model complexity is a sticky concept, there are many ways to measure it (e.g. freedom, capacity, parameters, etc).

            A high bias model can be complex or simple, complexity is a different axis of consideration.

  6. Avatar
    Soghra. July 6, 2017 at 4:27 am #

    This is perfect explanation. Thanks for your efforts.

  7. Avatar
    sam August 1, 2017 at 10:14 pm #

    Very simple explanation. thanks.

  8. Avatar
    Raj September 6, 2017 at 8:46 am #

    Good explanation, Thank you!

  9. Avatar
    Ashwin Agrawal October 13, 2017 at 10:12 pm #

    Nice work!!! I have one query if we decrease the variance then we observe the bias increases and vice versa, but is the rate of fall and rise of these parameters is same or constant or it is dependent of specific algorithms used. Can we tune a model based on bias-variance trade off?

    • Avatar
      Jason Brownlee October 14, 2017 at 5:45 am #

      They are tied together.

      Yes, one must tune the trade-off on a specific problem to get the right level of generalization required.

  10. Avatar
    soumya December 7, 2017 at 1:30 am #

    while designing any model which must be considered to minimize first bias or variance so as to get a better model ?

    • Avatar
      Jason Brownlee December 7, 2017 at 8:05 am #

      Perhaps start with something really high bias and slowly move toward higher variance?

  11. Avatar
    soumya December 7, 2017 at 1:35 am #

    Is there any limit or a scale to know the errors are minimum or maximum in bias and variance?

  12. Avatar
    Keval December 8, 2017 at 8:01 am #

    What are some the measures for understanding bias and variance in our model? How can we quantify bias-variance trade-off? Thanks.

    • Avatar
      Jason Brownlee December 8, 2017 at 2:28 pm #

      Good question, it may be possible for specific algorithms, such as in knn increasing k from 1 to n (number of patterns) and plotting model skill in a test set.

  13. Avatar
    Bilawal Wajid Ali January 25, 2018 at 11:21 am #


    I guess this post is too old , but still I am giving a try may be if you can answer my question it would be helpful for me

    What role exactly Bias and variance could play on the result for training data

    what is actually means by the trade-off between variance and bias
    how it could effect the approach overall


    • Avatar
      Jason Brownlee January 26, 2018 at 5:36 am #

      It is an abstract idea to help understand how machine learning algorithms work in general. The tension that exists between prior knowledge/bias and learning from data/variance.

  14. Avatar
    Manish Sihag February 14, 2018 at 11:08 am #

    Hello, I am still a little confused about this. Please help me out by reading the following situation.

    Suppose there are 5 parties standing in election and we want to make prediction beforehand about who will win. We choose 5 people from a community and ask them separately, the name of the party for which they are going to vote. Suppose all 5 of them chooses all 5 different parties. If we treat each person as a machine learning model, their answer as training data and make predictions accordingly then for all 5 models, we will end up predicting 5 different outputs. This makes it a high variance case because results are varying and high bias because we chose only a particular type of people (all are from a particular community). Next if we choose 1000 people for poll and there are suppose 200 people for each party. Now if we treat all 1000 people as model, we will have 200 predictions for each party. We can call it a low bias because we chose a larger group now and hence they are more representative of their population but this is still high variance, right? Because there are still equally varied results. Lastly, if 700 of those people chose one party (and that party actually wins) and rest 300 are distributed in other parties, is this what we call low variance? What will we call it if that party loses?

    Also let me know if I made any incorrect remarks.

    • Avatar
      Jason Brownlee February 14, 2018 at 2:42 pm #

      There would be some variance, but why high variance? If they are from the same geographical region, watch the same news, then the bias may be high but the variance low.

      Also, if there are only 2 choices, e.g. binary, then things like bias and variance don’t mean much. Perhaps you want an example such as guessing the number of coins in a jar?

      • Avatar
        Jeff Nyman June 14, 2018 at 5:44 am #

        A question on this statement. You are saying if there are only two choices, then bias and variance don’t mean much. Do you mean two choices as in the outputs? The reason I’m asking is for various sentiment analysis ideas, wherein you have two choices (outputs): 0 or 1. Basically “does not predict” or “does predict.”

        But it would seem that in such binary situations, bias could creep in. Such as in data that uses movie reviews, wherein a bias may creep in if the word “bad” was previously only used in clearly negative reviews. But then a new review comes in that says “Wanted to see this movie really bad. Not disappointed.” Here the use of “bad” is not in a negative context, but might be subject to bias.

        Am I just totally missing the point of your comment?

  15. Avatar
    Manish Sihag February 15, 2018 at 12:51 am #

    Yes, I suppose.The reason I am saying it a high variance is because high variance is the spread of predictions by different models from target output. And in this case out of 1000 only 200 will be on target and rest 800 varied. But probably I am getting it all wrong, I will think about it some more. Thanks for the quick reply.

    • Avatar
      Deb October 13, 2019 at 6:36 pm #

      Is there a situation when there is high variance and high bias

      • Avatar
        Jason Brownlee October 14, 2019 at 8:05 am #

        Not really, recall the terms are balanced.

        High bias will reduce the variance.
        High variance will reduce the bias.

  16. Avatar
    Doug March 9, 2018 at 7:17 am #

    Is this bias concept different than the bias added into neural network model y=mx + b(bias of 1)?

    • Avatar
      Jason Brownlee March 10, 2018 at 6:11 am #

      Yes, different concepts. The bias in a model is a specific manipulation of the model, the bias in the tradeoff is an abstract concept regarding model behavior.

  17. Avatar
    mars March 29, 2018 at 6:35 pm #

    Does the variance here correspond to the RMSE ?

    • Avatar
      Jason Brownlee March 30, 2018 at 6:34 am #

      No, but perhaps the variance in RMSE when the model is trained on different samples of training data (e.g. variance in skill given initial conditions).

  18. Avatar
    Bibek Subedi May 8, 2018 at 4:00 am #

    Hi Jason,

    Your explanation is clear and concise. A nice thing about you is you make complex math heavy topics simple and easier to understand.

  19. Avatar
    Igor June 6, 2018 at 2:03 am #

    Hi Jason,

    Is my understanding correct regarding the bias?

    Bias is when we assume certain things about the training data (it’s shape for example) and we choose a model accordingly. But, then we get predictions far away from the exptected values and we realise that we did a mistake in certain assumptions of our training data?

  20. Avatar
    Yongtao Ding July 20, 2018 at 5:55 am #

    Hi Jason,

    This post is clear and easy to understand. I have one question regarding your statement about how SVM manages variance issue. As you said in this article, through increasing penalty parameter C, SVM could decrease its variance.

    From my perspective, C is the penalty parameter, and it is different from the regularization lambda, through decreasing C, we could narrow the margin, and the learner could go a little underfitting, which would decrease the variance.

    Look forward to hearing from you!


    • Avatar
      Jason Brownlee July 20, 2018 at 6:21 am #

      Not sure, I need to think about it.

      Perhaps run some experiments to confirm your framing.

  21. Avatar
    Yongtao Ding July 20, 2018 at 6:02 am #

    just wanna check if my comment is submitted

  22. Avatar
    Khalid Usman November 8, 2018 at 1:14 am #

    Dear @Jason Brownlee : Thanks alot for another nice and easy tutorial.

    So from the description can i say, that Linear algorithms (Linear/Logistic Regression & LDA) will only under-fit and never face over-fitting problem, because you said that these algorithms have high bias and low variance and vice-versa for non-linear problems (decision tree, KNN, SVM)

  23. Avatar
    Thahimum Hassan December 25, 2018 at 5:13 am #

    Hi Jason,

    I am struggling to calculate the bias/discrimination of the ‘Adult Dataset’, downloaded from the UCI machine learning repository. Do you know how to calculate the discrimination using a matrix? Thanks in advance.

  24. Avatar
    Akash Dubey January 14, 2019 at 1:44 am #

    I have one question though : For a model total error is calculated as

    Total error = (Bias)^2 + Variance + Irreducible error

    Why do we take (Bias)^2 for the calculation, why not just Bias ?

    • Avatar
      Jason Brownlee January 14, 2019 at 5:30 am #

      If bias is the variance then the units are squared units. It is a notation for the units.

  25. Avatar
    Sandeep January 18, 2019 at 1:31 am #

    Hi Jason,

    Thank you very much for your post. It helped me understanding variance and Bias in model.

    How do we identify the bias and variance when we apply Random Forest or Logistic regression? I mean what type of graph should I perform to check it? What is x-axis and what is Y-axis?

    Thank you for your help on this.


    • Avatar
      Jason Brownlee January 18, 2019 at 5:43 am #

      Generally, we don’t. It is a concept to help understand the model behavior.

  26. Avatar
    Yousuf Azad March 1, 2019 at 8:35 pm #

    Feedback : Few images would have helped a lot

    • Avatar
      Jason Brownlee March 2, 2019 at 9:32 am #

      Thanks for the suggestion.

      What would you like to see images of exactly?

  27. Avatar
    e.khalili May 5, 2019 at 6:03 am #

    your page dont have any mathematical bias.
    please improve …

  28. Avatar
    Anshu September 4, 2019 at 8:17 pm #

    what are the methods to calculate bias and variance in training data set

    • Avatar
      Jason Brownlee September 5, 2019 at 6:53 am #

      It is a theoretical concept for understanding the challenge of predictive modeling.

  29. Avatar
    Ishan Shrivastava September 13, 2019 at 5:37 am #

    Hi Jason,
    You are really awesome and your blogs and tutorials have really helped me a lot through my journey in this field. I would be grateful if you can clarify a doubt that a got reading through this blog. You mention how increasing C parameter in SVM would lead to an increase in the Bias and decrease in the variance. This is what is not absolutely clear to me.

    I will try to present my understanding to explain why I am confused and would love it if you can correct/clarify my understanding.

    I believe the C parameter suggests how much we want to avoid misclassifying each training example. Therefore, a large C value will force the optimization to choose a smaller margin. A smaller margin would then lead to overfitting, resulting in a Low bias and a high variance.

    Based on this understanding of the C parameter that I have, I am finding it hard to understand when you suggest that increasing the C parameter would lead to an increase in the Bias and decrease in the variance. I am sure I must be missing something and therefore I would love it if you can provide me with the missing pieces and help me understand this well.

    Thanks again. Hoping to hear from you soon,

    • Avatar
      Jason Brownlee September 13, 2019 at 5:48 am #


      No, larger C is more bias – more constraint – more overfitting – more prior.

      Does that help?

  30. Avatar
    Alex October 25, 2019 at 1:06 am #

    How is KNN a parametric model? It doesn’t assume any prior on the data? Or you mean parametric in the sense of fixed number of parameters?

    • Avatar
      Jason Brownlee October 25, 2019 at 6:41 am #

      Thanks. I probably meant non-linear. I will update it.

      UPDATED. Thanks again.

  31. Avatar
    Lavanya November 11, 2019 at 7:11 pm #

    How bias and variance will effect on Number of hidden neurons in a neural network?

    • Avatar
      Jason Brownlee November 12, 2019 at 6:36 am #

      More nodes is probably a larger bias, lower variance.

      • Avatar
        Todd Phillips January 18, 2020 at 11:05 am #

        Could you please clarify this answer? My understanding was that more nodes -> more model flexibility / fewer model assumptions about target function -> higher dependence of parameters on training data -> higher variance, lower bias. Could you point out the flaw in my reasoning?

  32. Avatar
    Amisha November 30, 2019 at 6:14 pm #

    Great post, very simplistic yet informative. Thanks 🙂

  33. Avatar
    Ashwani Varshney April 1, 2020 at 12:41 am #

    I am trying to fit a linear model on a non-linear function. For Example I have chosen x^3+x^2-5 as my non-linear function and I am fitting linear regression model. I have created ensembles of different sizes let say 10,20,30,……100. So my graph is against the ensemble size. So the bias is decreasing and variance is increasing as the ensemble size is increasing. What can I interpret from this about the function which I took.

    • Avatar
      Jason Brownlee April 1, 2020 at 5:52 am #

      Perhaps a linear model is inappropriate for your quadratic function?

  34. Avatar
    Naeem Naseer June 27, 2020 at 7:42 pm #

    In machine learning data are statistically independent mean that ..??
    A) Higher Variance B) High correlation C) Low correlation D) Lower Bias

    • Avatar
      Jason Brownlee June 28, 2020 at 5:47 am #

      Not sure those concepts are connected directly are they?

  35. Avatar
    M Nouman September 10, 2020 at 5:34 pm #

    overfitting underfitting, and bias-variance trade-off Are these two the same concept or is there any difference between them.

  36. Avatar
    sandeep September 21, 2020 at 4:53 pm #

    Increasing Training points (input data) reduces variance without any effect on bias.

    So how is the below relationship valid :

    Increasing the bias will decrease the variance.
    Increasing the variance will decrease the bias.

    • Avatar
      Jason Brownlee September 22, 2020 at 6:42 am #

      Not sure I agree with your premise.

      Yes, it is easy to decrease one by increasing the other, but this is not the only dynamic.

  37. Avatar
    sandeep September 21, 2020 at 4:56 pm #

    in K-fold validation , increasing k does it increase error estimates variance or decrease model variance. Why do we say LOOCV has high variance and not better than k-fold

    • Avatar
      Jason Brownlee September 22, 2020 at 6:42 am #

      I don’t agree with your statement.

      Also, generally LOOCV has lower variance than higher k values.

  38. Avatar
    Harshit April 14, 2021 at 3:35 am #

    “Generally, nonlinear machine learning algorithms that have a lot of flexibility have a high variance. For example, decision trees have a high variance, that is even higher if the trees are not pruned before use.”

    This is mentioned in the 3rd section. Isnt is supposed to be that a algorithm that has less flexibility will have high variance because any model which has adapted well to a training set will perform bad on another set(overfitting basically) so in turn will have less flexibility.

    • Avatar
      Jason Brownlee April 14, 2021 at 6:28 am #

      An algorithm with less flexibility will have a high bias and a low variance.

  39. Avatar
    Brindha Sivashanmugam May 15, 2021 at 5:49 am #

    Simple and great explanation. Thank you for such a nice article. Your posts are very detailed and informative.

  40. Avatar
    Pranav Jain October 20, 2021 at 3:07 pm #

    Thank you for article

  41. Avatar
    Benjamin December 2, 2021 at 1:00 am #

    Hi Jason,

    would you say the following statement is true: Regularization decreases variance at the cost of introducing bias?

    Regularization allows to learn a pattern from a training data set that is too small or noisy for the intended model. So I apply regularization, to decrease the fit of the model to the training data (higher bias) but I aim to gain higher generalizablity by not overfitting (lower variance, i.e. the model is more hesitant to follow patterns that might just be noise)


  42. Avatar
    Paolo December 12, 2021 at 11:45 pm #

    Hi Jason, thank you so much for this article. One thing that is not clear to me is the fact that Bias decrease as Variance increase.

    I came up with an example in my head and I would love to hear your opinion about it:
    If for example my data came from Y = f(X) + e where f : x^2, I choose to model it by using polynomial regression and now I have to choose the degree of the polynomials K.

    In my mind I think that a even value of K, will better fit my data because a polynomial of even degree has the same concavity of a quadratic function and so the behavior at the extreme is the same (I dont know if its the right term concavity, what I mean is that the behavior for x-> +inf and x->-inf is the same for even polynomials), and so I think they better fit my original function f(x) = x^2.
    If my example is correct, in my mind I think that a polynomial of degree K=4 will have less Bias than a polynomial of degree K =3 , while having more complexity and (I think) more Variance

    • Avatar
      Adrian Tam December 15, 2021 at 5:57 am #

      Not really. Because K=4 is a superset of K=3. You may end up finding Ax^4+Bx^3+Cx^2+Dx+E with the A very close to zero.

      • Avatar
        Paolo December 16, 2021 at 3:59 am #

        Ok Adrian I think now I got it thanks to your comment: what we mean by increasing complexity of the model is by adding more parameters to the previous version of our model (in my example, increasing the degree of the polynomial, i.e. adding more term to our polynomial function), in this way we always create a super model that contain all the previous one. In this way the Bias can only decrease as we add more term to the model, since we always “have” the previous version of the models incorporated in the new model (in my example, we have as possibilities all the polynomials with degree lower than K).
        Thank you so much Adrian

        • Avatar
          Adrian Tam December 17, 2021 at 7:14 am #

          You’re welcomed.

  43. Avatar
    Evan March 25, 2022 at 4:35 pm #

    One quick clarification question. Does the bias in this context mean: (a) For a prediction model and a training dataset, the mean of the prediction errors (where we allow the positives and negatives to cancel out, of course) over all samples in the data (b) The mean over the mean errors of the prediction model produced in (a) with many different training datasets, or both?

    I believe the analogy applies to variance as well (Correct me if I am wrong)

  44. Avatar
    Saumya Mundra April 17, 2022 at 7:26 am #

    ‘Variance is the amount that the estimate of the target function will change if different training data was used.’ In the given statement, what does ‘different training set’ mean ? Can the different training sets have different features or is it just a different set of rows from the previously(originally) selected features.

    • Avatar
      James Carmichael April 17, 2022 at 7:47 am #

      Hi Saumya…A different training set could be either of the scenarios you mentioned and in fact it would be beneficial if training were done on such a variation.

  45. Avatar
    Surender Singh April 30, 2022 at 1:34 am #

    There are three cases:
    1. Overfitting
    Training loss- near 0 for every set of train test split
    Test loss- high variance for different train test split
    Bias- is usually tied with training loss-, if training loss is less then it is a case of low bias as per the definition of bias. So in overfitting bias is low
    Variance- high

    2. Underfitting
    Training loss- high for every set of train test split
    Test loss- high for every set of train test split
    Bias- high
    Variance- low because test loss is high for different set

    3. Balance fit

    Training loss- low for every set of train test split
    Test loss- low for every set of train test split
    Bias- low
    Variance- low because test loss is low for different set

    So from above discussion it seems that bias and variance not quite opposite to each other but they are related to somewhat due to training and test data relationship.

    • Avatar
      James Carmichael April 30, 2022 at 10:17 am #

      Great feedback Surender!

  46. Avatar
    ASHWIN SHETGAONKAR June 20, 2022 at 1:11 pm #

    for svm c is inversely proportional to the strength of regularization.

  47. Avatar
    sam July 23, 2022 at 7:02 am #

    Discuss bias-variance tradeoffs in (i) Centroid-based classifier, (ii) kNN
    classifier, and (iii) SVM classifier in terms of the nature of class separation
    boundaries and the data points in the training set that can effect them.

  48. Avatar
    sam July 23, 2022 at 7:02 am #

    Discuss bias-variance tradeoffs in (i) Centroid-based classifier, (ii) kNN
    classifier, and (iii) SVM classifier in terms of the nature of class separation
    boundaries and the data points in the training set that can effect them.
    please answer this question?

  49. Avatar
    Akash October 10, 2022 at 8:59 pm #

    H james,

    This might be a stupid question but it’s struck the back of my mind for so long, so here goes. Does the variance in bias-variance trade-off and variance in statistics (which measures variability from the average or mean ) are different? Greatly appreciate your help.

  50. Avatar
    vivek February 18, 2023 at 4:23 am #


    • Avatar
      James Carmichael February 18, 2023 at 9:18 am #

      You are very welcome vivek!
