A Gentle Introduction to k-fold Cross-Validation

Cross-validation is a statistical method used to estimate the skill of machine learning models.

It is commonly used in applied machine learning to compare and select a model for a given predictive modeling problem because it is easy to understand, easy to implement, and results in skill estimates that generally have a lower bias than other methods.

In this tutorial, you will discover a gentle introduction to the k-fold cross-validation procedure for estimating the skill of machine learning models.

After completing this tutorial, you will know:

  • That k-fold cross validation is a procedure used to estimate the skill of the model on new data.
  • There are common tactics that you can use to select the value of k for your dataset.
  • There are commonly used variations on cross-validation, such as stratified and repeated, that are available in scikit-learn.

Discover statistical hypothesis testing, resampling methods, estimation statistics and nonparametric methods in my new book, with 29 step-by-step tutorials and full source code.

Let’s get started.

A Gentle Introduction to k-fold Cross-Validation
Photo by Jon Baldock, some rights reserved.

Tutorial Overview

This tutorial is divided into 5 parts; they are:

  1. k-Fold Cross-Validation
  2. Configuration of k
  3. Worked Example
  4. Cross-Validation API
  5. Variations on Cross-Validation

Need help with Statistics for Machine Learning?

Take my free 7-day email crash course now (with sample code).

Click to sign-up and also get a free PDF Ebook version of the course.

k-Fold Cross-Validation

Cross-validation is a resampling procedure used to evaluate machine learning models on a limited data sample.

The procedure has a single parameter called k that refers to the number of groups that a given data sample is to be split into. As such, the procedure is often called k-fold cross-validation. When a specific value for k is chosen, it may be used in place of k in the name of the procedure, such as k=10 becoming 10-fold cross-validation.

Cross-validation is primarily used in applied machine learning to estimate the skill of a machine learning model on unseen data. That is, to use a limited sample in order to estimate how the model is expected to perform in general when used to make predictions on data not used during the training of the model.

It is a popular method because it is simple to understand and because it generally results in a less biased or less optimistic estimate of the model skill than other methods, such as a simple train/test split.

The general procedure is as follows:

  1. Shuffle the dataset randomly.
  2. Split the dataset into k groups
  3. For each unique group:
    1. Take the group as a hold out or test data set
    2. Take the remaining groups as a training data set
    3. Fit a model on the training set and evaluate it on the test set
    4. Retain the evaluation score and discard the model
  4. Summarize the skill of the model using the sample of model evaluation scores

Importantly, each observation in the data sample is assigned to an individual group and stays in that group for the duration of the procedure. This means that each sample is given the opportunity to be used in the hold out set 1 time and used to train the model k-1 times.
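As a rough sketch, the procedure above can be expressed in a few lines of Python. This is illustrative only: the data values, the choice of k, and the evaluate_model() helper are all hypothetical placeholders.

# illustrative sketch of the general k-fold procedure (not the library implementation)
from numpy import array
from numpy.random import seed, shuffle

def evaluate_model(train, test):
    # hypothetical placeholder: fit a model on train and return its skill score on test
    return 0.0

seed(1)
data = array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6])    # made-up data sample
k = 3
shuffle(data)                                   # 1. shuffle the dataset randomly
folds = [data[i::k] for i in range(k)]          # 2. split the dataset into k groups
scores = list()
for i in range(k):                              # 3. for each unique group
    test = folds[i]                             # take the group as the hold out set
    train = [x for j in range(k) if j != i for x in folds[j]]   # remaining groups form the training set
    scores.append(evaluate_model(train, test))  # fit, evaluate, retain the score, discard the model
print('Mean skill: %.3f' % (sum(scores) / len(scores)))   # 4. summarize the scores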

This approach involves randomly dividing the set of observations into k groups, or folds, of approximately equal size. The first fold is treated as a validation set, and the method is fit on the remaining k − 1 folds.

— Page 181, An Introduction to Statistical Learning, 2013.

It is also important that any preparation of the data prior to fitting the model occur on the CV-assigned training dataset within the loop rather than on the broader data set. This also applies to any tuning of hyperparameters. A failure to perform these operations within the loop may result in data leakage and an optimistic estimate of the model skill.
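For example, in scikit-learn any data scaling can be bound to the model in a Pipeline so that the transform is fit on the training folds only within each split. A minimal sketch, using a synthetic dataset purely for illustration:

# sketch: keep data preparation inside the cross-validation loop via a Pipeline
from sklearn.datasets import make_classification
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

X, y = make_classification(n_samples=100, random_state=1)   # synthetic data for illustration
# the scaler is re-fit on the training folds of each split, avoiding leakage from the test fold
pipeline = Pipeline([('scale', StandardScaler()), ('model', LogisticRegression())])
cv = KFold(n_splits=10, shuffle=True, random_state=1)
scores = cross_val_score(pipeline, X, y, scoring='accuracy', cv=cv)
print('Accuracy: %.3f (%.3f)' % (scores.mean(), scores.std()))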

Despite the best efforts of statistical methodologists, users frequently invalidate their results by inadvertently peeking at the test data.

— Page 708, Artificial Intelligence: A Modern Approach (3rd Edition), 2009.

The results of a k-fold cross-validation run are often summarized with the mean of the model skill scores. It is also good practice to include a measure of the variance of the skill scores, such as the standard deviation or standard error.

Configuration of k

The k value must be chosen carefully for your data sample.

A poorly chosen value for k may result in a misrepresentative idea of the skill of the model, such as a score with a high variance (that may change a lot based on the data used to fit the model) or a high bias (such as an overestimate of the skill of the model).

Three common tactics for choosing a value for k are as follows:

  • Representative: The value for k is chosen such that each train/test group of data samples is large enough to be statistically representative of the broader dataset.
  • k=10: The value for k is fixed to 10, a value that has been found through experimentation to generally result in a model skill estimate with low bias and a modest variance.
  • k=n: The value for k is fixed to n, where n is the size of the dataset, to give each test sample an opportunity to be used in the hold out dataset. This approach is called leave-one-out cross-validation.

The choice of k is usually 5 or 10, but there is no formal rule. As k gets larger, the difference in size between the training set and the resampling subsets gets smaller. As this difference decreases, the bias of the technique becomes smaller

— Page 70, Applied Predictive Modeling, 2013.

A value of k=10 is very common in the field of applied machine learning, and is recommended if you are struggling to choose a value for your dataset.

To summarize, there is a bias-variance trade-off associated with the choice of k in k-fold cross-validation. Typically, given these considerations, one performs k-fold cross-validation using k = 5 or k = 10, as these values have been shown empirically to yield test error rate estimates that suffer neither from excessively high bias nor from very high variance.

— Page 184, An Introduction to Statistical Learning, 2013.

If a value for k is chosen that does not evenly split the data sample, then one group will contain a remainder of the examples. It is preferable to split the data sample into k groups with the same number of samples, so that the sample of model skill scores is comparable across folds.

Worked Example

To make the cross-validation procedure concrete, let’s look at a worked example.

Imagine we have a data sample with 6 observations:
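For illustration, the sample might look as follows (the values are made up):

data = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]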

The first step is to pick a value for k in order to determine the number of folds used to split the data. Here, we will use a value of k=3. That means we will shuffle the data and then split the data into 3 groups. Because we have 6 observations, each group will have an equal number of 2 observations.

For example:
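One possible assignment of the shuffled observations to 3 folds of 2 observations each (the grouping shown is illustrative and depends on the shuffle):

Fold1: [0.5, 0.2]
Fold2: [0.1, 0.3]
Fold3: [0.4, 0.6]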

We can then make use of the sample, such as to evaluate the skill of a machine learning algorithm.

Three models are trained and evaluated with each fold given a chance to be the held out test set.

For example:

  • Model1: Trained on Fold1 + Fold2, Tested on Fold3
  • Model2: Trained on Fold2 + Fold3, Tested on Fold1
  • Model3: Trained on Fold1 + Fold3, Tested on Fold2

The models are then discarded after they are evaluated as they have served their purpose.

The skill scores are collected for each model and summarized for use.

Cross-Validation API

We do not have to implement k-fold cross-validation manually. The scikit-learn library provides an implementation that will split a given data sample up.

The KFold() scikit-learn class can be used. It takes as arguments the number of splits, whether or not to shuffle the sample, and the seed for the pseudorandom number generator used prior to the shuffle.

For example, we can create an instance that splits a dataset into 3 folds, shuffles prior to the split, and uses a value of 1 for the pseudorandom number generator.
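A minimal sketch of creating such an instance:

from sklearn.model_selection import KFold
# 3 splits, shuffled, with a fixed random seed of 1
kfold = KFold(n_splits=3, shuffle=True, random_state=1)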

The split() function can then be called on the instance, with the data sample provided as an argument. Called repeatedly, it will return each group of train and test sets. Specifically, arrays are returned containing the indexes into the original data sample of the observations to use for the train and test sets on each iteration.

For example, we can enumerate the splits of the indices for a data sample using the created KFold instance as follows:
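A sketch, assuming data is a NumPy array as in the worked example:

# each iteration yields arrays of row indexes for the train and test sets
for train_ix, test_ix in kfold.split(data):
    print('train: %s, test: %s' % (data[train_ix], data[test_ix]))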

We can tie all of this together with our small dataset used in the worked example of the prior section.
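A complete sketch, using the same made-up six-observation sample:

# scikit-learn k-fold cross-validation on a small contrived data sample
from numpy import array
from sklearn.model_selection import KFold
# made-up data sample
data = array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6])
# prepare the cross-validation procedure
kfold = KFold(n_splits=3, shuffle=True, random_state=1)
# enumerate and print each split
for train_ix, test_ix in kfold.split(data):
    print('train: %s, test: %s' % (data[train_ix], data[test_ix]))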

Running the example prints the specific observations chosen for each train and test set. The indices are used directly on the original data array to retrieve the observation values.

Usefully, the k-fold cross validation implementation in scikit-learn is provided as a component operation within broader methods, such as grid-searching model hyperparameters and scoring a model on a dataset.

Nevertheless, the KFold class can be used directly in order to split up a dataset prior to modeling such that all models will use the same data splits. This is especially helpful if you are working with very large data samples. The use of the same splits across algorithms can have benefits for statistical tests that you may wish to perform on the data later.
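For example, the same KFold object can be passed via the cv argument of cross_val_score() so that two different algorithms are scored on identical splits. A sketch, with synthetic data and arbitrary models chosen only for illustration:

# score two models on the same cross-validation splits
from sklearn.datasets import make_classification
from sklearn.model_selection import KFold, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=100, random_state=1)
cv = KFold(n_splits=10, shuffle=True, random_state=1)
scores_lr = cross_val_score(LogisticRegression(), X, y, scoring='accuracy', cv=cv)
scores_dt = cross_val_score(DecisionTreeClassifier(), X, y, scoring='accuracy', cv=cv)
# the per-fold scores are paired on identical splits, which helps later statistical comparison
print('LR: %.3f, DT: %.3f' % (scores_lr.mean(), scores_dt.mean()))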

Variations on Cross-Validation

There are a number of variations on the k-fold cross validation procedure.

Three commonly used variations are as follows:

  • Train/Test Split: Taken to one extreme, k may be set to 2 (not 1) such that a single train/test split is created to evaluate the model.
  • LOOCV: Taken to another extreme, k may be set to the total number of observations in the dataset such that each observation is given a chance to be held out of the dataset. This is called leave-one-out cross-validation, or LOOCV for short.
  • Stratified: The splitting of data into folds may be governed by criteria such as ensuring that each fold has the same proportion of observations with a given categorical value, such as the class outcome value. This is called stratified cross-validation.
  • Repeated: This is where the k-fold cross-validation procedure is repeated n times, where importantly, the data sample is shuffled prior to each repetition, which results in a different split of the sample.

The scikit-learn library provides a suite of cross-validation implementations. You can see the full list in the Model Selection API.
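For example, a sketch of how three of the variations above are created in scikit-learn (the class names are real; the configuration values are illustrative):

from sklearn.model_selection import LeaveOneOut, StratifiedKFold, RepeatedKFold

loocv = LeaveOneOut()                                                     # leave-one-out cross-validation
stratified = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)   # preserves class proportions in each fold
repeated = RepeatedKFold(n_splits=10, n_repeats=3, random_state=1)        # repeats 10-fold CV 3 times with different splits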

Extensions

This section lists some ideas for extending the tutorial that you may wish to explore.

  • Find 3 machine learning research papers that use a value of 10 for k-fold cross-validation.
  • Write your own function to split a data sample using k-fold cross-validation.
  • Develop examples to demonstrate each of the main types of cross-validation supported by scikit-learn.

If you explore any of these extensions, I’d love to know.

Further Reading

This section provides more resources on the topic if you are looking to go deeper.

Posts

Books

API

Articles

Summary

In this tutorial, you discovered a gentle introduction to the k-fold cross-validation procedure for estimating the skill of machine learning models.

Specifically, you learned:

  • That k-fold cross validation is a procedure used to estimate the skill of the model on new data.
  • There are common tactics that you can use to select the value of k for your dataset.
  • There are commonly used variations on cross-validation, such as stratified and repeated, that are available in scikit-learn.

Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.

Get a Handle on Statistics for Machine Learning!

Statistical Methods for Machine Learning

Develop a working understanding of statistics

…by writing lines of code in Python

Discover how in my new Ebook:
Statistical Methods for Machine Learning

It provides self-study tutorials on topics like:
Hypothesis Tests, Correlation, Nonparametric Stats, Resampling, and much more…

Discover how to Transform Data into Knowledge

Skip the Academics. Just Results.

Click to learn more.

103 Responses to A Gentle Introduction to k-fold Cross-Validation

  1. Kristian Lunow Nielsen May 25, 2018 at 4:30 pm #

    Hi Jason

    Nice gentle tutorial you have made there!
    I have a more technical question; Can you comment on why the error estimate obtained through k-fold-cross-validation is almost unbiased? with an emphasis on why.

    I have had a hard time finding literature describing why.
    It is my understanding that everyone comments on the bias/variance trade-off when asked about the almost unbiased feature of k-fold-cross-validation.

    • Jason Brownlee May 26, 2018 at 5:48 am #

      Thanks.

      Good question.

      We repeat the model evaluation process multiple times (instead of one time) and calculate the mean skill. The mean estimate of any parameter is less biased than a one-shot estimate. There is still some bias though.

      The cost is we get variance on this estimate, so it’s good to report both mean and variance or mean and stdev of the score.

  2. Vladislav Gladkikh May 25, 2018 at 7:25 pm #

    Another possible extension: stratified cross-validation for regression. It is not directly implemented in Scikit-learn, and there is discussion if it worth implementing or not: https://github.com/scikit-learn/scikit-learn/issues/4757 but this is exactly what I need in my work. I do it like this:

    • Vladislav Gladkikh May 25, 2018 at 7:26 pm #

      How to make code formatting here?

      • Jason Brownlee May 26, 2018 at 5:53 am #

        You can use PRE HTML tags. I formatted your code for you.

    • Jason Brownlee May 26, 2018 at 5:53 am #

      Thanks for sharing!

  3. hayet May 29, 2018 at 10:46 pm #

Should k-fold cross-validation be used in deep learning?

    • Jason Brownlee May 30, 2018 at 6:44 am #

      It can be for small networks/datasets.

      Often it is too slow.

  4. Chan June 8, 2018 at 9:45 pm #

    Dear Jason,

    Thanks for this insight ,especially the worked example section. It’s very helpful to understand the fundamentals. However, I have a basic question which I didn’t understand completely.
    If we throw away all the models that we learn from every group (3 models in your example shown), what would be the final model to predict unseen /test data?

    Is it something like:

    We are using cross-validation only to choose the right hyper-parameter for a model? say K for KNN.
    1. We fix a value of K;train and cross-validate to get three different models with different parameters (/coefficients like Y=3x+2; Y=2x+3; Y=2.5X+3 = just some random values)
    2. Every model has its own error rate. Average them out to get a mean error rate for that hyper-parameter setup / values
    3. Try with other values of Hyper-parameters (step 1 and 2 repetitively for all set of hyper-parameter values)

    4. Choose the hyper-parameter set with the least average error
    5. Train the whole training data set (without any validation split this time) with new value of hyper-parameter and get the new model [Y=2.75X+2.5 for eg.,]
    6. Use this as a model to predict the new / unseen / test data. Loss value would be the final error from this model

    Is this the way? or May be I understood it completely wrong.

    Sorry for this naive question as I’m quite new or just a starter. Thanks for your understanding 🙂

  5. teja_chebrole June 21, 2018 at 9:40 pm #

    awesome article..very useful…

  6. M.sarat chandra July 7, 2018 at 5:32 pm #

    If LOOCV is done, it increases the size of k as the dataset increases in size. What would you say about this?
    When should LOOCV be used on data? What is the use of the pseudorandom number generator?

    • Jason Brownlee July 8, 2018 at 6:17 am #

      In turn it increases the number of models to fit and the time it will take to evaluate.

      The choice of random numbers does not matter as long as you are consistent in your experiment.

  7. marison July 10, 2018 at 4:20 pm #

    hi,

    1. can u plz provide me a code for implementing the k-fold cross validation in R ?

    2. do we have to do cross validation on complete data set or only on the training dataset after splitting into training and testing dataset?

  8. Zhian July 16, 2018 at 7:36 pm #

    Hello,

    Thank you for the great tutorial. I have one question regarding the cross validation for the data sets of dynamic processes. How one could do cross validation in this case? Assume we have 10 experiments where the state of the system is the quantity which is changing in time (initial value problem). I am not sure here one should shuffle the data or not. Shall I take the whole one experiment as a set for cross validation or choose a part of every experiment for that purpose? every experiment contain different features which control the state of the system. When I want to validate I would like to to take the initial state of the system and with the vector of features to propagate the state in time. This is exactly what I need in practice.

    Could you please provide me your comments on that. I hope I am clear about my issue.
    Thanks.

  9. Tamara August 8, 2018 at 5:29 am #

    Hi Jason,
    Firstly, your tutorials are excellent and very helpful. Thank you so much!
    I have a question related to the use of k-fold cross-validation (k-fold CV) in testing the validity of a neural network model (how well it performs for new data). I’m afraid there is some confusion in this field as k-fold CV appears to be required for justifying any results.
    So far I understand we can use k-fold CV to find optimal parameters while defining the network (as accuracy for train and test data will tell when it is over or under fitting) and we can make the choices that ensure good performance. Once we have made these choices we can run the algorithm on the entire training data and generate a model. This model then has to be tested on new data (validation set and training set). My question is: on how many new data sets does this model have to be tested in order to be considered useful?
    Since we have a model, using k-fold CV again does not help (we do not look for a new model). In my understanding the k-fold CV testing is mainly for the algorithm/method optimization, while the final model should only be tested on new data. Is this correct? If so, should I split the test data into smaller sets and use these as multiple tests, or is using just the one test data set enough?

    Many thanks,
    Tamara

  10. ashish August 14, 2018 at 7:21 pm #

    Hi jason , thanks for a nice blog

    my dataset size is 6000 (image data). how do we know which type of cross validation should use (simply train test split or k- fold cross validation) .

  11. Carlos August 16, 2018 at 2:46 am #

    Good morning!

    I am an Economics student at the University of São Paulo and I am researching Backtesting, Stress Testing and Validation Models for Credit Risk. Thus, would you help me by answering some questions? I am researching how to create a good procedure to validate prediction models that try to forecast the default behavior of agents. Thereby, suppose a log-odds logit model of Default Probability that uses some explanatory variables such as GDP, Official Interest Rates, etc. In order to evaluate it, I calculate the stability and the backtesting, using part of my data not used in the estimation for this purpose. In the backtesting case, I use a forecast, based on the regression of relevant variables, to perceive if my model corresponds to the forecast, with a confidence interval to evaluate if they are in or out. Furthermore, I evaluate the sign of the parameters to verify if it is behaving according to economic sense.
    After reading some papers, including your publication here and a Basel one (“Sound Practices for Backtesting Counterparty Credit Risk Models”), I have some doubts.

    1) Does a standard backtesting procedure deal completely with the overfitting issue? If not, what are the recommendations to solve it?
    2) What are the issues not covered by a standard backtesting procedure, to which we should pay attention using other metrics to deal with them?
    3) Could you indicate some paper or document that explains Back-pricing, a concept introduced by “Sound Practices for Backtesting Counterparty Credit Risk Models”? I have not found another document and I did not understand their explanation.
    “A bank can carry out additional validation work to support the quality of its models by carrying out back-pricing. Back-pricing, which is similar to backtesting, is a quantitative comparison of model predictions with realizations, but based on re-running current models on historical market data. In order to make meaningful statements about the performance of the model, the historical data need to be divided into distinct calibration and verification data sets for each initialization date, with the model calibrated using the calibration data set before the initialization date and the forecasts after initialization tested on the verification data sets. This type of analysis helps to inform the effectiveness of model remediation, ie by demonstrating that a change to the model made in light of recent experience would have improved past and present performance. An appropriate back-pricing allows extending the backtesting data set into the past.”

    Thus, I appreciate your attention and help.

    The best regards.

  12. Scott Miller September 6, 2018 at 11:48 pm #

    Hi Jason, I’m using k-fold with regularized linear regression (Ridge) with the objective to determine the optimal regularization parameter.

    For each regularization parameter, I do k-fold CV to compute the CV error.

    I then select the regularization parameter that achieves the lowest CV error.

    However, in k-fold when I use ‘shuffle=True’ AND no ‘random_state’ in k-fold, the optimal regularization parameter changes each time I run the program.

    kf=KFold(n_splits=n_kfolds, shuffle=True)

    If I use a random state or ‘shuffle = False’, the results are always the same.

    Question: Do you feel this is normal behavior and any recommendations.

    note: Predictions are really good, just looking for general discussion.

    Thanks.

    • Jason Brownlee September 7, 2018 at 8:06 am #

      Yes, it might be a good idea to repeat each experiment to counter the variance of the model.

      Going even one step further, you might even want to use statistical tests to help determine whether “better” is real or noise. I have tutorials on this under the topic of statistics I believe.

  13. Pascal Schmidt October 4, 2018 at 1:35 pm #

    Hi Jason,

    thank you for the great tutorial. It helped me a lot to understand cross-validation better.
    There is one concept I am still unsure about and I was hoping you could answer this for me please.

    When I do feature selection before cross validation then my error will be biased because I chose the features based on training and testing set (data leakage). Therefore, I believe I have to do feature selection inside the cross validation loop with only the training data and then test my model on the test data.

    So my question is when I end up with different predictors for the different folds, should I choose the predictors that occured the majority of the time? And after that, should I do cross validation for this model with the same predictors? So, do k-fold cv with my final model where every predictor is the same for the different folds? And then use this estimate to be my cv error?

    It would be really great if you could help me out. Thanks again for the article and keep up the great work.

    • Jason Brownlee October 4, 2018 at 3:30 pm #

      Thanks.

      Correct. Yes, you will get different features, and perhaps you can take the average across the findings from each fold.

      Alternately, you can use one hold out dataset to choose features, and a separate set for estimating model performance/tuning.

      It comes down to how much data you have to “spend” and how much leakage/bias you can handle. We almost never have enough data to be pure.

      • Pascal Schmidt October 6, 2018 at 3:32 am #

        Thanks, Jason. I guess statistics is not as black and white as a discipline like mathematics. A lot of different ways to deal with problems and no one best solution exists. This makes it so challenging I feel. A lot of experience is required to deal with all these unique data sets.

        • Jason Brownlee October 6, 2018 at 5:50 am #

          Yes, the best way to get good is to practice, like programming, driving, and everything else we want to do in life.

  14. Bilal October 16, 2018 at 6:16 pm #

    for which purpose we calculate the standard deviation from any data set.

  15. Leontine Ham October 16, 2018 at 9:21 pm #

    Thank you for explaining the fundamentals of CV.
    I am working with repeated (50x) 5-fold cross validation, but I am trying to figure out which statistical test I can use in order to compare two datasets. Can you help me? Or is that out of the scope of this blog?

  16. kingshuk October 22, 2018 at 1:27 am #

    Hi Jason ,

    What is the difference between Kfold and Stratified K fold?

    • Jason Brownlee October 22, 2018 at 6:21 am #

      KFold uses a random split of the data into k folds.
      Stratified tries to maintain the same distribution of the target variable when randomly selecting examples for each fold.

  17. Rana Muhammad Kashif December 5, 2018 at 3:30 pm #

    Thanks for this post!

    Can we split the data by ourselves and then train some data and test the remaining?
    For example, my data is on cricket and i want to train the data based on two splits i.e. 0-6 overs and 7-15 overs, and test the 16-20 overs data in a 20 overs match. Is it rational? If yes how can we do this within R?

  18. Ruslan December 5, 2018 at 10:19 pm #

    Hi Jason! Good article!

    What should we do when not all parts are equal? Say we have 5 5 5 5 6 or 7 7 7 8 or 9 9 9 9 8

    Should we skip the biggest/least one? Should we apply weighting somehow? Do the same as if it had the same size?

    Thank you.

    • Jason Brownlee December 6, 2018 at 5:55 am #

      Try to make each fold equal, but if they are mostly equal, that is okay.

  19. Jason Quadras January 17, 2019 at 1:08 am #

    Very Good article. Simple and easy to understand!

  20. Rose January 17, 2019 at 3:44 pm #

    Hi Jason
    Thanks for this post !
    How to evaluate the overall accuracy of learning classifiers in K folds cross validation ?
    I think that
    Accuracy = (sum of accuracy in each folds )/K;
    This is true or false ?

    • Jason Brownlee January 18, 2019 at 5:28 am #

      Yes, the average of the accuracy scores of the model as calculated across the test folds.

  21. Oscar January 22, 2019 at 3:05 am #

    Hello Jason,

    One of the best tutorials on CV that I have found. But there is still something I don’t get. What is the point of doing all this if in the end you just discard the models? I’ve been having a lot of problems with this, because I find different information in different places:

    * In some tutorials, it is said that you use always the same model for training and validation iteratively, keeping a test set independent for when you finish training with CV, so you can check if your model is good.
    * In other tutorials, it is said that you create one independent model on each iteration, and then you keep the one that gave you the best test results. But if this is the case, then why would I want to calculate the average of the accuracy scores, if I only care about the best one.

    Hope you can help me, I am really having some trouble with all of this.

  22. Iman February 28, 2019 at 12:17 pm #

    I have question on selecting data when it comes to multiple linear regression in the form, y = B0 + B1X1 +B2X2
    Say,
    Y (response) = dataset 0 (i.e 3,4,5,6,7,8)
    X1 (predictor)= dataset 1 (i.e 1,5,7,9,4,5)
    X2 (predictor) = dataset 2 (i.e. 7,4,6,-2,1,3)

    Do you take all the data into account and divide into k groups,
    Ie [3,4],[5,6],[7,8],[1,5],[7,9],[4,5],[7,4],[6,-2],[1,3]

    Or just one dataset at time, such as,
    Y and corresponding values x1
    I.e [3,4] to [1,5] …..
    Y and corresponding values x2

    Or is it some other way you select the data?
    Thanks

  23. Vandana March 6, 2019 at 9:18 pm #

    Your articles are the best. Every time I have a doubt machinelearningmastery solves it for me. Thanks a lot 🙂

  24. heldie March 7, 2019 at 7:51 pm #

    Good explanation sir, ty 🙂 I have some clarity missing regarding the application of K-Fold CV for finding how many knots to use, and where to place the knots, in the case of piecewise polynomials / Regression Splines. Can u pls explain.

    • Jason Brownlee March 8, 2019 at 7:47 am #

      Sorry, I don’t have a tutorial on “regression splines”.

      • heldie March 8, 2019 at 9:48 pm #

        thx 4 d reply sir, in order to choose a best-fit degree of the polynomial, how K-Fold CV can be applied, pls explain Sir, thanks in adv 🙂

        • Jason Brownlee March 9, 2019 at 6:27 am #

          I recommend a grid search over different model configurations, this is unrelated to k-fold cross validation, although CV could be used for each configuration tested.

  25. Rahil March 22, 2019 at 7:01 am #

    Hi Jason, many thanks for the tutorial. It clarified many things for me, however, I am a newbie in this field. My question is how many times we can do a CV for a model?
    For example, is it reasonable to repeat 10-fold CV 100 times for our model?
    I really appreciate any hint that can help me out.
    Thanks!

    • Jason Brownlee March 22, 2019 at 8:44 am #

      We repeat the CV process to account for the variance of the model itself, e.g. due to a stochastic learning algorithm like SGD.

      Often a few repeats is sufficient, e.g 10, no more than 30.

      • Rahil March 22, 2019 at 7:23 pm #

        Many Thanks for the reply Jason.
        I am still confused.
        When we are using 10-fold CV, it means that we partition our data randomly into 10 equal subsamples and then we keep one subsample for test and use the others (9 subsamples) for train.
        So in this case we can only get different results 10 times, because there are just 10 different options to be kept for test while the others are used for train.
        I mean after 10 times the way of arranging the data for train and test will be the same as one of the previous states, right?! So, what is the advantage of repeating the process more than 10 times?
        Please help me out of this confusion. Thanks!

        • Jason Brownlee March 23, 2019 at 9:19 am #

          Some algorithms will produce different results on the same dataset due to the stochastic nature of the learning algorithm. Stochastic gradient descent is an example.

          This will introduce additional variance in the estimate of model performance that can be countered by repeating the evaluation more times.

          • Rahil March 23, 2019 at 6:17 pm #

            Many thanks Jason!!

  26. Federico March 26, 2019 at 10:51 pm #

    Hi Jason,
    A quick question, if you decide to gather performance metrics from instances not used to train
    the model recurring to an evaluation scheme based on training-testing splits. Which
    fold-based evaluation scheme is more adequate? Why?

    • Jason Brownlee March 27, 2019 at 9:00 am #

      If you are unsure what to use, the default should be 10 fold cross validation.

      • Federico March 28, 2019 at 2:52 am #

        Why is that?

        • Jason Brownlee March 28, 2019 at 8:20 am #

          It has proven effective as a default in terms of a balance between bias and variance of the estimated model performance.

          This was established decades ago too, and has stood the test of time well.

  27. itisha March 28, 2019 at 6:36 pm #

    Hello sir,
    I want to get the result of 10-fold cross validation on my training data in terms of accuracy score.
    I performed grid search to find the hyperparameters of the classifier and used a cv value of 10 in the grid search function. I got the optimised parameter values and also the best score in terms of accuracy through the grid search results.
    a) is that accuracy (obtained by grid search) can be considered as the result of 10 fold cross validation?
    b) if not, then should i use cross_val_score( ) to get the mean accuracy of 10 fold?
    c) Also, while passing classifier in cross_val_score ( ) should i use optimised parameters of classifiers?

    • Jason Brownlee March 29, 2019 at 8:28 am #

      You can report the score from CV if you want.

      I would prefer a standalone final evaluation on a hold out dataset or CV to confirm the finding.

      Yes, you should configure the final classifier with the best found parameters.

  28. Itisha March 29, 2019 at 10:25 am #

    Ok thanks sir

  29. Itisha March 29, 2019 at 10:35 am #

    I have a query which is not related to what I asked above.

    Let’s say classifier 1 is the final classifier with optimized hyperparameters that I’m going to test on dataset A. Classifier 1 is trained on feature vectors of size 20.

    Now I want to test on A again but this time with reduced features just to check impact of different features.

    In this way I want to present the results on test set A with the classifier trained on the full feature set of 20 and the same classifier trained on the reduced feature set.

    So should I use the same optimized hyperparameters with the classifier to be trained on reduced feature set?

    • Jason Brownlee March 29, 2019 at 2:02 pm #

      Good question.

      I recommend varying one thing in a comparison, e.g. just the features and use the same data and model.

      Alternately, you can vary one thing, the features, then use the same “process” of tuning each model for each subset of features.

      Both are reasonable.

  30. Itisha March 29, 2019 at 5:28 pm #

    Ok so if I go with the first option… that means the test data should be the same and the classifier used for testing with original and reduced features should be the same, with the same optimized hyperparameters?

    I have only one confusion:
    Let’s say the classifier is SVM with C=10 (obtained by grid search on the train data).
    Now I train the SVM with C=10 on the entire training set with feature vectors of size 20 and then evaluate it on test set T.

    Now what I want is to evaluate the same SVM on the same set T but on features of size 15.

    So this time should I use C=10 again with the SVM, or should I again perform grid search to get a new C value?

    • Jason Brownlee March 30, 2019 at 6:24 am #

      It is your choice, as long as you are consistent in methodology between the two things being compared.

  31. Maria March 31, 2019 at 8:21 am #

    For an imbalanced dataset with 0.7 positive class and 0.3 negative class. How do you do a cross-validation while preserving 50% positive and 50% negative samples in the train and test sets?

  32. AVIJIT PRASAD DAS April 4, 2019 at 8:05 am #

    really, its quite worthy

  33. syed April 16, 2019 at 4:37 pm #

    Nice Tutorial!!! Enjoyed It !!
    can you provide me the Matlab code for K-Fold Cross validation
    Thank You

  34. rolf May 27, 2019 at 11:30 pm #

    I don’t really understand what you mean by

    > Train/Test Split: Taken to one extreme, k may be set to 1 such that a single train/test split is created to evaluate the model.

    … if k=1, then you are not dividing your data into parts: There is only one part.

    Could you explain what you mean? Note also, that sklearn.model_selection.kfold does not accept k=1 as an input

    • Jason Brownlee May 28, 2019 at 8:15 am #

      You are right, k=2 is the smallest we can do.

      I have updated the post, thanks!

  35. Sara June 11, 2019 at 6:50 am #

    Does ‘scikit-learn train_test_split’ consider the values of features and targets when shuffling and splitting the dataset?

    Thank you

  36. toy July 4, 2019 at 12:59 pm #

    Thank you Jason 🙂 I’m BIG fan of yours. Best!

  37. RAVI July 6, 2019 at 12:59 am #

    Jason sir, this K-fold CV tutorial is very helpful to me. Thank you so much !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

  38. Quentin July 11, 2019 at 7:13 pm #

    Hi, thanks for this introduction,

    I’m working on a very small dataset (31 samples) with 107 features. I have to apply feature selection. For that I use XGBOOST and RFECV and other techniques.

    I have one question :

    Do I have to first split my dataset into 80% train and 20% test, apply k-fold cross validation on the train part, and verify with the 20% remaining? Or do k-fold cross-validation without any split before?

    • Jason Brownlee July 12, 2019 at 8:33 am #

      It might be a good idea to perform feature selection within each fold of the k-fold cross validation – e.g. test the procedure for selecting features rather than a specific set of features.

      • Quentin July 16, 2019 at 4:49 pm #

        Thanks, but if I want to show that a specific set of features remains the best. How can I do that ?
        I have to repeat n times a k-fold cv with a technique of selection and a different random seed. Then I compare all the arrays of features selected in the n loop with the score ( accuracy or F1)
        And so on for the other techniques ?

        • Jason Brownlee July 17, 2019 at 8:16 am #

          Sounds like a reasonable approach.

          Remember, we cannot know what is best, only gather evidence for what is good relative to other methods we test.

  39. Shivani July 18, 2019 at 6:52 pm #

    I have been working on 10-fold cross validation. In the predicted labels (Logistic Regression classifier), I am getting values like this:
    0.32460216486734716
    -1.6753312636704334
    1.811621906115853
    0.19109397406265038
    -2.11867198332618
    -1.4679812760800461
    0.02600304205260273
    -2.0000670438930332
    I don’t know how to tackle the negative and non-binary values. Please help.

  40. R.Aser August 5, 2019 at 6:29 pm #

    Hello,
    I Have two questions:
    1. I have a dataset; I used k=5 and 10 but sometimes I found there was a large difference in the R2, MAE and RMSE (i.e. for K=10, R=0.8 – MAE=3.5 – RMSE=6.5; for K=5, R=0.62 – MAE=4.8 – RMSE=9.4). What is the reason for that difference? In other words, how do I select the correct K which provides me reliable results?
    I know that there might be a difference in using K=5 and 10, but not a large one.

    2. If the dataset contains 8 independent variables, four of them binary variables (0/1), for a regression problem, how can I use cross validation to ensure that each fold contains 0 and 1 for each binary variable? Because if this does not happen, RStudio gives me a warning that there are misleading results.

    Thanks in advance,
    R.Aser

    • Ramy August 6, 2019 at 9:02 am #

      Hello Jason,
      Do you need me to describe more to understand my point

      • Jason Brownlee August 6, 2019 at 2:04 pm #

        Good questions.

        Choosing a good K is hard. If in doubt, use 10. If you have the time, perhaps evaluate descriptive statistics of the data with different size K and find a point at which statistical significance tests report a difference in distribution – it is crude but might be a useful start.

        Perhaps you can use stratified cross validation that focuses not only on the target, but on input variables as well?

        I hope that helps.

  41. Ponraj August 6, 2019 at 5:48 am #

    Hello Jason,

    I split this post as BACK GROUND & QUESTION Section.

    BACK GROUND :
    I am performing Binary Classification task using LSTM’s. (either 0 or 1)
    Data_size (205, 100, 4) [Out of 205 samples 110 belongs to class 0 & 95 belongs to Class1]

    train_test_split : (train : 85 % & test : 15 % , random_seed = 7)
    Fixed train data shape = (174,100,6)
    Fixed test Data = (31,100,6)

    Step 1: – MODEL TRAINING
    I train the model (No random_seed weight initialization (like no numpy seed or tf seed))
    1.1) Model Structure picture link : https://imgur.com/2IljyvE
    1.2) Plot the Acc & Loss graph (both train & Validate)
    – Picture Link : https://imgur.com/IduKcUp
    – No Overfitting
    1.3) Prediction result : using trained model : 3 out of 31 testing data were wrong.
    (91 % correct prediction)

    Step 2 : – MULTIPLE TIMES RUN
    Used For loop
    and trained the model 5 times to see behavior of the model based on your post (https://machinelearningmastery.com/diagnose-overfitting-underfitting-lstm-models/)

    2.1) for i in range(5) : # run 5 times with same model structure
    – Plot the Acc & Loss graph (Picture Link :https://imgur.com/WNH6m9F)
    – RESULT : It follows a pattern (found behavior of the model)

    Step 3: – K FOLD CROSS VALIDATION (CV)
    Performed K fold CV (Fold – 7 ) (random seed = 7) (merged train + test data = original data (205,100,6))
    3.1) Picture link : https://imgur.com/cZfR1wJ
    3.2) Some folds results in Over fitting
    3.3) Every fold the acc value calculated and mean acc value is 79.46 % (+/- 5.60 %)
    (I followed your post : https://machinelearningmastery.com/evaluate-performance-machine-learning-algorithms-python-using-resampling/)

    QUESTIONS ONLY ABOUT CROSS VALIDATION :
    1. On cross validation results, more number of Over fitted model/graphs found,

    a) What can I understand from the CV results? Improper hyperparameters?
    b) Std. deviation of +/- 6% is huge or it is normal ?
    c)How can I relate my trained model result (Step:1) with CV results (Step: 3) ? I understand how it works but can I use initial trained model as a final model since my prediction is 90 % correct ?
    d) I reduced LSTM units size and performed K fold CV again.
    Picture link : https://imgur.com/UsU3zso (Less Overfit models)
    Mean Acc & Std : 79% +/- 3.91
    Based on Std dev, whether i should fix with this hyper parameter in model ?
    e) My friend suggested me to go for LOOCV, but will that make any difference ?

    • Jason Brownlee August 6, 2019 at 6:45 am #

      Way too much going on there, sorry, I cannot follow or invest the time to figure it out.

      Are you able to boil your problem down to one brief question?

      • Ponraj August 6, 2019 at 7:36 pm #

        I trained my LSTM binary classification model and gets prediction accuracy of 90 %.
        No over fitting occurs. (https://imgur.com/IduKcUp)

        But when I do K-fold CV (K = 7), I find overfitting models in those 7 folds.
        What can I understand from overfitting in CV models? (https://imgur.com/cZfR1wJ)

        On CV results, i get the mean accuracy of 79.5 % & Std. deviation of +/- 6%.
        Is there any threshold, i.e. if my mean acc value is greater than some %, it is considered a good performing model where the hyperparameters chosen are the best?

        I reduced LSTM units size and performed K fold CV again.
        Results : Mean Acc & Std : 79% +/- 3.91
        (https://imgur.com/UsU3zso – Less Overfit models)
        Since my std dev is low compared to the previous model, should I fix on this hyperparameter in the model?

        My friend suggested me to go for LOOCV, but will that make any difference instead of K-fold CV?

        • Jason Brownlee August 7, 2019 at 7:49 am #

          In practice, k-fold cross validation is a bad idea for sequence data/LSTMs, instead, you must use walk-forward validation:
          https://machinelearningmastery.com/backtest-machine-learning-models-time-series-forecasting/

          Perhaps the datasets used in k-fold cross validation are smaller and less representative and in turn result in overfitting?

          Model performance is always relative:
          https://machinelearningmastery.com/faq/single-faq/how-to-know-if-a-model-has-good-performance

          LOOCV sounds like a good idea if you have the resources.

          • Ponraj August 8, 2019 at 9:05 pm #

            thanks for your reply.
            I understood your post related to walk-forward validation. But I am confused whether it can be applied to my dataset (since I am performing classification).

            Overview about my Dataset : X.shape= (205,100,4) and Y.shape = (205,)

            In X, each sample/sequence is of shape (100, 4), where each of the 100 rows corresponds to 100 milliseconds (10 sec for 1 sample).
            Out of 210 samples, 110 samples belongs to class 0 & 95 Samples belongs to class 1.

            Model Structure : https://imgur.com/2IljyvE
            Model : https://imgur.com/tdfxf3l
            Note : Used TimeDistributed Wrapper around Dense layer so that my model gets trained for each 100 ms corresponds to respective class for every sample/sequence.

            My aim is to predict early the Class, If i input, test data of shape (10,60,4) –
            (10 samples, 60 (6 seconds), 4 features) whether it belongs to class 0 or 1.

            In that case, how can I approach Walk forward validation

          • Jason Brownlee August 9, 2019 at 8:12 am #

            Yes, this would be a time series classification task which can be evaluated with walk forward validation.

            I give examples of time series classification here that you can use as a starting point:
            https://machinelearningmastery.com/start-here/#deep_learning_time_series

  42. Marshal August 9, 2019 at 3:39 am #

    Good day Jason,

    Thank you for all of your tutorials, they are very clear and helpful.

    Which method for calculating R2 for the evaluation of the test set is appropriate?

    I ask because it seems that the caret package in R defaults to R2 = cor(obs, pred)^2, but I thought 1 – sum((obs – pred)^2) / sum((obs – mean)^2) was most appropriate. Both methods give the same result on the full data set, but I am getting different results when I use them on the test sets (higher R2 for cor()^2).

    I’m using the caret package to cross validate a predictive linear model that I have built. I’m using train function with trainControl method = repeatedcv and the summary default of RMSE and Rsquared. I get high R2 when I cross validate using caret, but a lower value when I manually create folds and test them.

    Any insight or direction would be greatly appreciate.

    Thank you

  43. SHAIKH MOHD FARAZ August 11, 2019 at 5:01 pm #

    Hii Jason

    Very nice and clear tutorial on K-fold validation.

    I have one doubt. Let’s say we are implementing a K-fold cv on K’-NN algorithm.
    Since we will be using the cv dataset to determine the best value of K’ and then use test dataset to determine the accuracy of the model, How do you think we should split our dataset? Can you please explain with an example.

Leave a Reply