Train-Test Split for Evaluating Machine Learning Algorithms

The train-test split procedure is used to estimate the performance of machine learning algorithms when they are used to make predictions on data not used to train the model.

It is a fast and easy procedure to perform, and its results allow you to compare the performance of machine learning algorithms on your predictive modeling problem. Although simple to use and interpret, there are times when the procedure should not be used, such as when you have a small dataset, and situations where additional configuration is required, such as when it is used for classification and the dataset is not balanced.

In this tutorial, you will discover how to evaluate machine learning models using the train-test split.

After completing this tutorial, you will know:

  • The train-test split procedure is appropriate when you have a very large dataset, a costly model to train, or require a good estimate of model performance quickly.
  • How to use the scikit-learn machine learning library to perform the train-test split procedure.
  • How to evaluate machine learning algorithms for classification and regression using the train-test split.

Kick-start your project with my new book Machine Learning Mastery With Python, including step-by-step tutorials and the Python source code files for all examples.

Let’s get started.

Train-Test Split for Evaluating Machine Learning Algorithms
Photo by Paul VanDerWerf, some rights reserved.

Tutorial Overview

This tutorial is divided into three parts; they are:

  1. Train-Test Split Evaluation
    1. When to Use the Train-Test Split
    2. How to Configure the Train-Test Split
  2. Train-Test Split Procedure in Scikit-Learn
    1. Repeatable Train-Test Splits
    2. Stratified Train-Test Splits
  3. Train-Test Split to Evaluate Machine Learning Models
    1. Train-Test Split for Classification
    2. Train-Test Split for Regression

Train-Test Split Evaluation

The train-test split is a technique for evaluating the performance of a machine learning algorithm.

It can be used for classification or regression problems and can be used for any supervised learning algorithm.

The procedure involves taking a dataset and dividing it into two subsets. The first subset is used to fit the model and is referred to as the training dataset. The second subset is not used to train the model; instead, the input element of the dataset is provided to the model, then predictions are made and compared to the expected values. This second dataset is referred to as the test dataset.

  • Train Dataset: Used to fit the machine learning model.
  • Test Dataset: Used to evaluate the fit machine learning model.

The objective is to estimate the performance of the machine learning model on new data: data not used to train the model.

This is how we expect to use the model in practice. Namely, to fit it on available data with known inputs and outputs, then make predictions on new examples in the future where we do not have the expected output or target values.

The train-test procedure is appropriate when there is a sufficiently large dataset available.

When to Use the Train-Test Split

The idea of “sufficiently large” is specific to each predictive modeling problem. It means that there is enough data to split the dataset into train and test datasets and each of the train and test datasets are suitable representations of the problem domain. This requires that the original dataset is also a suitable representation of the problem domain.

A suitable representation of the problem domain means that there are enough records to cover all common cases and most uncommon cases in the domain. This might mean combinations of input variables observed in practice. It might require thousands, hundreds of thousands, or millions of examples.

Conversely, the train-test procedure is not appropriate when the dataset available is small. The reason is that when the dataset is split into train and test sets, there will not be enough data in the training dataset for the model to learn an effective mapping of inputs to outputs. There will also not be enough data in the test set to effectively evaluate the model performance. The estimated performance could be overly optimistic (good) or overly pessimistic (bad).

If you have insufficient data, then a suitable alternate model evaluation procedure would be the k-fold cross-validation procedure.

In addition to dataset size, another reason to use the train-test split evaluation procedure is computational efficiency.

Some models are very costly to train, and in that case, repeated evaluation used in other procedures is intractable. An example might be deep neural network models. In this case, the train-test procedure is commonly used.

Alternately, a project may have an efficient model and a vast dataset, but may require an estimate of model performance quickly. Again, the train-test split procedure is used in this situation.

Samples from the original dataset are assigned to the two subsets using random selection. This is to ensure that the train and test datasets are representative of the original dataset.

How to Configure the Train-Test Split

The procedure has one main configuration parameter, which is the size of the train and test sets. This is most commonly expressed as a proportion between 0 and 1 for either the train or test datasets. For example, a training set with a size of 0.67 (67 percent) means that the remaining 0.33 (33 percent) is assigned to the test set.

There is no optimal split percentage.

You must choose a split percentage that meets your project’s objectives with considerations that include:

  • Computational cost in training the model.
  • Computational cost in evaluating the model.
  • Training set representativeness.
  • Test set representativeness.

Nevertheless, common split percentages include:

  • Train: 80%, Test: 20%
  • Train: 67%, Test: 33%
  • Train: 50%, Test: 50%

Now that we are familiar with the train-test split model evaluation procedure, let’s look at how we can use this procedure in Python.

Train-Test Split Procedure in Scikit-Learn

The scikit-learn Python machine learning library provides an implementation of the train-test split evaluation procedure via the train_test_split() function.

The function takes a loaded dataset as input and returns the dataset split into two subsets.

Ideally, you can split your original dataset into input (X) and output (y) columns, then call the function passing both arrays and have them split appropriately into train and test subsets.

The size of the split can be specified via the “test_size” argument, which takes either a number of rows (integer) or a proportion of the dataset (float) between 0 and 1.

The latter is the most common, with values such as 0.33, where 33 percent of the dataset will be allocated to the test set and 67 percent will be allocated to the training set.

We can demonstrate this using a synthetic classification dataset with 1,000 examples.

The complete example is listed below.
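
A minimal sketch of this example, assuming the synthetic dataset is created with scikit-learn’s make_classification() function, might look like the following:

```python
# split a synthetic classification dataset into train and test sets
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# create a synthetic classification dataset with 1,000 examples
X, y = make_classification(n_samples=1000)
# split into train (67 percent) and test (33 percent) sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33)
# summarize the size of each subset
print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)
```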

Running the example splits the dataset into train and test sets, then prints the size of each new dataset.

We can see that 670 examples (67 percent) were allocated to the training set and 330 examples (33 percent) were allocated to the test set, as we specified.

Alternatively, the dataset can be split by specifying the “train_size” argument that can be either a number of rows (integer) or a percentage of the original dataset between 0 and 1, such as 0.67 for 67 percent.
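
For example, a small sketch requesting a 67 percent training set (again assuming a synthetic dataset) might be:

```python
# equivalent split specified via the train_size argument
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000)
# request 67 percent of the rows for the training set
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.67)
print(X_train.shape, X_test.shape)
```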

Repeatable Train-Test Splits

Another important consideration is that rows are assigned to the train and test sets randomly.

This is done to ensure that datasets are a representative sample (e.g. random sample) of the original dataset, which in turn, should be a representative sample of observations from the problem domain.

When comparing machine learning algorithms, it is desirable (perhaps required) that they are fit and evaluated on the same subsets of the dataset.

This can be achieved by fixing the seed for the pseudo-random number generator used when splitting the dataset.

In scikit-learn, this is done by setting the “random_state” argument to an integer value. Any value will do; it is not a tunable hyperparameter.

The example below demonstrates this and shows that two separate splits of the data produce identical results.
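
A sketch of this check, assuming a small synthetic dataset created with make_blobs(), might look like the following:

```python
# demonstrate that a fixed random_state gives an identical split
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split

# create a small synthetic dataset
X, y = make_blobs(n_samples=100)
# split the dataset with a fixed seed and show the first five training rows
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1)
print(X_train[:5, :])
# split again with the same seed; the rows printed are identical
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1)
print(X_train[:5, :])
```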

Running the example splits the dataset and prints the first five rows of the training dataset.

The dataset is split again and the first five rows of the training dataset are printed showing identical values, confirming that when we fix the seed for the pseudorandom number generator, we get an identical split of the original dataset.

Stratified Train-Test Splits

One final consideration is for classification problems only.

Some classification problems do not have a balanced number of examples for each class label. As such, it is desirable to split the dataset into train and test sets in a way that preserves the same proportions of examples in each class as observed in the original dataset.

This is called a stratified train-test split.

We can achieve this by setting the “stratify” argument to the y component of the original dataset. This will be used by the train_test_split() function to ensure that both the train and test sets have the proportion of examples in each class that is present in the provided “y” array.

We can demonstrate this with an example of a classification dataset with 94 examples in one class and six examples in a second class.

First, we can split the dataset into train and test sets without the “stratify” argument. The complete example is listed below.
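
A sketch of this example, assuming the imbalanced dataset is created with make_classification() via its “weights” argument, might look like the following (the exact class counts in each subset depend on the seed used):

```python
# split an imbalanced dataset without stratification
from collections import Counter
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# create a dataset with 94 examples in one class and 6 in the other
X, y = make_classification(n_samples=100, weights=[0.94], flip_y=0, random_state=1)
print(Counter(y))
# split into 50 percent train and 50 percent test sets without the stratify argument
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.50, random_state=1)
print(Counter(y_train))
print(Counter(y_test))
```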

Running the example first reports the composition of the dataset by class label, showing the expected 94 percent vs. 6 percent.

Then the dataset is split and the composition of the train and test sets is reported. We can see that the train set has 45/5 examples of each class and the test set has 49/1 examples. The composition of the train and test sets differs, and this is not desirable.

Next, we can stratify the train-test split and compare the results.
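
A sketch of the stratified version, assuming the same synthetic dataset, might look like the following:

```python
# split the same imbalanced dataset with stratification
from collections import Counter
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# create a dataset with 94 examples in one class and 6 in the other
X, y = make_classification(n_samples=100, weights=[0.94], flip_y=0, random_state=1)
print(Counter(y))
# split into 50 percent train and 50 percent test sets, preserving class proportions
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.50, random_state=1, stratify=y)
print(Counter(y_train))
print(Counter(y_test))
```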

Given that we have used a 50 percent split for the train and test sets, we would expect both the train and test sets to have 47/3 examples of each class.

Running the example, we can see that in this case, the stratified version of the train-test split has created both the train and test datasets with 47/3 examples of each class, as we expected.

Now that we are familiar with the train_test_split() function, let’s look at how we can use it to evaluate a machine learning model.

Train-Test Split to Evaluate Machine Learning Models

In this section, we will explore using the train-test split procedure to evaluate machine learning models on standard classification and regression predictive modeling datasets.

Train-Test Split for Classification

We will demonstrate how to use the train-test split to evaluate a random forest algorithm on the sonar dataset.

The sonar dataset is a standard machine learning dataset composed of 208 rows of data with 60 numerical input variables and a target variable with two class values, i.e., binary classification.

The dataset involves predicting whether sonar returns indicate a rock or a simulated mine.

No need to download the dataset; we will download it automatically as part of our worked examples.

The example below downloads the dataset and summarizes its shape.
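
A sketch of this example, assuming the copy of the sonar dataset hosted at the URL below, might look like the following:

```python
# load and summarize the sonar dataset
from pandas import read_csv

# location of the dataset (this assumes the copy hosted in the jbrownlee/Datasets repository)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/sonar.csv'
# load the dataset as a DataFrame with no header row
dataframe = read_csv(url, header=None)
# split into input (X) and output (y) elements
data = dataframe.values
X, y = data[:, :-1], data[:, -1]
print(X.shape, y.shape)
```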

Running the example downloads the dataset and splits it into input and output elements. As expected, we can see that there are 208 rows of data with 60 input variables.

We can now evaluate a model using a train-test split.

First, the loaded dataset must be split into input and output components.

Next, we can split the dataset so that 67 percent is used to train the model and 33 percent is used to evaluate it. This split was chosen arbitrarily.

We can then define and fit the model on the training dataset.

Then use the fit model to make predictions and evaluate the predictions using the classification accuracy performance metric.

Tying this together, the complete example is listed below.
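
A sketch of the complete example, again assuming the dataset URL used above, might look like the following (the accuracy reported will vary from run to run):

```python
# evaluate a random forest classifier on the sonar dataset using a train-test split
from pandas import read_csv
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# load the dataset (this assumes the copy hosted in the jbrownlee/Datasets repository)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/sonar.csv'
dataframe = read_csv(url, header=None)
data = dataframe.values
# split into input and output components
X, y = data[:, :-1], data[:, -1]
print(X.shape, y.shape)
# split into train (67 percent) and test (33 percent) sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1)
print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)
# define and fit the model on the training dataset
model = RandomForestClassifier()
model.fit(X_train, y_train)
# make predictions on the test set and evaluate them
yhat = model.predict(X_test)
acc = accuracy_score(y_test, yhat)
print('Accuracy: %.3f' % acc)
```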

Running the example first loads the dataset and confirms the number of rows in the input and output elements.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

The dataset is split into train and test sets and we can see that there are 139 rows for training and 69 rows for the test set.

Finally, the model is evaluated on the test set and the performance of the model when making predictions on new data has an accuracy of about 78.3 percent.

Train-Test Split for Regression

We will demonstrate how to use the train-test split to evaluate a random forest algorithm on the housing dataset.

The housing dataset is a standard machine learning dataset composed of 506 rows of data with 13 numerical input variables and a numerical target variable.

The dataset involves predicting the house price given details of the house’s suburb in the American city of Boston.

No need to download the dataset; we will download it automatically as part of our worked examples.

The example below downloads and loads the dataset as a Pandas DataFrame and summarizes the shape of the dataset.
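
A sketch of this example, assuming the copy of the housing dataset hosted at the URL below, might look like the following:

```python
# load and summarize the housing dataset
from pandas import read_csv

# location of the dataset (this assumes the copy hosted in the jbrownlee/Datasets repository)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/housing.csv'
# load the dataset as a DataFrame with no header row
dataframe = read_csv(url, header=None)
# summarize the shape of the dataset
print(dataframe.shape)
```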

Running the example confirms the 506 rows of data, with 13 input variables and a single numeric target variable (14 columns in total).

We can now evaluate a model using a train-test split.

First, the loaded dataset must be split into input and output components.

Next, we can split the dataset so that 67 percent is used to train the model and 33 percent is used to evaluate it. This split was chosen arbitrarily.

We can then define and fit the model on the training dataset.

Then use the fit model to make predictions and evaluate the predictions using the mean absolute error (MAE) performance metric.

Tying this together, the complete example is listed below.
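
A sketch of the complete example, again assuming the dataset URL used above, might look like the following (the MAE reported will vary from run to run):

```python
# evaluate a random forest regressor on the housing dataset using a train-test split
from pandas import read_csv
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

# load the dataset (this assumes the copy hosted in the jbrownlee/Datasets repository)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/housing.csv'
dataframe = read_csv(url, header=None)
data = dataframe.values
# split into input and output components
X, y = data[:, :-1], data[:, -1]
print(X.shape, y.shape)
# split into train (67 percent) and test (33 percent) sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1)
print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)
# define and fit the model on the training dataset
model = RandomForestRegressor()
model.fit(X_train, y_train)
# make predictions on the test set and evaluate them with MAE
yhat = model.predict(X_test)
mae = mean_absolute_error(y_test, yhat)
print('MAE: %.3f' % mae)
```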

Running the example first loads the dataset and confirms the number of rows in the input and output elements.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

The dataset is split into train and test sets and we can see that there are 339 rows for training and 167 rows for the test set.

Finally, the model is evaluated on the test set and the performance of the model when making predictions on new data is a mean absolute error of about 2.211 (thousands of dollars).

Further Reading

This section provides more resources on the topic if you are looking to go deeper.

Summary

In this tutorial, you discovered how to evaluate machine learning models using the train-test split.

Specifically, you learned:

  • The train-test split procedure is appropriate when you have a very large dataset, a costly model to train, or require a good estimate of model performance quickly.
  • How to use the scikit-learn machine learning library to perform the train-test split procedure.
  • How to evaluate machine learning algorithms for classification and regression using the train-test split.

Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.

