How To Create an Algorithm Test Harness From Scratch With Python

We cannot know which algorithm will be best for a given problem.

Therefore, we need to design a test harness that we can use to evaluate different machine learning algorithms.

In this tutorial, you will discover how to develop a machine learning algorithm test harness from scratch in Python.

After completing this tutorial, you will know:

  • How to implement a train-test algorithm test harness.
  • How to implement a k-fold cross-validation algorithm test harness.

Kick-start your project with my new book Machine Learning Algorithms From Scratch, including step-by-step tutorials and the Python source code files for all examples.

Let’s get started.

  • Update Jan/2017: Changed the calculation of fold_size in cross_validation_split() to always be an integer. Fixes issues with Python 3.
  • Update Mar/2018: Added alternate link to download the dataset as the original appears to have been taken down.
  • Update Aug/2018: Tested and updated to work with Python 3.6.
How To Create an Algorithm Test Harness From Scratch With Python
Photo by Chris Meller, some rights reserved.

Description

A test harness provides a consistent way to evaluate machine learning algorithms on a dataset.

It involves 3 elements:

  1. The resampling method used to split up the dataset.
  2. The machine learning algorithm to evaluate.
  3. The performance measure by which to evaluate predictions.

The loading and preparation of a dataset is a prerequisite step that must have been completed prior to using the test harness.

The test harness must allow for different machine learning algorithms to be evaluated, whilst the dataset, resampling method and performance measures are kept constant.

In this tutorial, we are going to demonstrate the test harnesses with a real dataset.

The dataset used is the Pima Indians diabetes dataset. It contains 768 rows and 9 columns. All of the values in the file are numeric, specifically floating point values.

The Zero Rule algorithm will be evaluated as part of the tutorial. The Zero Rule algorithm always predicts the class that has the most observations in the training dataset.
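As a minimal sketch, this baseline might be implemented as follows (the function name zero_rule_algorithm_classification matches how the algorithm is referenced later in this tutorial):

# Zero Rule algorithm for classification: always predict the class
# value that is most common in the training dataset
def zero_rule_algorithm_classification(train, test):
    output_values = [row[-1] for row in train]
    prediction = max(set(output_values), key=output_values.count)
    return [prediction for _ in range(len(test))]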

Tutorial

This tutorial is broken down into two main sections:

  1. Train-Test Algorithm Test Harness.
  2. Cross-Validation Algorithm Test Harness.

These test harnesses will give you the foundation that you need to evaluate a suite of machine learning algorithms on a given predictive modeling problem.

1. Train-Test Algorithm Test Harness

The train-test split is a simple resampling method that can be used to evaluate a machine learning algorithm.

As such, it is a good starting point for developing a test harness.

We can assume the prior development of a function to split a dataset into train and test sets and a function to evaluate the accuracy of a set of predictions.
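For reference, minimal sketches of these two helpers (developed in earlier tutorials in this series) might look like:

from random import randrange

# Split a dataset into a train set and a test set by a given fraction
def train_test_split(dataset, split=0.60):
    train = list()
    train_size = split * len(dataset)
    dataset_copy = list(dataset)
    while len(train) < train_size:
        index = randrange(len(dataset_copy))
        train.append(dataset_copy.pop(index))
    return train, dataset_copy

# Calculate classification accuracy as a percentage
def accuracy_metric(actual, predicted):
    correct = 0
    for i in range(len(actual)):
        if actual[i] == predicted[i]:
            correct += 1
    return correct / float(len(actual)) * 100.0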

We need a function that can take a dataset and an algorithm and return a performance score.

Below is a function named evaluate_algorithm() that achieves this. It takes three fixed arguments: the dataset, the algorithm function, and the split percentage for the train-test split.
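A sketch of how this might look, assuming the train_test_split() and accuracy_metric() helpers shown above:

# Evaluate an algorithm using a train/test split
def evaluate_algorithm(dataset, algorithm, split, *args):
    train, test = train_test_split(dataset, split)
    test_set = list()
    for row in test:
        row_copy = list(row)
        row_copy[-1] = None  # clear the output value to prevent cheating
        test_set.append(row_copy)
    predicted = algorithm(train, test_set, *args)
    actual = [row[-1] for row in test]
    accuracy = accuracy_metric(actual, predicted)
    return accuracy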

First, the dataset is split into train and test elements. Next, a copy of the test set is made and each output value is cleared by setting it to the None value, to prevent the algorithm from cheating accidentally.

The algorithm provided as a parameter is a function that expects the train and test datasets on which to prepare and then make predictions. The algorithm may require additional configuration parameters. This is handled by using the variable arguments *args in the evaluate_algorithm() function and passing them on to the algorithm function.

The algorithm function is expected to return a list of predictions, one for each row in the test dataset. These are compared to the actual output values from the unmodified test dataset by the accuracy_metric() function.

Finally, the accuracy is returned.

The evaluation function does make some strong assumptions, but they can easily be changed if needed.

Specifically, it assumes that the last column in the dataset always contains the output value. A different column could be used. The use of accuracy_metric() assumes that the problem is a classification problem, but this could be changed to mean squared error for regression problems.

Let’s piece this together with a worked example.

We will use the Pima Indians diabetes dataset and evaluate the Zero Rule algorithm.

The dataset was split into 60% for training the model and 40% for evaluating it.
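Pulling the pieces together, a complete, runnable sketch might look like the following (the filename pima-indians-diabetes.csv assumes a local copy of the dataset):

# Example of evaluating the Zero Rule algorithm with a train/test harness
from random import seed
from random import randrange
from csv import reader

# Load a CSV file into a list of rows
def load_csv(filename):
    dataset = list()
    with open(filename, 'r') as file:
        csv_reader = reader(file)
        for row in csv_reader:
            if not row:
                continue
            dataset.append(row)
    return dataset

# Convert a string column to floating point values
def str_column_to_float(dataset, column):
    for row in dataset:
        row[column] = float(row[column].strip())

# Split a dataset into a train set and a test set by a given fraction
def train_test_split(dataset, split=0.60):
    train = list()
    train_size = split * len(dataset)
    dataset_copy = list(dataset)
    while len(train) < train_size:
        index = randrange(len(dataset_copy))
        train.append(dataset_copy.pop(index))
    return train, dataset_copy

# Calculate classification accuracy as a percentage
def accuracy_metric(actual, predicted):
    correct = 0
    for i in range(len(actual)):
        if actual[i] == predicted[i]:
            correct += 1
    return correct / float(len(actual)) * 100.0

# Evaluate an algorithm using a train/test split
def evaluate_algorithm(dataset, algorithm, split, *args):
    train, test = train_test_split(dataset, split)
    test_set = list()
    for row in test:
        row_copy = list(row)
        row_copy[-1] = None  # clear the output value to prevent cheating
        test_set.append(row_copy)
    predicted = algorithm(train, test_set, *args)
    actual = [row[-1] for row in test]
    return accuracy_metric(actual, predicted)

# Zero Rule algorithm for classification
def zero_rule_algorithm_classification(train, test):
    output_values = [row[-1] for row in train]
    prediction = max(set(output_values), key=output_values.count)
    return [prediction for _ in range(len(test))]

# Evaluate the Zero Rule algorithm on the diabetes dataset
seed(1)
filename = 'pima-indians-diabetes.csv'  # assumed local copy of the dataset
dataset = load_csv(filename)
for i in range(len(dataset[0])):
    str_column_to_float(dataset, i)
split = 0.6
accuracy = evaluate_algorithm(dataset, zero_rule_algorithm_classification, split)
print('Accuracy: %.3f%%' % accuracy)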

Notice how the name of the Zero Rule algorithm, zero_rule_algorithm_classification, was passed as an argument to the evaluate_algorithm() function. You can see how this test harness may be used again and again with different algorithms.

Running the example above prints out the accuracy of the model.

2. Cross-Validation Algorithm Test Harness

Cross-validation is a resampling technique that provides more reliable estimates of algorithm performance on unseen data.

It requires the creation and evaluation of k models on different subsets of your data, and as such is more computationally expensive. Nevertheless, it is the gold standard for evaluating machine learning algorithms.

As in the previous section, we need to create a function that ties together the resampling method, the evaluation of the algorithm on the dataset and the performance calculation method.

Unlike above, the algorithm must be evaluated on different subsets of the dataset many times. This means we need additional loops within our evaluate_algorithm() function.

Below is a function that implements algorithm evaluation with cross-validation.
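A sketch of how this might look, reusing the accuracy_metric() helper from earlier:

from random import randrange

# Split a dataset into k folds
def cross_validation_split(dataset, n_folds):
    dataset_split = list()
    dataset_copy = list(dataset)
    fold_size = int(len(dataset) / n_folds)  # integer fold size works under Python 3
    for _ in range(n_folds):
        fold = list()
        while len(fold) < fold_size:
            index = randrange(len(dataset_copy))
            fold.append(dataset_copy.pop(index))
        dataset_split.append(fold)
    return dataset_split

# Evaluate an algorithm using k-fold cross-validation
def evaluate_algorithm(dataset, algorithm, n_folds, *args):
    folds = cross_validation_split(dataset, n_folds)
    scores = list()
    for fold in folds:
        train_set = list(folds)
        train_set.remove(fold)
        train_set = sum(train_set, [])  # flatten the folds into one list of rows
        test_set = list()
        for row in fold:
            row_copy = list(row)
            row_copy[-1] = None  # clear the output value to prevent cheating
            test_set.append(row_copy)
        predicted = algorithm(train_set, test_set, *args)
        actual = [row[-1] for row in fold]
        scores.append(accuracy_metric(actual, predicted))
    return scores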

First, the dataset is split into n_folds groups called folds.

Next, we loop, giving each fold an opportunity to be held out of training and used to evaluate the algorithm. A copy of the list of folds is created and the held-out fold is removed from this list. Then the list of folds is flattened into one long list of rows to match the algorithm's expectation of a training dataset. This is done using the sum() function.

Once the training dataset is prepared, the rest of the function within this loop proceeds as above. A copy of the test dataset (the fold) is made and the output values are cleared to avoid accidental cheating by algorithms. The algorithm is prepared on the train dataset and makes predictions on the test dataset. The predictions are evaluated and stored in a list.

Unlike the train-test algorithm test harness, a list of scores is returned, one for each cross-validation fold.

Although slightly more complex in code and slower to run, this function provides a more robust estimate of algorithm performance.

We can tie all of this together with a complete example on the diabetes dataset with the Zero Rule algorithm.
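A complete driver for this example might look like the following sketch; it reuses load_csv(), str_column_to_float(), accuracy_metric() and zero_rule_algorithm_classification() from the train-test example above, plus the cross-validation functions just shown (again, pima-indians-diabetes.csv is assumed to be a local copy of the dataset):

from random import seed

# Evaluate the Zero Rule algorithm with 5-fold cross-validation
seed(1)
filename = 'pima-indians-diabetes.csv'  # assumed local copy of the dataset
dataset = load_csv(filename)
for i in range(len(dataset[0])):
    str_column_to_float(dataset, i)
n_folds = 5
scores = evaluate_algorithm(dataset, zero_rule_algorithm_classification, n_folds)
print('Scores: %s' % scores)
print('Mean Accuracy: %.3f%%' % (sum(scores) / float(len(scores))))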

A total of 5 cross-validation folds were used to evaluate the Zero Rule algorithm. As such, 5 scores were returned from the evaluate_algorithm() function.

Running this example prints both the list of calculated scores and the mean of the scores.

You now have two different test harnesses that you can use to evaluate your own machine learning algorithms.

Extensions

This section lists extensions to this tutorial that you may wish to consider.

  • Parameterized Evaluation. Pass in the function used to evaluate predictions, allowing you to seamlessly work with regression problems.
  • Parameterized Resampling. Pass in the function used to calculate resampling splits, allowing you to easily switch between the train-test and cross-validation methods. A combined sketch of these first two extensions is given after this list.
  • Standard Deviation Scores. Calculate the standard deviation to get an idea of the spread of scores when evaluating algorithms using cross-validation.
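As a hypothetical sketch of the first two extensions, the harness below accepts both the resampling function and the metric function as parameters (the name evaluate_algorithm_generic and the resample/metric arguments are illustrative, not part of the tutorial above):

# A fully parameterized harness: resample() is expected to return a list
# of (train, test) pairs, and metric() compares actual and predicted values
def evaluate_algorithm_generic(dataset, algorithm, resample, metric, *args):
    scores = list()
    for train, test in resample(dataset):
        test_set = list()
        for row in test:
            row_copy = list(row)
            row_copy[-1] = None  # clear the output value to prevent cheating
            test_set.append(row_copy)
        predicted = algorithm(train, test_set, *args)
        actual = [row[-1] for row in test]
        scores.append(metric(actual, predicted))
    return scores

With this design, a train-test resampler is simply a function that returns a single-element list, such as lambda d: [train_test_split(d, 0.6)], while a cross-validation resampler would pair each fold with the remaining folds flattened into a training set.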

Did you try any of these extensions?
Share your experiences in the comments below.

Review

In this tutorial, you discovered how to create a test harness from scratch to evaluate your machine learning algorithms.

Specifically, you now know:

  • How to implement and use a train-test algorithm test harness.
  • How to implement and use a cross-validation algorithm test harness.

Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.

Discover How to Code Algorithms From Scratch!

Machine Learning Algorithms From Scratch

No Libraries, Just Python Code.

...with step-by-step tutorials on real-world datasets

Discover how in my new Ebook:
Machine Learning Algorithms From Scratch

It covers 18 tutorials with all the code for 12 top algorithms, like:
Linear Regression, k-Nearest Neighbors, Stochastic Gradient Descent and much more...

Finally, Pull Back the Curtain on
Machine Learning Algorithms

Skip the Academics. Just Results.


8 Responses to How To Create an Algorithm Test Harness From Scratch With Python

  1. Timothy Oriedo October 24, 2016 at 5:39 am #

Thank you for this, it will definitely get my start in machine learning.

  2. Thineswaran October 24, 2016 at 4:18 pm #

    What’s the difference between this & using the built-in cross_val_score in Python sklearn?

    • Jason Brownlee October 25, 2016 at 8:22 am #

      Great question.

      Use sklearn in practice.

      If you want to learn how all these methods work from first principles, try implementing them yourself.

  3. Anand October 30, 2016 at 6:13 am #

How about using the Weka tool as the test harness? As you said above, is it the same as using the built-in function in sklearn, which stops at practice and does not teach the principles?

  4. jms.plmr@gmail.com February 13, 2018 at 10:58 pm #

In my implementation I get the following:
    line 163, in k_cross_validate
    train_set.remove(fold)
    ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()

    • Jason Brownlee February 14, 2018 at 8:20 am #

      Sorry, I’ve not seen this error before. Are you using Python 2?
