How To Compare Machine Learning Algorithms in Python with scikit-learn

It is important to compare the performance of multiple different machine learning algorithms consistently.

In this post you will discover how you can create a test harness to compare multiple different machine learning algorithms in Python with scikit-learn.

You can use this test harness as a template on your own machine learning problems and add more and different algorithms to compare.

Let’s get started.

Photo by Michael Knight, some rights reserved.

Choose The Best Machine Learning Model

How do you choose the best model for your problem?

When you work on a machine learning project, you often end up with multiple good models to choose from. Each model will have different performance characteristics.

Using resampling methods like cross validation, you can get an estimate for how accurate each model may be on unseen data. You need to be able to use these estimates to choose one or two best models from the suite of models that you have created.
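
For example, scikit-learn's cross_val_score gives such an estimate in a few lines. The sketch below uses a dataset bundled with scikit-learn purely for illustration (it is not the dataset used later in this post):

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

# estimate accuracy on unseen data with 10-fold cross validation
X, Y = load_breast_cancer(return_X_y=True)
kfold = KFold(n_splits=10, shuffle=True, random_state=7)
scores = cross_val_score(LogisticRegression(max_iter=5000), X, Y, cv=kfold, scoring='accuracy')
print('Estimated accuracy: %.3f (%.3f)' % (scores.mean(), scores.std()))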

Compare Machine Learning Models Carefully

When you have a new dataset, it is a good idea to visualize the data using different techniques in order to look at the data from different perspectives.

The same idea applies to model selection. You should use a number of different ways of looking at the estimated accuracy of your machine learning algorithms in order to choose the one or two to finalize.

A way to do this is to use different visualization methods to show the average accuracy, variance and other properties of the distribution of model accuracies.

In the next section you will discover exactly how you can do that in Python with scikit-learn.

Compare Machine Learning Algorithms Consistently

The key to a fair comparison of machine learning algorithms is ensuring that each algorithm is evaluated in the same way on the same data.

You can achieve this by forcing each algorithm to be evaluated on a consistent test harness.

In the example below 6 different algorithms are compared:

  1. Logistic Regression
  2. Linear Discriminant Analysis
  3. K-Nearest Neighbors
  4. Classification and Regression Trees
  5. Naive Bayes
  6. Support Vector Machines

The problem is a standard binary classification dataset from the UCI machine learning repository called the Pima Indians onset of diabetes problem. The problem has two classes and eight numeric input variables of varying scales.

The 10-fold cross validation procedure is used to evaluate each algorithm, importantly configured with the same random seed to ensure that the same splits of the training data are performed and that each algorithm is evaluated in precisely the same way.

Each algorithm is given a short name, useful for summarizing results afterward.
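
A minimal sketch of this test harness is listed below. It uses the current scikit-learn model_selection API; the filename and column names are placeholders for a local copy of the Pima Indians diabetes CSV, shuffle=True is required for KFold to accept a random seed in recent releases, and max_iter is raised on logistic regression to avoid convergence warnings with the default solver.

import pandas as pd
from matplotlib import pyplot as plt
from sklearn.model_selection import KFold, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

# load the dataset: 8 numeric inputs and a binary class label in the last column
# (the filename is a placeholder for a local copy of the Pima Indians diabetes data)
filename = 'pima-indians-diabetes.data.csv'
columns = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
dataframe = pd.read_csv(filename, names=columns)
array = dataframe.values
X = array[:, 0:8]
Y = array[:, 8]

# prepare the models to compare, each paired with a short name
models = []
models.append(('LR', LogisticRegression(max_iter=1000)))
models.append(('LDA', LinearDiscriminantAnalysis()))
models.append(('KNN', KNeighborsClassifier()))
models.append(('CART', DecisionTreeClassifier()))
models.append(('NB', GaussianNB()))
models.append(('SVM', SVC()))

# evaluate each model in turn with the same 10-fold cross validation splits
seed = 7
results = []
names = []
for name, model in models:
    kfold = KFold(n_splits=10, shuffle=True, random_state=seed)
    cv_results = cross_val_score(model, X, Y, cv=kfold, scoring='accuracy')
    results.append(cv_results)
    names.append(name)
    print('%s: %f (%f)' % (name, cv_results.mean(), cv_results.std()))

# box and whisker plot of the per-fold accuracy scores for each algorithm
fig = plt.figure()
fig.suptitle('Algorithm Comparison')
ax = fig.add_subplot(111)
plt.boxplot(results)
ax.set_xticklabels(names)
plt.show()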

Running the example provides a list of each algorithm's short name, its mean accuracy and the standard deviation of its accuracy.

The example also provides a box and whisker plot showing the spread of the accuracy scores across the cross validation folds for each algorithm.

Compare Machine Learning Algorithms (box and whisker plot of accuracy scores)

These results suggest that both logistic regression and linear discriminant analysis are perhaps worthy of further study on this problem.

Summary

In this post you discovered how to evaluate multiple different machine learning algorithms on a dataset in Python with scikit-learn.

You learned how to use the same test harness to evaluate each algorithm and how to summarize the results both numerically and with a box and whisker plot.

You can use this recipe as a template for evaluating multiple algorithms on your own problems.

Do you have any questions about evaluating machine learning algorithms in Python or about this post? Ask your questions in the comments below and I will do my best to answer them.


24 Responses to How To Compare Machine Learning Algorithms in Python with scikit-learn

  1. Sundar June 1, 2016 at 8:32 pm #

    Great article! Just a quick question, what do you think is the best method? First optimise hyper-parameters and then compare the algorithms or vice versa. Thanks.

    • Jason Brownlee June 2, 2016 at 6:09 am #

      I recommend first spot checking algorithms and comparing them, followed by tuning.

      • Sundar June 2, 2016 at 7:04 pm #

        Thank you for your recommendation…

  2. dash June 3, 2016 at 7:37 am #

    Accuracy is easily readable but, in my opinion, it should be replaced by AUC: AUC is “consistent” and “more discriminating” than accuracy (Ling et al. 2003).

  3. Tom Anderson August 7, 2016 at 5:09 pm #

    In the code, “seed = 7” is hard coded. Shouldn’t we have a different seed for each fold?

    • Tom Anderson August 7, 2016 at 10:44 pm #

      To answer my own question, it appears that each model is trained and tested for all folds before moving on to the next model. The seed applies to the initial state so for the above, the 10 folds will all be different from one another, but the same data split for each of the 10 folds will be presented to each algorithm.

    • Jason Brownlee August 8, 2016 at 5:41 am #

      Yes Tom, the seed ensures we have the same sequence of random numbers. The random numbers ensure we have a random split of the data into the k folds.
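
      As a minimal sketch of that point, constructing KFold with the same random_state yields identical splits every time, so each algorithm sees exactly the same folds:

      import numpy as np
      from sklearn.model_selection import KFold

      X = np.arange(20).reshape(10, 2)
      # two KFold objects built with the same seed generate identical test folds
      folds_a = [test for _, test in KFold(n_splits=5, shuffle=True, random_state=7).split(X)]
      folds_b = [test for _, test in KFold(n_splits=5, shuffle=True, random_state=7).split(X)]
      print(all((a == b).all() for a, b in zip(folds_a, folds_b)))  # prints True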

  4. Guillaume Martin October 28, 2016 at 4:39 pm #

    Thank you for sharing.
    I had to tweak the code a little to make it work with scikit-learn 0.18.
    The cross_validation module is deprecated. It’s replaced by model_selection.
    The KFold parameters have changed too:
    0.17: cross_validation.KFold(n, n_folds=3, shuffle=False, random_state=None)
    0.18: model_selection.KFold(n_splits=3, shuffle=False, random_state=None)

    I have a question: is it ok to train the classifier before adding it to the list? Like:
    lr = LogisticRegression()
    lr.fit(X_train, y_train)
    models.append(('LR', lr))

    • Jason Brownlee October 29, 2016 at 7:36 am #

      Thanks Guillaume, I will look at updating the example. I have recently updated all of my books to support the new sklearn.

      No, the structure of the example fits and evaluates each model in turn. Your example essentially unrolls the for loop.

  5. Angela December 23, 2016 at 9:15 am #

    What a great article! I learned so much from your writing 🙂
    I also read your other article comparing different algorithms in R, and I noticed that you used a lot more techniques in that article:
    • Table Summary
    • Box and Whisker Plots
    • Density Plots
    • Dot Plots
    • Parallel Plots
    • Scatterplot Matrix
    • Pairwise xyPlots
    • Statistical Significance Tests
    I was wondering why you did not provide the same techniques in this Python article? Is it because these functions are more readily available in R?
    Thanks so much!

    • Jason Brownlee December 23, 2016 at 10:18 am #

      Great question.

      These capabilities are available in Python, but are spread through the scipy and statsmodels libs rather than directly available in sklearn.

      R is a more technical platform for more technical types, I tend to go into more detail in those examples.

      Is this something you would like to see more of Angela?

  6. Suleyman Sahal December 31, 2016 at 9:14 am #

    Hi Jason. Thank you for these great articles. I also read this article of yours (https://goo.gl/v71GPT). What I wonder is the proper validation method. Should we conduct k-fold or repeated n*k-fold cross validation? I recently read a journal article where researchers compare around 50 models under 5*2-folds setting, suggesting it is more robust. How should we proceed while comparing models?

    • Jason Brownlee January 1, 2017 at 5:23 am #

      Hi Suleyman,

      Using k-fold cross validation is a gold standard. The specific configuration is problem specific, but common configurations of 3, 5, and 10 folds do well on many datasets.

      On very large datasets, a train-test split may be sufficient. For complex or small datasets, if you have the resources, repeated k-fold cross validation is preferred. Often, we would like to use repeated k-fold cross validation, but the computational expense is too high.

      There is no “best”, just lots of options to tune for your given problem.

      How do you choose?

      Balance your constraints (amount of data, resources, time, ..) against your requirements (robustness of result).
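
      For example, the 5*2 setting mentioned above maps onto RepeatedKFold in recent versions of scikit-learn. A sketch, with a bundled dataset standing in for your own data:

      from sklearn.datasets import load_breast_cancer
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import RepeatedKFold, cross_val_score

      # 2-fold cross validation repeated 5 times (the 5*2 setting)
      X, Y = load_breast_cancer(return_X_y=True)
      rkf = RepeatedKFold(n_splits=2, n_repeats=5, random_state=7)
      scores = cross_val_score(LogisticRegression(max_iter=5000), X, Y, cv=rkf, scoring='accuracy')
      print('%.3f (%.3f) over %d evaluations' % (scores.mean(), scores.std(), len(scores)))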

      • Suleyman Sahal January 1, 2017 at 5:54 am #

        Thank you Jason. That is what I have been doing.

  7. Othmane January 5, 2017 at 3:16 am #

    Hi Jason,

    Thanks a lot for this good article.
    Could you please give some interpretations of the standard deviation values?
    Especially regarding overfitting.
    I thought that in case we have a small standard deviation of the cv results, we will have more overfitting, but I am not sure about that.

    Thanks

    • Jason Brownlee January 5, 2017 at 9:37 am #

      Hi Othmane, great question.

      So standard deviation summarizes the spread of the distribution, assuming it is Gaussian.

      A tight spread may suggest overfitting or it may not, but we can only be sure by evaluating the model on a hold out dataset.

      One use of the stdev is to specify a confidence interval for the result. For example, the performance of the model is estimated as x% on unseen data, with the true performance expected to fall within 2 standard deviations of that score (approximately 95% confidence).

      This might help:
      https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule

      Thanks for the prompt, this topic of interpreting results is not discussed enough. I plan to write more about it.
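
      As a rough sketch of that interval (the numbers below are illustrative values only):

      # approximate 95% confidence interval from a cross validation mean and stdev
      mean, std = 0.770, 0.048  # illustrative values only
      lower, upper = mean - 2 * std, mean + 2 * std
      print('Accuracy is likely between %.3f and %.3f' % (lower, upper))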

  8. Dhrubajit January 31, 2017 at 8:20 pm #

    Hi, from the boxplot, we get LR and LDA to have higher accuracy, so we select them as our models.
    So now, can I apply train_test_split to check the RMSE and accuracy on the test data using both these models, and whichever gives the best result, make that my final model?

    • Jason Brownlee February 1, 2017 at 10:47 am #

      Hi Dhrubajit,

      There are many ways to choose a final model. Often we prefer a model with better average performance rather than better absolute performance; this is because of the natural variance in the estimation of the performance of the models on unseen data.

      Once you choose a final model, train it on all available data and you can start to use it to make predictions.

  9. Peter February 3, 2017 at 11:26 pm #

    Great article! Could you please explain why this program doesn’t work when Y is float?

    • Jason Brownlee February 4, 2017 at 10:01 am #

      Hi Peter,

      Classification problems assume the outcome is a label.

  10. Edmond Sesay March 24, 2017 at 8:55 pm #

    great post! thanks for sharing

  11. Nitesh May 2, 2017 at 4:04 pm #

    Hi Jason,

    I have started learning and implementing Machine learning algorithms.
    One question – the above blog tells us which machine learning algorithm to go with. However, if we are using regression, should we also check how well the regression fits the data by checking autocorrelation, multicollinearity and normality?

    What I have learnt from reading blogs and articles is that we all calculate a score using cross validation methodology, and then find out which would fit best. I have not seen anyone following traditional ways such as checking autocorrelation, multicollinearity and normality. I might be wrong. Please throw some light on the same.
    Thanks, Nitesh

    • Jason Brownlee May 3, 2017 at 7:31 am #

      Yes, on time series, an understanding of the autocorrelation is practically required.

      When using a linear method, an idea of multicollinearity can be helpful.

      I would suggest this type of analysis before investigating models to get a better idea of the structure of your problem.
