Machine Learning Algorithm Recipes in scikit-learn

You have to get your hands dirty.

You can read all of the blog posts and watch all the videos in the world, but you’re not really going to get machine learning until you start practicing.

The scikit-learn Python library is very easy to get up and running. Nevertheless, I see a lot of hesitation from beginners looking to get started. In this blog post I want to give a few very simple examples of using scikit-learn for some supervised classification algorithms.

(Image: the mean-shift clustering algorithm)

Scikit-Learn Recipes

You don’t need to know about or use all of the algorithms in scikit-learn, at least not initially. Pick one or two (or a handful) and practice with only those.

In this post you will see 5 recipes of supervised classification algorithms applied to small standard datasets that are provided with the scikit-learn library.

The recipes are principled. Each example is:

  • Standalone: Each code example is a self-contained, complete and executable recipe.
  • Just Code: The focus of each recipe is on the code with minimal exposition on machine learning theory.
  • Simple: Recipes present the common use case, which is probably what you are looking to do.
  • Consistent: All code examples are presented consistently and follow the same code pattern and style conventions.

The recipes do not explore the parameters of a given algorithm. They provide a skeleton that you can copy and paste into your file, project or Python REPL and start to play with immediately.

These recipes show you that you can get started practicing with scikit-learn right now. Stop putting it off.


Logistic Regression

Logistic regression fits a logistic model to the data and makes predictions about the probability of an event (a value between 0 and 1).

This recipe shows the fitting of a logistic regression model to the iris dataset. Because this is a multi-class classification problem and logistic regression makes predictions between 0 and 1, a one-vs-all scheme is used (one model per class).
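
A minimal sketch of such a recipe (fitting and, for simplicity, evaluating on the full iris dataset; the max_iter value is an illustrative choice so the default solver converges, not a tuned parameter):

    # Logistic Regression
    from sklearn import datasets, metrics
    from sklearn.linear_model import LogisticRegression

    # load the iris dataset bundled with scikit-learn
    dataset = datasets.load_iris()

    # fit a logistic regression model to the data
    # (recent scikit-learn versions handle the multi-class case internally)
    model = LogisticRegression(max_iter=200)
    model.fit(dataset.data, dataset.target)

    # make predictions on the training data (for illustration only)
    expected = dataset.target
    predicted = model.predict(dataset.data)

    # summarize the fit of the model
    print(metrics.classification_report(expected, predicted))
    print(metrics.confusion_matrix(expected, predicted))

The same load / fit / predict / report pattern is reused in each of the recipes below.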

See the API reference for Logistic Regression for details on configuring the algorithm’s parameters, and the Logistic Regression section of the user guide.

Naive Bayes

Naive Bayes uses Bayes Theorem to model the conditional relationship of each attribute to the class variable.

This recipe shows the fitting of a Naive Bayes model to the iris dataset.
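
A minimal sketch in the same pattern, using the GaussianNB class (again fitting and evaluating on the full iris dataset for illustration):

    # Gaussian Naive Bayes
    from sklearn import datasets, metrics
    from sklearn.naive_bayes import GaussianNB

    # load the iris dataset
    dataset = datasets.load_iris()

    # fit a Naive Bayes model to the data
    model = GaussianNB()
    model.fit(dataset.data, dataset.target)

    # make predictions on the training data (for illustration only)
    expected = dataset.target
    predicted = model.predict(dataset.data)

    # summarize the fit of the model
    print(metrics.classification_report(expected, predicted))
    print(metrics.confusion_matrix(expected, predicted))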

See the API reference for Gaussian Naive Bayes for details on configuring the algorithm’s parameters, and the Naive Bayes section of the user guide.

k-Nearest Neighbor

The k-Nearest Neighbor (kNN) method makes predictions by locating the cases most similar to a given data instance (using a similarity function) and returning the average or majority vote of those most similar instances. The kNN algorithm can be used for classification or regression.

This recipe shows use of the kNN model to make predictions for the iris dataset.
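
A minimal sketch using KNeighborsClassifier (the default of 5 neighbors is kept; as before, evaluation on the training data is for illustration only):

    # k-Nearest Neighbor
    from sklearn import datasets, metrics
    from sklearn.neighbors import KNeighborsClassifier

    # load the iris dataset
    dataset = datasets.load_iris()

    # fit a kNN model to the data (default n_neighbors=5)
    model = KNeighborsClassifier()
    model.fit(dataset.data, dataset.target)

    # make predictions on the training data (for illustration only)
    expected = dataset.target
    predicted = model.predict(dataset.data)

    # summarize the fit of the model
    print(metrics.classification_report(expected, predicted))
    print(metrics.confusion_matrix(expected, predicted))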

See the API reference for the k-Nearest Neighbor classifier for details on configuring the algorithm’s parameters, and the k-Nearest Neighbor section of the user guide.

Classification and Regression Trees

Classification and Regression Trees (CART) are constructed from a dataset by making splits that best separate the data for the classes or predictions being made. The CART algorithm can be used for classification or regression.

This recipe shows use of the CART model to make predictions for the iris dataset.
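
A minimal sketch using scikit-learn’s DecisionTreeClassifier, which implements a CART-style algorithm (note that an unconstrained tree will fit the training data almost perfectly, which is why evaluating on held-out data matters in practice):

    # Decision Tree Classifier (CART)
    from sklearn import datasets, metrics
    from sklearn.tree import DecisionTreeClassifier

    # load the iris dataset
    dataset = datasets.load_iris()

    # fit a CART model to the data
    model = DecisionTreeClassifier()
    model.fit(dataset.data, dataset.target)

    # make predictions on the training data (for illustration only)
    expected = dataset.target
    predicted = model.predict(dataset.data)

    # summarize the fit of the model
    print(metrics.classification_report(expected, predicted))
    print(metrics.confusion_matrix(expected, predicted))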

See the API reference for CART (the DecisionTreeClassifier class) for details on configuring the algorithm’s parameters, and the Decision Tree section of the user guide.

Support Vector Machines

Support Vector Machines (SVM) find the points in a transformed problem space (the support vectors) that best separate the classes into two groups. Classification for multiple classes is supported by decomposing the problem into multiple binary problems (for example, one-vs-all). SVM also supports regression by modeling the function with a minimum amount of allowable error.

This recipe shows use of the SVM model to make predictions for the iris dataset.
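
A minimal sketch using scikit-learn’s SVC class (an RBF kernel by default; as with the other recipes, evaluation on the training data is for illustration only):

    # Support Vector Machine
    from sklearn import datasets, metrics
    from sklearn.svm import SVC

    # load the iris dataset
    dataset = datasets.load_iris()

    # fit an SVM model to the data (RBF kernel by default)
    model = SVC()
    model.fit(dataset.data, dataset.target)

    # make predictions on the training data (for illustration only)
    expected = dataset.target
    predicted = model.predict(dataset.data)

    # summarize the fit of the model
    print(metrics.classification_report(expected, predicted))
    print(metrics.confusion_matrix(expected, predicted))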

See the API reference for SVM for details on configuring the algorithm’s parameters, and the SVM section of the user guide.

Summary

In this post you have seen 5 self-contained recipes demonstrating some of the most popular and powerful supervised classification algorithms.

Each example is fewer than 20 lines of code that you can copy and paste to start using scikit-learn right now. Stop reading and start practicing. Pick one recipe and run it, then start to play with the parameters and see what effect that has on the results.




15 Responses to Machine Learning Algorithm Recipes in scikit-learn

  1. DR Venugopala Rao Manneni April 7, 2016 at 5:31 pm #

    Thanks for these Jason. Can you also please give the same for Neural networks (MLP)

  2. Ajinkya June 12, 2016 at 8:48 am #

    Thanks for this informative tutorial.
    Can you please explain how logistic regression is used for classification when more than 2 classes are involved?
    Thanks

    • Jason Brownlee June 14, 2016 at 8:14 am #

      Great question Ajinkya.

      Generally, you can take an algorithm designed for binary (two-class) classification and turn it into a multi-class classification algorithm by using the one-vs-all meta algorithm. You create n models, where n is the number of classes. Each model makes a prediction to provide a vector of predictions, and the final prediction can be taken as the class whose model produced the highest probability.

      This can be used with logistic regression and is very popular with support vector machines.

      More on the one-vs-all meta algorithm here:
      https://en.wikipedia.org/wiki/Multiclass_classification
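
      For example, a minimal sketch with scikit-learn’s OneVsRestClassifier wrapper (one logistic regression model is fit per iris class; max_iter is an illustrative choice):

        from sklearn import datasets
        from sklearn.linear_model import LogisticRegression
        from sklearn.multiclass import OneVsRestClassifier

        X, y = datasets.load_iris(return_X_y=True)
        # wrap a binary classifier: one model is fit per class
        model = OneVsRestClassifier(LogisticRegression(max_iter=200))
        model.fit(X, y)
        print(len(model.estimators_))  # 3 fitted binary models, one per class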

  3. Nicolas November 23, 2016 at 1:12 am #

    Hey

    Thank you very much for these helpful examples! I searched a lot until I found this website. You actually saved me a lot of time and nerves while doing an assignment for my ML course at my university 🙂

    Keep up the great work!

  4. Gill Bates February 11, 2017 at 3:18 am #

    Dear Jason,
    Great job.
    Can you please show how to implement other algorithms or “how to catch fish”?
    Tks.

  5. lalit April 6, 2017 at 9:32 pm #

    Test data should not be used for training. Here you are using the full training data as test data, which is wrong.

    • Jason Brownlee April 9, 2017 at 2:39 pm #

      Yes, I agree. These are just examples on how to fit models in sklearn.

  6. Brian Tremaine July 28, 2017 at 3:17 am #

    Thank you for this tutorial, very helpful.

    I have run the MNIST character recognition using Naive Bayes (GaussianNB) and the results were very poor compared to nearest neighbors. Is there an sklearn function for Bayes that uses priors? I’ve searched but haven’t found anything.

    Thanks,
    Brian

    • Jason Brownlee July 28, 2017 at 8:33 am #

      I would expect that naive Bayes in sklearn would use priors.

      The only time priors are dropped is when they add nothing to the equation (e.g. both classes have the same number of obs).
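
      For example, GaussianNB accepts explicit priors via its priors parameter, and otherwise estimates them from the class frequencies in the training data (a sketch; the values shown are arbitrary):

        from sklearn.naive_bayes import GaussianNB

        # explicit class priors: one value per class, must sum to 1
        model = GaussianNB(priors=[0.2, 0.3, 0.5])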

  7. Jarrell R Dunson October 24, 2017 at 6:53 am #

    Question…I’m trying the code for sklearn.naive_bayes import GaussianNB

    but this doesn’t seem to work from Python 3.5 or 3.6 …

    Is this only to run in Python 2?

    • Jason Brownlee October 24, 2017 at 3:57 pm #

      No. It works with py2 and py3.

      Perhaps double check your version of sklearn?

  8. Jarrell R Dunson October 25, 2017 at 12:51 am #

    Thanks… upgraded sklearn, and it works
