Naive Bayes Tutorial for Machine Learning

Naive Bayes is a simple classification algorithm that makes the strong assumption that the input variables are independent of one another.

Nevertheless, it has been shown to be effective in a large number of problem domains. In this post you will discover the Naive Bayes algorithm for categorical data. After reading this post, you will know:

  • How to work with categorical data for Naive Bayes.
  • How to prepare the class and conditional probabilities for a Naive Bayes model.
  • How to use a learned Naive Bayes model to make predictions.

This post was written for developers and does not assume a background in statistics or probability. Open a spreadsheet and follow along. If you have any questions about Naive Bayes, ask in the comments and I will do my best to answer.

Let’s get started.

Tutorial Dataset

The dataset is contrived. It describes two categorical input variables and a class variable that has two output values. The 10 instances are:

Weather  Car      Class
sunny    working  go-out
rainy    broken   go-out
sunny    working  go-out
sunny    working  go-out
sunny    working  go-out
rainy    broken   stay-home
rainy    broken   stay-home
sunny    working  stay-home
sunny    broken   stay-home
rainy    broken   stay-home

We can convert this into numbers. Each input has only two values and the output class variable has two values. We can convert each variable to binary as follows:

Variable: Weather

  • sunny = 1
  • rainy = 0

Variable: Car

  • working = 1
  • broken = 0

Variable: Class

  • go-out = 1
  • stay-home = 0

Therefore, we can restate the dataset as:

Weather  Car  Class
1        1    1
0        0    1
1        1    1
1        1    1
1        1    1
0        0    0
0        0    0
1        1    0
1        0    0
0        0    0

This can make the data easier to work with in a spreadsheet or code if you are following along.
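If you are coding along, the conversion is just a few dictionary lookups. Here is a minimal Python sketch (my own illustration, not part of the original tutorial; the dictionary and function names are mine):

# Binary encodings for each variable, as defined above.
weather_code = {"sunny": 1, "rainy": 0}
car_code = {"working": 1, "broken": 0}
class_code = {"go-out": 1, "stay-home": 0}

def encode(weather, car, cls):
    # Map one categorical record to its binary representation.
    return (weather_code[weather], car_code[car], class_code[cls])

print(encode("sunny", "working", "go-out"))  # (1, 1, 1)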

Learn a Naive Bayes Model

There are two types of quantities that need to be calculated from the dataset for a Naive Bayes model:

  • Class Probabilities.
  • Conditional Probabilities.

Let’s start with the class probabilities.

Calculate the Class Probabilities

The dataset is a two-class problem and we already know the probability of each class because we contrived the dataset.

Nevertheless, we can calculate the class probabilities for classes 0 and 1 as follows:

  • P(class=1) = count(class=1) / (count(class=0) + count(class=1))
  • P(class=0) = count(class=0) / (count(class=0) + count(class=1))

or

  • P(class=1) = 5 / (5 + 5)
  • P(class=0) = 5 / (5 + 5)

This works out to be a probability of 0.5 for any given data instance belonging to class 0 or class 1.
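If you would rather follow along in code than a spreadsheet, here is a minimal Python sketch of this calculation (my own illustration, not part of the original tutorial; the dataset list below is the binary version of the contrived dataset above):

# Class probabilities from the contrived dataset.
# Each row is (weather, car, class) using the binary coding above:
# sunny=1/rainy=0, working=1/broken=0, go-out=1/stay-home=0.
dataset = [
    (1, 1, 1), (0, 0, 1), (1, 1, 1), (1, 1, 1), (1, 1, 1),
    (0, 0, 0), (0, 0, 0), (1, 1, 0), (1, 0, 0), (0, 0, 0),
]
count_1 = sum(1 for row in dataset if row[2] == 1)  # count(class=1) = 5
count_0 = sum(1 for row in dataset if row[2] == 0)  # count(class=0) = 5
p_class_1 = count_1 / (count_0 + count_1)  # P(class=1) = 0.5
p_class_0 = count_0 / (count_0 + count_1)  # P(class=0) = 0.5
print(p_class_1, p_class_0)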

Calculate the Conditional Probabilities

The conditional probabilities are the probability of each input value given each class value.

The conditional probabilities for the dataset can be calculated as follows:

Weather Input Variable

  • P(weather=sunny|class=go-out) = count(weather=sunny and class=go-out) / count(class=go-out)
  • P(weather=rainy|class=go-out) = count(weather=rainy and class=go-out) / count(class=go-out)
  • P(weather=sunny|class=stay-home) = count(weather=sunny and class=stay-home) / count(class=stay-home)
  • P(weather=rainy|class=stay-home) = count(weather=rainy and class=stay-home) / count(class=stay-home)

Plugging in the numbers we get:

  • P(weather=sunny|class=go-out) = 0.8
  • P(weather=rainy|class=go-out) = 0.2
  • P(weather=sunny|class=stay-home) = 0.4
  • P(weather=rainy|class=stay-home) = 0.6

Car Input Variable

  • P(car=working|class=go-out) = count(car=working and class=go-out) / count(class=go-out)
  • P(car=broken|class=go-out) = count(car=broken and class=go-out) / count(class=go-out)
  • P(car=working|class=stay-home) = count(car=working and class=stay-home) / count(class=stay-home)
  • P(car=broken|class=stay-home) = count(car=broken and class=stay-home) / count(class=stay-home)

Plugging in the numbers we get:

  • P(car=working|class=go-out) = 0.8
  • P(car=broken|class=go-out) = 0.2
  • P(car=working|class=stay-home) = 0.2
  • P(car=broken|class=stay-home) = 0.8
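As a sketch of the same counting recipe in code (again my own illustration; the conditional helper below is an assumed name, not from any library):

# Conditional probabilities: count(input=value and class=cls) / count(class=cls).
dataset = [
    (1, 1, 1), (0, 0, 1), (1, 1, 1), (1, 1, 1), (1, 1, 1),
    (0, 0, 0), (0, 0, 0), (1, 1, 0), (1, 0, 0), (0, 0, 0),
]
WEATHER, CAR, CLASS = 0, 1, 2  # column indexes into each row

def conditional(data, feature, value, cls):
    # P(feature=value | class=cls) as a ratio of filtered counts.
    in_class = [row for row in data if row[CLASS] == cls]
    matching = [row for row in in_class if row[feature] == value]
    return len(matching) / len(in_class)

print(conditional(dataset, WEATHER, 1, 1))  # P(weather=sunny|class=go-out) = 0.8
print(conditional(dataset, CAR, 1, 0))      # P(car=working|class=stay-home) = 0.2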

We now have everything we need to make predictions using the Naive Bayes model.

Make Predictions with Naive Bayes

We can make predictions using Bayes Theorem.

P(h|d) = (P(d|h) * P(h)) / P(d)

Where:

  • P(h|d) is the probability of hypothesis h given the data d. This is called the posterior probability.
  • P(d|h) is the probability of data d given that the hypothesis h was true.
  • P(h) is the probability of hypothesis h being true (regardless of the data). This is called the prior probability of h.
  • P(d) is the probability of the data (regardless of the hypothesis).

In fact, we don’t need a full probability to predict the most likely class for a new data instance, because the denominator P(d) is the same for every class. We only need the numerator: the class that gives the largest response is the predicted output.

MAP(h) = max(P(d|h) * P(h))

Let’s take the first record from our dataset and use our learned model to predict which class we think it belongs to.

weather=sunny, car=working

We plug the probabilities for our model in for both classes and calculate the response, starting with the response for the output “go-out”. We multiply the conditional probabilities together and multiply the result by the probability of any instance belonging to the class.

  • go-out = P(weather=sunny|class=go-out) * P(car=working|class=go-out) * P(class=go-out)
  • go-out = 0.8 * 0.8 * 0.5
  • go-out = 0.32

We can perform the same calculation for the stay-home case:

  • stay-home = P(weather=sunny|class=stay-home) * P(car=working|class=stay-home) * P(class=stay-home)
  • stay-home = 0.4 * 0.2 * 0.5
  • stay-home = 0.04

We can see that 0.32 is greater than 0.04, therefore we predict “go-out” for this instance, which is correct.
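In code, the same scoring looks like this (a sketch that hard-codes the probabilities learned above rather than recomputing them; the priors and cond names are my own):

# Score both classes for the instance weather=sunny, car=working.
priors = {"go-out": 0.5, "stay-home": 0.5}
cond = {
    "go-out":    {("weather", "sunny"): 0.8, ("weather", "rainy"): 0.2,
                  ("car", "working"): 0.8, ("car", "broken"): 0.2},
    "stay-home": {("weather", "sunny"): 0.4, ("weather", "rainy"): 0.6,
                  ("car", "working"): 0.2, ("car", "broken"): 0.8},
}
instance = {"weather": "sunny", "car": "working"}
for label in ("go-out", "stay-home"):
    score = priors[label]
    for feature, value in instance.items():
        score *= cond[label][(feature, value)]  # multiply in each conditional
    print(label, score)  # go-out 0.32, stay-home 0.04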

We can repeat this operation for the entire dataset, as follows:

Weather  Car      go-out score  stay-home score  Prediction  Actual
sunny    working  0.32          0.04             go-out      go-out
rainy    broken   0.02          0.24             stay-home   go-out
sunny    working  0.32          0.04             go-out      go-out
sunny    working  0.32          0.04             go-out      go-out
sunny    working  0.32          0.04             go-out      go-out
rainy    broken   0.02          0.24             stay-home   stay-home
rainy    broken   0.02          0.24             stay-home   stay-home
sunny    working  0.32          0.04             go-out      stay-home
sunny    broken   0.08          0.16             stay-home   stay-home
rainy    broken   0.02          0.24             stay-home   stay-home

If we tally up the predictions compared to the actual class values, we get an accuracy of 80% (8 of 10 correct), which is excellent given that there are conflicting examples in the dataset.
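Here is a sketch of that loop in Python (again hard-coding the learned probabilities, this time keyed by the binary codes; the predict helper is illustrative only):

# Predict every instance in the dataset and measure accuracy.
dataset = [  # (weather, car, class) in binary: sunny/working/go-out = 1
    (1, 1, 1), (0, 0, 1), (1, 1, 1), (1, 1, 1), (1, 1, 1),
    (0, 0, 0), (0, 0, 0), (1, 1, 0), (1, 0, 0), (0, 0, 0),
]
priors = {1: 0.5, 0: 0.5}
cond = {  # cond[class][(feature_index, feature_value)]
    1: {(0, 1): 0.8, (0, 0): 0.2, (1, 1): 0.8, (1, 0): 0.2},  # go-out
    0: {(0, 1): 0.4, (0, 0): 0.6, (1, 1): 0.2, (1, 0): 0.8},  # stay-home
}

def predict(weather, car):
    # Return the class with the largest response P(d|h) * P(h).
    scores = {c: priors[c] * cond[c][(0, weather)] * cond[c][(1, car)] for c in (0, 1)}
    return max(scores, key=scores.get)

correct = sum(1 for w, c, y in dataset if predict(w, c) == y)
print(correct / len(dataset))  # 0.8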

Summary

In this post you discovered exactly how to implement Naive Bayes from scratch. You learned:

  • How to work with categorical data for Naive Bayes.
  • How to calculate class probabilities from training data.
  • How to calculate conditional probabilities from training data.
  • How to use a learned Naive Bayes model to make predictions on new data.

Do you have any questions about Naive Bayes or this post?
Ask your question by leaving a comment and I will do my best to answer it.



18 Responses to Naive Bayes Tutorial for Machine Learning

  1. Royi April 13, 2016 at 6:26 am #

    You really need to add MathJax to your blog for proper math.

    • Jason Brownlee April 13, 2016 at 7:27 am #

      You are probably right, but not using crisp equations might make them less intimidating to developers.

  2. Ben October 23, 2016 at 8:34 pm #

    Hi Jason, I’m very happy with how accessible your articles are to beginners! This one was very clear and simple, so thanks!

    I was wondering how this Naive Bayes classifier might be used to make *predictions*? For example, your original data set does not include the following combination:
    – weather=rainy
    – car=working.
    How might you use the model to predict the probability of each class given these two inputs?

    • Jason Brownlee October 24, 2016 at 7:04 am #

      Thanks Ben,

      The section in the post titled “Make Predictions with Naive Bayes” explains how to make a prediction given a new observation.

  3. eng adebayo March 31, 2017 at 2:36 am #

    Your code is giving these errors:
    File "C:\Users\Eng Adebayo\Documents\pybrain-master\naive-bayes.py", line 103, in <module>
    main()
    File "C:\Users\Eng Adebayo\Documents\pybrain-master\naive-bayes.py", line 93, in main
    dataset = loadCsv(filename)
    File "C:\Users\Eng Adebayo\Documents\pybrain-master\naive-bayes.py", line 12, in loadCsv
    dataset[i] = [float(x) for x in dataset[i]]
    File "C:\Users\Eng Adebayo\Documents\pybrain-master\naive-bayes.py", line 12, in <listcomp>
    dataset[i] = [float(x) for x in dataset[i]]
    ValueError: could not convert string to float:
    >>>

  4. Zaidan14 May 2, 2017 at 4:52 pm #

    Hello,
    I could not find the SVM in the Algorithm Mind Map! To which category does it belong?
    Thanks a lot!

  5. Luky July 27, 2017 at 4:54 am #

    Hi, this is a great entry-level article, especially for those who are not as strong in math as me (on the wiki it looks much stranger and harder than here). However, maybe I am getting it wrong, but are the conditional probabilities correct?

    P(weather=sunny|class=go-out) = count(weather=sunny and class=go-out) / count(class=go-out)

    Let’s take another example. Let’s say that the input will be profession, and the output will be introvert/extrovert. Let’s say I will ask 100 people what their profession is and whether they are introverted or extroverted.

    Now, let’s say I will meet only 3 IT workers, and all will be introverts. The remaining 97 will be other professions and introverts too (just for simplicity).

    Now, if I follow your example, then the probability that an IT worker is an introvert is:

    P(profession=IT|class=introvert) = count(profession=IT and class=introvert) / count(class=introvert)

    That is:

    3 / 100 = 3% of IT workers are introverts. But is it correct? We asked 3 IT people and all said yes. Shouldn’t it be:

    P(profession=IT|class=introvert) = count(profession=IT and class=introvert) / count(profession=IT) ?

    Now we will get 100%, which reflects our sample. Or? Maybe it is similar with the weather example?

    And it should be

    P(weather=sunny|class=go-out) = count(weather=sunny and class=go-out) / count(weather=sunny) ?

    Best, Luky

    • Luky July 27, 2017 at 5:30 am #

      Maybe both ratios have some meaning, but your ratio seems to be more vulnerable to the structure of the data. Let’s say I have only 3 IT workers out of 10k people; then the ratio of IT introverts will be 3/10k introverts (let’s say), which will be a very low number, while the second ratio (IT introverts / IT workers) isn’t vulnerable to an unequally distributed data sample (it says 100% even in the case of 3 IT workers). But that’s just my opinion, I am curious about your view of things :).

    • Jason Brownlee July 27, 2017 at 8:15 am #

      The math is right, but you are asking different questions.

      What is the probability of working in IT given that the person is an introvert, versus the probability of being an introvert given that we know the person works in IT.

  6. Luky July 27, 2017 at 4:57 am #

    Correction: “3% of IT workers are introverts” = a 3% chance that an IT worker is an introvert.

  7. Luky July 27, 2017 at 6:28 am #

    But now I see that, following the article’s ratio probabilities, in my case the extrovert probability would be 0, and that would be beaten by the 4% introvert, thus resulting in introvert, and that’s a correct prediction… so it makes some sense 🙂 bah, math and logic are scary 😛

  8. Milind Mahajani August 18, 2017 at 8:11 pm #

    Great post — so well explained!

    Please give me a pointer to how I can predict a class when there are more than two input variables, such as “got a novel to finish” and “feeling tired”, in addition to the weather and car.

    How does one extend the method in such cases?

  9. Anebras Ahmed October 29, 2017 at 5:06 am #

    Hi
    I need to use Naive Bayes to detect and prevent SQL injection.
    What is the best model for this purpose?
    And if the classifier flags a request from a client as a SQL injection attack, what should happen after that? Should I redirect the request to another page, make the server time out, or pass the request on to a proxy?
    Thanks ^_^

  10. Abubakar Bello July 5, 2018 at 12:40 am #

    Hi!
    Well done, Dr. Brownlee. But I still need your assistance: can you show me how to calculate the conditional probabilities for a training dataset like the Pima Indians Diabetes dataset?
