How To Implement Naive Bayes From Scratch in Python

The Naive Bayes algorithm is simple and effective and should be one of the first methods you try on a classification problem.

In this tutorial you are going to learn about the Naive Bayes algorithm including how it works and how to implement it from scratch in Python.

Discover how to code ML algorithms from scratch including kNN, decision trees, neural nets, ensembles and much more in my new book, with full Python code and no fancy libraries.

Naive Bayes Classifier
Photo by Matt Buck, some rights reserved

About Naive Bayes

The Naive Bayes algorithm is an intuitive method that uses the probabilities of each attribute belonging to each class to make a prediction. It is the supervised learning approach you would come up with if you wanted to model a predictive problem probabilistically.

Naive Bayes simplifies the calculation of probabilities by assuming that the probability of each attribute belonging to a given class value is independent of all other attributes. This is a strong assumption but results in a fast and effective method.

The probability of a class value given a value of an attribute is called the conditional probability. By multiplying the conditional probabilities together for each attribute for a given class value, we have a probability of a data instance belonging to that class.

To make a prediction we can calculate probabilities of the instance belonging to each class and select the class value with the highest probability.

Naive Bayes is often described using categorical data because it is easy to describe and calculate probabilities using ratios. A more useful version of the algorithm for our purposes supports numeric attributes and assumes the values of each numerical attribute are normally distributed (fall somewhere on a bell curve). Again, this is a strong assumption, but it still gives robust results.

Predict the Onset of Diabetes

The test problem we will use in this tutorial is the Pima Indians Diabetes problem.

This problem consists of 768 observations of medical details for Pima Indian patients. The records describe instantaneous measurements taken from the patient, such as their age, the number of times pregnant and blood workup. All patients are women aged 21 or older. All attributes are numeric, and their units vary from attribute to attribute.

Each record has a class value that indicates whether the patient suffered an onset of diabetes within 5 years of when the measurements were taken (1) or not (0).

This is a standard dataset that has been studied a lot in machine learning literature. A good prediction accuracy is 70%-76%.

Below is a sample from the file to get a sense of the data we will be working with (update: download from here).

Naive Bayes Algorithm Tutorial

This tutorial is broken down into the following steps:

  1. Handle Data: Load the data from CSV file and split it into training and test datasets.
  2. Summarize Data: summarize the properties in the training dataset so that we can calculate probabilities and make predictions.
  3. Make a Prediction: Use the summaries of the dataset to generate a single prediction.
  4. Make Predictions: Generate predictions given a test dataset and a summarized training dataset.
  5. Evaluate Accuracy: Evaluate the accuracy of predictions made for a test dataset as the percentage correct out of all predictions made.
  6. Tie it Together: Use all of the code elements to present a complete and standalone implementation of the Naive Bayes algorithm.

1. Handle Data

The first thing we need to do is load our data file. The data is in CSV format without a header line or any quotes. We can open the file with the open function and read the data lines using the reader function in the csv module.

We also need to convert the attributes that were loaded as strings into numbers so that we can work with them. Below is the loadCsv() function for loading the Pima Indians dataset.
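A minimal sketch of what loadCsv() might look like, assuming every field in the file is numeric (as it is in this dataset):

```python
import csv

def loadCsv(filename):
    # Read the CSV file and convert every attribute in every
    # row from a string to a float.
    with open(filename, 'r') as handle:
        lines = csv.reader(handle)
        dataset = [[float(value) for value in row] for row in lines]
    return dataset
```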

We can test this function by loading the Pima Indians dataset and printing the number of data instances that were loaded.

Running this test, you should see something like:

Next we need to split the data into a training dataset that Naive Bayes can use to make predictions and a test dataset that we can use to evaluate the accuracy of the model. We need to split the dataset randomly into train and test datasets with a ratio of 67% train and 33% test (this is a common ratio for testing an algorithm on a dataset).

Below is the splitDataset() function that will split a given dataset using a given split ratio.
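A sketch of splitDataset(), which randomly draws rows into the training set until it reaches the desired size:

```python
import random

def splitDataset(dataset, splitRatio):
    # Randomly pop rows from a copy of the dataset into the training
    # set; whatever remains in the copy becomes the test set.
    trainSize = int(len(dataset) * splitRatio)
    trainSet = []
    copy = list(dataset)
    while len(trainSet) < trainSize:
        index = random.randrange(len(copy))
        trainSet.append(copy.pop(index))
    return trainSet, copy
```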

We can test this out by defining a mock dataset with 5 instances, splitting it into training and testing datasets and printing them out to see which data instances ended up where.

Running this test, you should see something like:

2. Summarize Data

The Naive Bayes model consists of a summary of the data in the training dataset. This summary is then used when making predictions.

The summary of the training data collected involves the mean and the standard deviation for each attribute, by class value. For example, if there are two class values and 7 numerical attributes, then we need a mean and standard deviation for each attribute (7) and class value (2) combination, that is 14 attribute summaries.

These are required when making predictions to calculate the probability of specific attribute values belonging to each class value.

We can break the preparation of this summary data down into the following sub-tasks:

  1. Separate Data By Class
  2. Calculate Mean
  3. Calculate Standard Deviation
  4. Summarize Dataset
  5. Summarize Attributes By Class

Separate Data By Class

The first task is to separate the training dataset instances by class value so that we can calculate statistics for each class. We can do that by creating a map of each class value to a list of instances that belong to that class, and sorting the entire dataset of instances into the appropriate lists.

The separateByClass() function below does just this.
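A sketch of separateByClass(), which builds the map of class value to instances:

```python
def separateByClass(dataset):
    # Map each class value (the last attribute of each row) to the
    # list of rows that belong to that class.
    separated = {}
    for vector in dataset:
        separated.setdefault(vector[-1], []).append(vector)
    return separated
```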

You can see that the function assumes that the last attribute (-1) is the class value. The function returns a map of class values to lists of data instances.

We can test this function with some sample data, as follows:

Running this test, you should see something like:

Calculate Mean

We need to calculate the mean of each attribute for a class value. The mean is the central tendency of the data, and we will use it as the middle of our Gaussian distribution when calculating probabilities.

Calculate Standard Deviation

We also need to calculate the standard deviation of each attribute for a class value. The standard deviation describes the spread of the data, and we will use it to characterize the expected spread of each attribute in our Gaussian distribution when calculating probabilities.

The standard deviation is calculated as the square root of the variance. The variance is calculated as the average of the squared differences for each attribute value from the mean. Note we are using the N-1 method, which subtracts 1 from the number of attribute values when calculating the variance.
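Sketches of the mean() and stdev() helpers described above, with stdev() using the N-1 method:

```python
import math

def mean(numbers):
    return sum(numbers) / float(len(numbers))

def stdev(numbers):
    # Sample standard deviation: the variance divides by N-1,
    # then we take the square root.
    avg = mean(numbers)
    variance = sum((x - avg) ** 2 for x in numbers) / float(len(numbers) - 1)
    return math.sqrt(variance)
```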

We can test this by taking the mean of the numbers from 1 to 5.

Running this test, you should see something like:

Summarize Dataset

Now we have the tools to summarize a dataset. For a given list of instances (for a class value) we can calculate the mean and the standard deviation for each attribute.

The zip function groups the values for each attribute across our data instances into their own lists so that we can compute the mean and standard deviation values for the attribute.
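A sketch of summarize() along those lines (the mean() and stdev() helpers are repeated here so the sketch runs standalone):

```python
import math

def mean(numbers):
    return sum(numbers) / float(len(numbers))

def stdev(numbers):
    avg = mean(numbers)
    return math.sqrt(sum((x - avg) ** 2 for x in numbers) / float(len(numbers) - 1))

def summarize(dataset):
    # zip(*dataset) transposes the rows so each tuple holds all the
    # values of one attribute across the instances.
    summaries = [(mean(attribute), stdev(attribute)) for attribute in zip(*dataset)]
    # The last column is the class value, not an attribute.
    del summaries[-1]
    return summaries
```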

We can test this summarize() function with some test data that shows markedly different mean and standard deviation values for the first and second data attributes.

Running this test, you should see something like:

Summarize Attributes By Class

We can pull it all together by first separating our training dataset into instances grouped by class, then calculating the summaries for each attribute.
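A sketch of summarizeByClass(), repeating the helpers it depends on so it runs standalone:

```python
import math

def mean(numbers):
    return sum(numbers) / float(len(numbers))

def stdev(numbers):
    avg = mean(numbers)
    return math.sqrt(sum((x - avg) ** 2 for x in numbers) / float(len(numbers) - 1))

def summarize(dataset):
    summaries = [(mean(attribute), stdev(attribute)) for attribute in zip(*dataset)]
    del summaries[-1]  # drop the class column
    return summaries

def separateByClass(dataset):
    separated = {}
    for vector in dataset:
        separated.setdefault(vector[-1], []).append(vector)
    return separated

def summarizeByClass(dataset):
    # Group the rows by class, then summarize each group's attributes.
    return {classValue: summarize(rows)
            for classValue, rows in separateByClass(dataset).items()}
```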

We can test this summarizeByClass() function with a small test dataset.

Running this test, you should see something like:

3. Make a Prediction

We are now ready to make predictions using the summaries prepared from our training data. Making predictions involves calculating the probability that a given data instance belongs to each class, then selecting the class with the largest probability as the prediction.

We can divide this part into the following tasks:

  1. Calculate Gaussian Probability Density Function
  2. Calculate Class Probabilities
  3. Make a Prediction
  4. Estimate Accuracy

Calculate Gaussian Probability Density Function

We can use a Gaussian function to estimate the probability of a given attribute value, given the known mean and standard deviation for the attribute estimated from the training data.

Given that the attribute summaries were prepared for each attribute and class value, the result is the conditional probability of a given attribute value given a class value.

See the references for the details of this equation for the Gaussian probability density function. In summary we are plugging our known details into the Gaussian (attribute value, mean and standard deviation) and reading off the likelihood that our attribute value belongs to the class.

In the calculateProbability() function we calculate the exponent first, then calculate the main division. This lets us fit the equation nicely on two lines.
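A sketch of calculateProbability() along those lines (note it returns a probability density, which can exceed 1 for small standard deviations):

```python
import math

def calculateProbability(x, mean, stdev):
    # Gaussian probability density function: compute the exponent
    # first, then the normalising fraction.
    exponent = math.exp(-((x - mean) ** 2) / (2 * stdev ** 2))
    return (1 / (math.sqrt(2 * math.pi) * stdev)) * exponent
```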

We can test this with some sample data, as follows.

Running this test, you should see something like:

Calculate Class Probabilities

Now that we can calculate the probability of an attribute belonging to a class, we can combine the probabilities of all of the attribute values for a data instance and come up with a probability of the entire data instance belonging to the class.

We combine probabilities by multiplying them. In the calculateClassProbabilities() function below, the probability of a given data instance is calculated by multiplying together the attribute probabilities for each class. The result is a map of class values to probabilities.
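A sketch of calculateClassProbabilities(), with the Gaussian density function repeated so it runs standalone:

```python
import math

def calculateProbability(x, mean, stdev):
    exponent = math.exp(-((x - mean) ** 2) / (2 * stdev ** 2))
    return (1 / (math.sqrt(2 * math.pi) * stdev)) * exponent

def calculateClassProbabilities(summaries, inputVector):
    # Multiply together the per-attribute conditional probabilities
    # for each class to score the whole instance.
    probabilities = {}
    for classValue, classSummaries in summaries.items():
        probabilities[classValue] = 1
        for i, (m, s) in enumerate(classSummaries):
            probabilities[classValue] *= calculateProbability(inputVector[i], m, s)
    return probabilities
```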

We can test the calculateClassProbabilities() function.

Running this test, you should see something like:

Make a Prediction

Now that we can calculate the probability of a data instance belonging to each class value, we can look for the largest probability and return the associated class.

The predict() function below does just that.
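A sketch of predict(), with its dependencies repeated so it runs standalone:

```python
import math

def calculateProbability(x, mean, stdev):
    exponent = math.exp(-((x - mean) ** 2) / (2 * stdev ** 2))
    return (1 / (math.sqrt(2 * math.pi) * stdev)) * exponent

def calculateClassProbabilities(summaries, inputVector):
    probabilities = {}
    for classValue, classSummaries in summaries.items():
        probabilities[classValue] = 1
        for i, (m, s) in enumerate(classSummaries):
            probabilities[classValue] *= calculateProbability(inputVector[i], m, s)
    return probabilities

def predict(summaries, inputVector):
    # Score the instance against each class and return the class
    # value with the largest probability.
    probabilities = calculateClassProbabilities(summaries, inputVector)
    bestLabel, bestProb = None, -1
    for classValue, probability in probabilities.items():
        if bestLabel is None or probability > bestProb:
            bestProb = probability
            bestLabel = classValue
    return bestLabel
```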

We can test the predict() function as follows:

Running this test, you should see something like:

4. Make Predictions

Finally, we can estimate the accuracy of the model by making predictions for each data instance in our test dataset. The getPredictions() function will do this and return a list of predictions, one for each test instance.
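A sketch of getPredictions(), which simply maps predict() over the test set (the prediction chain is repeated so the sketch runs standalone):

```python
import math

def calculateProbability(x, mean, stdev):
    exponent = math.exp(-((x - mean) ** 2) / (2 * stdev ** 2))
    return (1 / (math.sqrt(2 * math.pi) * stdev)) * exponent

def calculateClassProbabilities(summaries, inputVector):
    probabilities = {}
    for classValue, classSummaries in summaries.items():
        probabilities[classValue] = 1
        for i, (m, s) in enumerate(classSummaries):
            probabilities[classValue] *= calculateProbability(inputVector[i], m, s)
    return probabilities

def predict(summaries, inputVector):
    probabilities = calculateClassProbabilities(summaries, inputVector)
    return max(probabilities, key=probabilities.get)

def getPredictions(summaries, testSet):
    # Apply predict() to every instance in the test dataset.
    return [predict(summaries, inputVector) for inputVector in testSet]
```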

We can test the getPredictions() function.

Running this test, you should see something like:

5. Get Accuracy

The predictions can be compared to the class values in the test dataset, and a classification accuracy can be calculated as an accuracy ratio between 0% and 100%. The getAccuracy() function will calculate this accuracy ratio.
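A sketch of getAccuracy():

```python
def getAccuracy(testSet, predictions):
    # Percentage of test rows whose class value (last attribute)
    # matches the corresponding prediction.
    correct = sum(1 for i, row in enumerate(testSet) if row[-1] == predictions[i])
    return correct / float(len(testSet)) * 100.0
```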

We can test the getAccuracy() function using the sample code below.

Running this test, you should see something like:

6. Tie it Together

Finally, we need to tie it all together.

Below is the full code listing for Naive Bayes implemented from scratch in Python.
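The whole program, reconstructed from the pieces described above, might look like this (the filename and split ratio are the ones used in this tutorial):

```python
import csv
import math
import random

def loadCsv(filename):
    with open(filename, 'r') as handle:
        return [[float(value) for value in row] for row in csv.reader(handle)]

def splitDataset(dataset, splitRatio):
    trainSize = int(len(dataset) * splitRatio)
    trainSet, copy = [], list(dataset)
    while len(trainSet) < trainSize:
        trainSet.append(copy.pop(random.randrange(len(copy))))
    return trainSet, copy

def separateByClass(dataset):
    separated = {}
    for vector in dataset:
        separated.setdefault(vector[-1], []).append(vector)
    return separated

def mean(numbers):
    return sum(numbers) / float(len(numbers))

def stdev(numbers):
    avg = mean(numbers)
    return math.sqrt(sum((x - avg) ** 2 for x in numbers) / float(len(numbers) - 1))

def summarize(dataset):
    summaries = [(mean(col), stdev(col)) for col in zip(*dataset)]
    del summaries[-1]  # drop the class column
    return summaries

def summarizeByClass(dataset):
    return {cls: summarize(rows) for cls, rows in separateByClass(dataset).items()}

def calculateProbability(x, mean, stdev):
    exponent = math.exp(-((x - mean) ** 2) / (2 * stdev ** 2))
    return (1 / (math.sqrt(2 * math.pi) * stdev)) * exponent

def calculateClassProbabilities(summaries, inputVector):
    probabilities = {}
    for classValue, classSummaries in summaries.items():
        probabilities[classValue] = 1
        for i, (m, s) in enumerate(classSummaries):
            probabilities[classValue] *= calculateProbability(inputVector[i], m, s)
    return probabilities

def predict(summaries, inputVector):
    probabilities = calculateClassProbabilities(summaries, inputVector)
    return max(probabilities, key=probabilities.get)

def getPredictions(summaries, testSet):
    return [predict(summaries, row) for row in testSet]

def getAccuracy(testSet, predictions):
    correct = sum(1 for i, row in enumerate(testSet) if row[-1] == predictions[i])
    return correct / float(len(testSet)) * 100.0

def main():
    filename = 'pima-indians-diabetes.data.csv'
    splitRatio = 0.67
    dataset = loadCsv(filename)
    trainingSet, testSet = splitDataset(dataset, splitRatio)
    print('Split {0} rows into train={1} and test={2} rows'.format(
        len(dataset), len(trainingSet), len(testSet)))
    # Prepare the model, make predictions and score them.
    summaries = summarizeByClass(trainingSet)
    predictions = getPredictions(summaries, testSet)
    print('Accuracy: {0}%'.format(getAccuracy(testSet, predictions)))

if __name__ == '__main__':
    main()
```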

Running the example provides output like the following:

Implementation Extensions

This section provides you with ideas for extensions that you could apply and investigate with the Python code you have implemented as part of this tutorial.

You have implemented your own version of Gaussian Naive Bayes in Python from scratch.

You can extend the implementation further.

  • Calculate Class Probabilities: Update the example to summarize the probabilities of a data instance belonging to each class as a ratio. This can be calculated as the probability of a data instance belonging to one class, divided by the sum of the probabilities of the data instance belonging to each class. For example, if an instance had a probability of 0.02 for class A and 0.001 for class B, the likelihood of the instance belonging to class A is (0.02/(0.02+0.001))*100, which is about 95.24%.
  • Log Probabilities: The conditional probabilities for each class given an attribute value are small. When they are multiplied together they result in very small values, which can lead to floating point underflow (numbers too small to represent in Python). A common fix for this is to add the logs of the probabilities together. Research and implement this improvement.
  • Nominal Attributes: Update the implementation to support nominal attributes. This is similar, and the summary information you collect for each attribute is the ratio of category values for each class. Dive into the references for more information.
  • Different Density Function (Bernoulli or Multinomial): We have looked at Gaussian Naive Bayes, but you can also look at other distributions. Implement a different distribution such as multinomial, Bernoulli or kernel Naive Bayes that makes different assumptions about the distribution of attribute values and/or their relationship with the class value.
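As a starting point for the log-probabilities extension, the class score can be computed as a sum of log-densities rather than a product of densities (calculateLogClassScores() is a hypothetical name for this variant; it is not part of the tutorial's listing):

```python
import math

def calculateLogClassScores(summaries, inputVector):
    # Sum the log of each Gaussian density instead of multiplying
    # raw densities, which avoids floating point underflow when
    # there are many attributes.
    scores = {}
    for classValue, classSummaries in summaries.items():
        scores[classValue] = 0.0
        for i, (m, s) in enumerate(classSummaries):
            exponent = -((inputVector[i] - m) ** 2) / (2 * s ** 2)
            scores[classValue] += exponent - math.log(math.sqrt(2 * math.pi) * s)
    return scores
```

The class with the largest log score is the same class that would have the largest raw probability, so predictions are unchanged.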

Resources and Further Reading

This section will provide some resources that you can use to learn more about the Naive Bayes algorithm in terms of both theory of how and why it works and practical concerns for implementing it in code.


More resources for learning about the problem of predicting the onset of diabetes.


This section links to open source implementations of Naive Bayes in popular machine learning libraries. Review these if you are considering implementing your own version of the method for operational use.


You may have one or more books on applied machine learning. This section highlights the sections or chapters in common applied books on machine learning that refer to Naive Bayes.

Next Step

Take action.

Follow the tutorial and implement Naive Bayes from scratch. Adapt the example to another problem. Follow the extensions and improve upon the implementation.

Leave a comment and share your experiences.

Update: Check out the follow-up on tips for using the Naive Bayes algorithm titled: “Better Naive Bayes: 12 Tips To Get The Most From The Naive Bayes Algorithm”.

247 Responses to How To Implement Naive Bayes From Scratch in Python

  1. david jensen December 12, 2014 at 3:28 am #

    Statistical methods should be developed from scratch because of misunderstandings. Thank you.

    • Mamta March 14, 2019 at 7:11 pm #

      Jason … Thank you so much… you are too good.

  2. Anurag December 14, 2014 at 1:11 pm #

    This is a wonderful article. Your blog is one of those blogs that I visit everyday. Thanks for sharing this stuff. I had a question about the programming language that should be used for building these algorithms from scratch. I know that Python is widely used because it’s easy to write code by importing useful libraries that are already available. Nevertheless, I am a C++ guy. Although I am a beginner in practical ML, I have tried to write efficient codes before I started learning and implementing ML. Now I am aware of the complexities involved in coding if you’re using C++: more coding is to be done than what is required in Python. Considering that, what language is your preference and under what situations? I know that it’s lame to ask about preferences of programming language as it is essentially a personal choice. But still I’d like you to share your take on this. Also try to share the trade-offs while choosing these programming languages.

    Thank you.

    • Jason Brownlee December 15, 2014 at 7:53 am #

      Thanks Anurag

      • SHRUTI April 17, 2018 at 12:33 am #

        I am getting error while I try to implement this in my own dataset.
        for classValue, classSummaries in summaries.iteritems():

        AttributeError: ‘list’ object has no attribute ‘iteritems’
        When I try to run it,with your csv file,it says
        ataset = list(lines)

        Error: iterator should return strings, not bytes (did you open the file in text mode?)
        What to do?

  3. Alcides Schulz January 15, 2015 at 12:32 am #

    Hi Jason, found your website and read it in one day. Thank you, it really helped me to understand ML and what to do.
    I did the 2 examples here and I think I will take a look at scikit-learn now.
    I have a personal project that I want to use ML, and I’ll keep you posted on the progress.

    One small note on this post, is on the “1. Handle data” you refer to from previous post.

    Thank you so much, example is really good to show how to do it. Please keep it coming.

    • Jason Brownlee January 15, 2015 at 7:43 am #

      Thanks for the kind words Alcides.

      Fixed the reference to the iris dataset.

      • vivek December 5, 2017 at 9:45 pm #

        hi jason im not able to run the code and get the Output it says “No such file or directory: ‘'”

        • Jason Brownlee December 6, 2017 at 9:02 am #

          Ensure you are running the code from the command line and that the data file and code file are in the same directory.

          • priya September 1, 2018 at 2:28 pm #

            is thetre any data file available

  4. toolate January 22, 2015 at 2:16 am #

    Hi Jason, still one more note on your post, is on the “1. Handle data” the flower measures that you refer to

  5. Tamilselvan February 4, 2015 at 11:37 pm #

    Great Article. Learned a Lot. Thanks. Thanks.

  6. Abhinav kumar February 23, 2015 at 8:13 pm #

    thank u

  7. Roy March 7, 2015 at 2:53 pm #

    Thanks for your nice article. I really appreciate the step by step instructions.

  8. malini March 17, 2015 at 7:19 pm #

    hello sir, plz tell me how to compare the data set using naive Bayes algorithm.

  9. Isha March 21, 2015 at 5:40 pm #

    Why does the accuracy change every time you run this code?
    when i tried running this code every time it gave me different accuracy percentage in the range from 70-78%
    Why is it so?
    Why is it not giving a constant accuracy percent?

    • Harry April 9, 2015 at 8:37 am #

      As Splitting of dataset into testdata and traindata is done using a random function accuracy varies.

      • Nitin Ramesh October 27, 2017 at 4:00 pm #

        I’m extremely new to this concept so please help me with this query.

        The algorithm splits the dataset into the same top 67% and bottom 33% every single time.
        The test-set is same on every single run.

        So even though we use a random function on the top 67%(training set) to randomly index them.
        A calculation like
        ((4+2)/6) and ((2+4)/6) will yield same result every-time.How is this yielding different result?

        Is this something to do with the order of calculation in the Gaussian probability density function?

        • Abdul Salam December 22, 2017 at 10:43 pm #

          well.. the math calculations would come under to the view if you had the same sample taken again and again..

          but the point here is.. we are taking the random data itself.. like.. the variables will not be same in every row right.. so if u change the rows randomly… at the end the accuracy will change… taken that into consideration.. you can take average of accuracy for each run to get what is the “UN-fluctuated accuracy” just in case if you want…

          hope it helps..

  10. Sheepsy90 March 25, 2015 at 8:12 pm #

    Hey nice article – one question – why do you use the N-1 in the STD Deviation Process?

  11. Vaishali April 8, 2015 at 6:01 pm #

    Hey! Thanks a ton! This was very useful.
    It would be great if you give an idea on how other metrics like precision and recall can be calculated.


  12. Ashwin Perti April 24, 2015 at 5:28 pm #


    When I am running the same code in IDLE (python 2.7) – the code is working fine, but when I run the same code in eclipse. the error coming is:

    1) warning – unused variable dataset
    2) undefined variable dataset in for loop

    Why this difference.

  13. Melvin Tjon Akon May 21, 2015 at 1:46 am #

    Great post, Jason.
    For a MBA/LLM, it makes naive bayes very easy to understand and to implement in legal coding. Looking forward to read more. Best, Melvin

  14. Igor Balabine June 10, 2015 at 11:44 am #


    Great example. Thanks! One nit: “calculateProbability” is not a good name for a function which actually calculates Gaussian probability density – pdf value may be greater than 1.



    • - Ruud - November 26, 2016 at 2:23 am #

      Good point, thanks!

  15. Alex Ubot July 2, 2015 at 10:06 pm #

    Hi Jason,

    Fantastic post. I really learnt a lot. However I do have a question? Why don’ t you use the P(y) value in your calculateClassProbabilities() ?
    If I understood the model correctly, everything is based on the bayes theorem :
    P(y|x1….xn) = P(x1…..xn|y) * P(y) / P(x1……xn)
    P(x1……xn) will be a constant so we can get rid of it.
    Your post explain very well how to calculate P(x1……xn|y) (assumption made that x1…..xn are all independent we then have
    P(x1……xn|y) = P(x1|y) * …. P(xn|y) )
    How about p(y) ? I assume that we should calculate the frequency of the observation y in the training set and then multiply it to probabilities[classValue] so that we have :
    P(y|x1…..xn) = frequency(classValue) * probabilities[classValue]

    Otherwise let’ s assume that in a training set of 500 lines, we have two class 0 and 1 but observed 100 times 0 et 400 times 1. If we do not compute the frequency, then the probability may be biased, right ? Did I misunderstand something ? Hopefully my post is clear. I really hope that you will reply because I am a bit confused.


    • Babu February 28, 2016 at 7:43 am #

      I have the same question – why is multiplying by p(y) is omitted?

      • Babu March 10, 2016 at 2:09 pm #

        No Answer yet – no one on internet has answer to this.

        Just don’t want to accept answers without understanding it.

        • frong April 3, 2016 at 3:15 pm #

          yeah,I have the same question too, maybe the P(y) is nessary ,but why the accuracy is not so low when P(y) is missing? is it proving that bayes model is powerful?

          • gd April 7, 2016 at 2:27 am #


            I believe this is because P(y) = 1 as classes are already segregated before calculating P(x1…xn|Y).

            Can experts comment on this please?

          • Babu May 23, 2016 at 7:32 am #

            There is huge bug in this implementation;

            First of all the implementation using GaussianNB gives totally a different answer.
            Why is no one is replying even after 2 months of this.

            My concern is, there are so many more bad bayesians in a wrong concept.
            My lead read this article and now he thinks I am wrong

            At least the parameters are correct – something wrong with calculating probs.

            def SplitXy(Xy):
            Xy10 = Xy;
            #print Xy10
            #print “========”
            y= zXy10[-1]
            del zXy10[-1]
            X=[list(t) for t in z1]
            return X,y

            from sklearn.naive_bayes import GaussianNB
            X,y = SplitXy(trainingSet)
            Xt,yt = SplitXy(testSet)

            model = GaussianNB()
            model.fit(X, y)

            ### Compare the models built by Python

            print (“Class: 0”)
            for i,j in enumerate(model.theta_[0]):
            print (“({:8.2f} {:9.2f} {:7.2f} )”.format(j, model.sigma_[0][i], sqrt(model.sigma_[0][i])) , end=””)
            print (“==> “, summaries[0][i])

            print (“Class: 1”)
            for i,j in enumerate(model.theta_[1]):
            print (“({:8.2f} {:9.2f} {:7.2f} )”.format(j, model.sigma_[1][i], sqrt(model.sigma_[1][i])) , end=””)
            print (“==> “, summaries[1][i])

            Class: 0
            ( 3.18 9.06 3.01 )==> (3.1766467065868262, 3.0147673799630748)
            ( 109.12 699.16 26.44 )==> (109.11976047904191, 26.481293163857107)
            ( 68.71 286.46 16.93 )==> (68.712574850299404, 16.950414098038465)
            ( 19.74 228.74 15.12 )==> (19.742514970059879, 15.146913806453629)
            ( 68.64 10763.69 103.75 )==> (68.640718562874255, 103.90387227315443)
            ( 30.71 58.05 7.62 )==> (30.710778443113771, 7.630215185470916)
            ( 0.42 0.09 0.29 )==> (0.42285928143712581, 0.29409299864249266)
            ( 30.66 118.36 10.88 )==> (30.658682634730539, 10.895778423248444)
            Class: 1
            ( 4.76 12.44 3.53 )==> (4.7611111111111111, 3.5365037952376928)
            ( 139.17 1064.54 32.63 )==> (139.17222222222222, 32.71833930500929)
            ( 69.27 525.24 22.92 )==> (69.272222222222226, 22.98209907114023)
            ( 22.64 309.59 17.60 )==> (22.638888888888889, 17.644143437447358)
            ( 101.13 20409.91 142.86 )==> (101.12777777777778, 143.2617649699204)
            ( 34.99 57.18 7.56 )==> (34.99388888888889, 7.5825893182809425)
            ( 0.54 0.14 0.37 )==> (0.53544444444444439, 0.3702077209795522)
            ( 36.73 112.86 10.62 )==> (36.727777777777774, 10.653417924304598)

    • EL YAMANI May 22, 2016 at 8:57 am #


      Thanks for this article , it is very helpful . I just have a remark about the probabilty that you are calculating which is P(x|Ck) and then you make predictions, the result will be biased since you don’t multiply by P(Ck) , P(x) can be omitted since it’s only a normalisation constant.

  16. Anand July 20, 2015 at 9:12 pm #

    Thanks a lot for this tutorial, Jason.

    I have a quick question if you can help.

    In the separateByClass() definition, I could not understand how vector[-1] is a right usage when vector is an int type object.

    If I try the same commands one by one outside the function, the line of code with vector[-1] obviously throws a TypeError: 'int' object has no attribute '__getitem__'.

    Then how is it working inside the function?

    I am sorry for my ignorance. I am new to python. Thank you.

  17. Sarah August 26, 2015 at 5:50 pm #

    Hello Jason! I just wanted to leave a message to say thank you for the website. I am preparing for a job in this field and it has helped me so much. Keep up the amazing work!! 🙂

    • Jason Brownlee August 26, 2015 at 6:56 pm #

      You’re welcome! Thanks for leaving such a kind comment, you really made my day 🙂

  18. Jaime Lopez September 7, 2015 at 8:52 am #

    Hi Jason,

    Very easy to follow your classifier. I try it and works well on your data, but is important to note that it works just on numerical databases, so maybe one have to transform your data from categorical to numerical format.

    Another thing, when I transformed one database, sometimes the algorithm find division by zero error, although I avoided to use that number on features and classes.

    Any suggestion Jason?

    Thanks, Jaime

    • syed belgam April 11, 2016 at 2:05 pm #


  19. eduardo September 28, 2015 at 1:32 pm #

    It is by far the best material I’ve found , please continue helping the community!

  20. Thibroll September 29, 2015 at 9:11 pm #


    This is all well explained, and depicts well the steps of machine learning. But the way you calculate your P(y|X) here is false, and may lead to unwanted error.

    Here, in theory, using the Bayes law, we know that : P(y|X) = P(y).P(X|y)/P(X). As we want to maximize P(y|X) with a given X, we can ignore P(X) and pick the result for the maximized value of P(y).P(X|y)

    2 points remain inconsistent :
    – First, you pick a gaussian distribution to estimate P(X|y). But here, you calculateProbability calculates the DENSITY of the function to the specific points X, y, with associated mean and deviation, and not the actual probability.
    – The second point is that you don’t take into consideration the calculation of P(y) to estimate P(y|X). Your model (with the correct probability calculation) may work only if all samples have same amount in every value of y (considering y is discret), or if you are lucky enough.

    Anyway, despite those mathematical issue, this is a good work, and a god introduction to machine learning.

  21. mondet October 6, 2015 at 10:08 am #

    Thanks Jason for all this great material. One thing that i adore from you is the intellectual honesty, the spirit of collaboration and the parsimony.

    In my opinion you are one of the best didactics exponents in the ML.

    Thanks to Thibroll too. But i would like to have a real example of the problem in R, python or any other language.



  22. Erika October 15, 2015 at 10:03 am #

    Hi Jason,
    I have trying to get started with machine learning and your article has given me the much needed first push towards that. Thank you for your efforts! 🙂

  23. Swagath November 9, 2015 at 5:35 pm #

    i need this code in java.. please help me//

  24. Sarah November 16, 2015 at 11:54 pm #

    I am working with this code – tweaking it here or there – have found it very helpful as I implement a NB from scratch. I am trying to take the next step and add in categorical data. Any suggestions on where I can head to get ideas for how to add this? Or any particular functions/methods in Python you can recommend? I’ve brought in all the attributes and split them into two datasets for continuous vs. categorical so that I can work on them separately before bringing their probabilities back together. I’ve got the categorical in the same dictionary where the key is the class and the values are lists of attributes for each instance. I’m not sure how to go through the values to count frequencies and then how to store this back up so that I have the attribute values along with their frequencies/probabilities. A dictionary within a dictionary? Should I be going in another direction and not using a similar format?

  25. Emmanuel Nuakoh November 19, 2015 at 6:36 am #

    Thank you Jason, this tutorial is helping me with my implementation of NB algorithm for my PhD Dissertation. Very elaborate.

  26. Anna January 14, 2016 at 2:32 am #

    Hi! thank you! Have you tried to do the same for the textual datasets, for example 20Newsgroups ? Would appreciate some hints or ideas )

  27. Randy January 16, 2016 at 4:15 pm #

    Great article, but as others pointed out there are some mathematical mistakes like using the probability density function for single value probabilities.

  28. Meghna February 7, 2016 at 7:45 pm #

    Thank you for this amazing article!! I implemented the same for wine and MNIST data set and these tutorials helped me so much!! 🙂

  29. David February 7, 2016 at 11:17 pm #

    I got an error with the first print statement, because your parenthesis are closing the call to print (which returns None) before you’re calling format, so instead of

    print(‘Split {0} rows into train with {1} and test with {2}’).format(len(dataset), train, test)

    it should be

    print(‘Split {0} rows into train with {1} and test with {2}’.format(len(dataset), train, test))

    Anyway, thanks for this tutorial, it was really useful, cheers!

  30. Kumar Ramanathan February 12, 2016 at 12:20 pm #

    Sincere gratitude for this most excellent site. Yes, I never learn until I write code for the algorithm. It is such an important exercise, to get concepts embedded into one’s brain. Brilliant effort, truly !

  31. Syed February 18, 2016 at 8:15 am #

Just to test the algorithm, I changed the class of a few of the rows to something else, i.e. 3 or 4 (the last digit in a line), and I get a divide-by-zero error while calculating the variance. I am not sure why. Does it mean that this particular program works only for 2 classes? I can’t see anything which restricts it to that.

  32. Takuma Udagawa March 20, 2016 at 1:19 pm #

    Hi, I’m a student in Japan.
    It seems to me that you are calculating p(X1|Ck)*p(X2|Ck)*…*p(Xm|Ck) and choosing Ck such that this value would be maximum.
    However, when I looked in the Wikipedia, you are supposed to calculate p(X1|Ck)*p(X2|Ck)*…*p(Xm|Ck)*p(Ck).
    I don’t understand when you calculated p(Ck).
    Would you tell me about it?
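For readers wondering the same thing: the tutorial’s prediction omits p(Ck), which effectively assumes equal class frequencies. A hedged sketch of estimating the priors from a training set so they can be multiplied into each class score (function name and data are illustrative):

```python
# Sketch: estimate class priors p(Ck) as relative class frequencies, using the
# convention from the tutorial that the class value is the last column.
def class_priors(dataset):
    counts = {}
    for row in dataset:
        label = row[-1]  # class value is the last element of each row
        counts[label] = counts.get(label, 0) + 1
    total = float(len(dataset))
    return {label: count / total for label, count in counts.items()}

dataset = [[1.2, 0], [0.9, 0], [3.1, 1], [2.8, 1], [3.0, 1]]
priors = class_priors(dataset)
```

Each prior would then multiply the product of conditional probabilities for its class before the argmax.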

  33. Babu May 23, 2016 at 7:36 am #

    This is the same question as Alex Ubot above.

Calculating the parameters is correct,
but the prediction implementation is incorrect.

Unfortunately this article comes up high in search results, so I think everyone is learning an incorrect way of doing things.

  34. Swapnil June 10, 2016 at 1:21 am #

    Really nice tutorial. Can you post a detailed implementation of RandomForest as well ? It will be very helpful for us if you do so.


  35. sourena maroofi July 22, 2016 at 12:24 am #

    thanks Jason…very nice tutorial.

  36. Gary July 27, 2016 at 5:44 pm #

    I was interested in this Naive Bayes example and downloaded the .csv data and the code to process it.

    However, when I try to run it in Pycharm IDE using Python 3.5 I get no end of run-time errors.

    Has anyone else run the code successfully? And if so, what IDE/environment did they use?



    • Sudarshan August 10, 2016 at 5:05 pm #

      Hi Gary,

      You might want to run it using Python 2.7.

  37. Sudarshan August 10, 2016 at 5:02 pm #


    Thanks for the excellent tutorial. I’ve attempted to implement the same in Go.

Here is a link for anyone that’s interested.

  38. Atlas August 13, 2016 at 6:40 am #

    This is AWESOME!!! Thank you Jason.

    Where can I find more of this?

  39. Alex August 20, 2016 at 4:34 pm #

    That can be implemented in any language because there’re no special libraries involved.

  40. SAFA August 28, 2016 at 1:39 am #

there are some errors in "def splitDataset":
in a machine learning algorithm, splitting a dataset into training and testing sets must be done without repetition (duplication), but index = random.randrange(len(copy)) generates duplicate indices,
for example index = 0 192 1 2 0 14 34 56 1 ………
the splitting must be done without duplication of data.
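For reference, a split without replacement can be written directly with random.sample, which draws unique indices in one call (the function name below is illustrative). Note that the tutorial’s splitDataset pops rows out of the copy as it goes, so rows are not actually duplicated even though index values repeat across iterations; random.sample simply makes the intent explicit:

```python
# Sketch: train/test split without replacement using random.sample.
import random

def split_dataset(dataset, split_ratio):
    train_size = int(len(dataset) * split_ratio)
    # one draw of unique row indices, no repetition possible
    train_indices = set(random.sample(range(len(dataset)), train_size))
    train = [row for i, row in enumerate(dataset) if i in train_indices]
    test = [row for i, row in enumerate(dataset) if i not in train_indices]
    return train, test

dataset = [[float(i)] for i in range(10)]
train, test = split_dataset(dataset, 0.67)
```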

  41. Krati Jain September 12, 2016 at 2:35 pm #

This is a highly informative and well-explained article. Although I think it is suitable for Python 2.x versions; in 3.x we don’t have the ‘iteritems’ function on a dict object, we currently have ‘items’. Secondly, format is called on a lot of print functions when it should have been called on the string inside the print function, which throws an error. Can you please look into it?

  42. upen September 16, 2016 at 5:01 pm #

hey Jason
thanks for such a great tutorial. I’m a newbie to the concept and want to try the naive bayes approach on movie reviews, for a single movie, that I have collected in a text file.
can you please provide some hints on how to load my file and classify each review as positive or negative

  43. Abhis September 20, 2016 at 3:00 am #

    Would you please help me how i can implement naive bayes to predict student performance using their marks and feedback

    • Jason Brownlee September 20, 2016 at 8:35 am #

      I’m sorry, I am happy to answer your questions, but I cannot help you with your project. I just don’t have the capacity.

  44. Vinay October 13, 2016 at 2:18 pm #

    Hey Jason,

    Thanks a lot for such a nice article, helped a lot in understanding the implementations,

    i have a problem while running the script.
    I get the below error

    if (vector[-1] not in separated):
    IndexError: list index out of range

    can you please help me in getting it right?

    • Jason Brownlee October 14, 2016 at 8:58 am #

      Thanks Vinay.

      Check that the data was loaded successfully. Perhaps there are empty lines or columns in your loaded data?
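One way to act on this suggestion is to skip blank or truncated lines while loading, since an empty row is a common cause of the "IndexError: list index out of range" in separateByClass. A sketch; the function name, column count, and demo file are illustrative:

```python
# Sketch: load a numeric CSV while skipping blank or truncated lines.
import csv
import os
import tempfile

def load_csv(filename, expected_columns=3):
    rows = []
    with open(filename, 'rt') as handle:
        for line in csv.reader(handle):
            if len(line) < expected_columns:  # empty or short line: skip it
                continue
            rows.append([float(x) for x in line])
    return rows

# demo: a file containing a stray blank line in the middle
path = os.path.join(tempfile.mkdtemp(), 'demo.csv')
with open(path, 'w') as f:
    f.write('1.0,2.0,0\n\n3.0,4.0,1\n')
rows = load_csv(path)
```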

  45. Viji October 20, 2016 at 8:57 pm #

    Hi Jason,

Thank you for the wonderful article. You have used the ‘?’ (testSet = [[1.1, ‘?’], [19.1, ‘?’]]) in the test set. Can you please tell me what it specifies?

  46. jeni November 15, 2016 at 9:11 pm #

    please send me a code in text classification using naive bayes classifier in python . the data set classifies +ve,-ve or neutral

    • Jason Brownlee November 16, 2016 at 9:28 am #

      Hi jeni, sorry I don’t have such an example prepared.

  47. MLNewbie November 28, 2016 at 1:21 pm #

    I am a newbie to ML and I found your website today. It is one of the greatest ML resources available on the Internet. I bookmarked it and thanks for everything Jason and I will visit your website everyday going forward.

    • Jason Brownlee November 29, 2016 at 8:47 am #

      Thanks, I’m glad you like it.

      • Anne January 7, 2017 at 6:58 pm #

def predict(summaries, inputVector):
    probabilities = calculateClassProbabilities(summaries, inputVector)
    bestLabel, bestProb = None, -1
    for classValue, probability in probabilities.iteritems():
        if bestLabel is None or probability > bestProb:
            bestProb = probability
            bestLabel = classValue
    return bestLabel

why is the prediction different for these
summaries = {'A': [(1, 0.5)], 'B': [(20, 5.0)]} – predicts A
summaries = {'0': [(1, 0.5)], '1': [(20, 5.0)]} – predicts 0
summaries = {0: [(1, 0.5)], 1: [(20, 5.0)]} – predicts 1

  48. ML704 January 18, 2017 at 6:16 pm #

    Hi, can someone please explain the code snippet below:

def separateByClass(dataset):
    separated = {}
    for i in range(len(dataset)):
        vector = dataset[i]
        if (vector[-1] not in separated):
            separated[vector[-1]] = []
        separated[vector[-1]].append(vector)
    return separated

What do the curly brackets mean in separated = {}?
And what is vector[-1]?

    Massive thanks!
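For reference, a short annotated example of both constructs (the values below are made up):

```python
# The curly brackets create an empty dictionary, and vector[-1] is Python's
# negative indexing: the last element of a list, used here as the class label.
separated = {}                   # empty dict mapping class value -> list of rows
vector = [6.0, 148.0, 72.0, 1]   # a data row whose last element is the class
label = vector[-1]               # negative index: the last element, here 1
if label not in separated:
    separated[label] = []
separated[label].append(vector)
```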

  49. S February 27, 2017 at 8:47 am #

    I am trying to create an Android app which works as follows:

    1) On opening the App, the user types a data in a textbox & clicks on search

    2) The app then searches about the entered data via internet and returns some answer (Using machine learning algorithms)

    I have a dataset of around 17000 things.

    Can you suggest the approach? Python/Java/etc…? Which technology to use for implementing machine learning algorithm & for connecting to dataset? How to include the dataset so that android app size is not increased?

    Basically, I am trying to implement an app described in a research paper.

    I can implement machine learning(ML) algorithms in Python on my laptop for simple ML examples. But, I want to develop an Android app in which the data entered by user is checked from a web site and then from a “data set (using ML)” and result is displayed in app based on both the comparisons. The problem is that the data is of 40 MB & how to reflect the ML results from laptop to android app?? By the way, the dataset is also available online. Shall I need a server? Or, can I use localhost using WAMP server?

    Which python server should I use? I would also need to check the data entered from a live website. Can I connect my Android app to live server and localhost simultaneously? Is such a scenario obvious for my app? What do you suggest? Is Anaconda software sufficient?

    • Jason Brownlee February 28, 2017 at 8:08 am #

      Sorry I cannot make any good suggestions, I think you need to talk to some app engineers, not ML people.

  50. Roy March 1, 2017 at 4:20 am #

    Hello Jason,

lines = csv.reader(open(filename, "rb"))
IOError: [Errno 2] No such file or directory: ''

    I have the csv file downloaded and its in the same folder as my code.

    What should I do about this?

    • Jason Brownlee March 1, 2017 at 8:44 am #

      Hi Roy,

      Confirm that the file name in your directory exactly matches the expectation of the script.

      Confirm you are running from the command line and both the script and the data file are in the same directory and you are running from that directory.

      If using Python 3, consider changing the ‘rb’ to ‘rt’ (text instead of binary).

      Does that help?

  51. Jordan March 1, 2017 at 8:40 pm #

Hi Jason. Great tutorial! But why did you leave out the P(Y) in calculateClassProbability()? The prediction produced on my machine is fine… But some people above have mentioned too that what you actually calculate is a probability density function, and you didn’t answer their question.

  52. Ali March 7, 2017 at 6:21 am #

    Hi Jason,
    Can you please help me fixing below error, The split is working but accuracy giving error

    Split 769 rows into train=515 and test=254 rows
    Traceback (most recent call last):
    File “”, line 100, in
    File “”, line 95, in main
    summaries = summarizeByClass(trainingSet)
    File “”, line 45, in summarizeByClass
    separated = separateByClass(dataset)
    File “”, line 26, in separateByClass
    if (vector[-1] not in separated):
    IndexError: list index out of range

  53. shankru Guggari March 13, 2017 at 9:52 pm #

    Class wise selection of training and testing data
    For Example
    In Iris Dataset : Species Column we have classes called Setosa, versicolor and virginica

    I want to select 80% of data from each class values.

    Advance thanks

    • Jason Brownlee March 14, 2017 at 8:17 am #

      You can take a random sample or use a stratified sample to ensure the same mixture of classes in train and test sets.

  54. Namrata March 19, 2017 at 5:33 pm #

    error in Naive Bayes code
    IndexError:list index out of range

  55. velu March 28, 2017 at 4:25 pm #

hi guys
I am Velmurugan, studying at Anna University, Tindivanam.
I have code for summarizing English descriptions in Java

  56. Kamal March 29, 2017 at 10:24 am #

    Hi Jason,

    This example works. really good for Naive Bayes, but I was wondering what the approach would be like for joint probability distributions. Given a dataset, how to construct a bayesian network in Python or R?

  57. Joelon johnson March 30, 2017 at 7:20 pm #

    Hello Jason,

    Joelon here. I am new to python and machine learning. I keep getting a run-time error after compiling the above script. Is it possible I send you screenshots of the error so we walk through step by step?

  58. Bill April 2, 2017 at 10:52 pm #

    Hello Jason !

    Thank you for this tutorial.
    I have a question: what if our x to predict is a vector? How can we calculate the probability to be in a class (in the function calculateProbability for example) ?

    Thank you

    • Jason Brownlee April 4, 2017 at 9:07 am #

      Not sure I understand Bill. Perhaps you can give an example?

  59. Asmita Singh April 9, 2017 at 12:33 pm #

    HI Jason,
    Thanks for such a wonderful article. Your efforts are priceless. One quick question about handling cases with single value probabilities. Which part of code requires any smoothening.

    • Jason Brownlee April 9, 2017 at 3:01 pm #

      Sorry, I’m not sure I understand your question. Perhaps you can restate it?

  60. Mohammed Ehteramuddin April 11, 2017 at 12:51 am #

    Hello Jason,
    First of all I thank you very much for such a nice tutorial.

    I have a quick question for you if you could find some of your precious time to answer it.

Question: Why is it that summarizeByClass(dataset) works only with a particular pattern of dataset, like the dataset in your example, and does not work with a different pattern like my example dataset = [[2,3,1], [9,7,3], [12,9,0], [29,0,0]]?

    I guess it has to work for all the different possible datasets.

    Mohammed Ehteramuddin

    • Jason Brownlee April 11, 2017 at 9:33 am #

      It should work with any dataset as long as the last variable is the class variable.

      • Mohammed Ehteramuddin April 12, 2017 at 7:49 pm #

Oh! You mean the last variable of the dataset (input) cannot be anything other than the two values that we desire to classify the data into; in our case it should be either 0 or 1.

Thank you very much.

  61. Salihins Gund April 19, 2017 at 6:36 am #

What does the correct variable refer to in the getAccuracy function? Can you elaborate on it?

    • Salihins Gund April 19, 2017 at 7:04 am #

      Sorry that was wrong question.

      Ideally the Gaussian Naive Bayes has lambda (threshold) value to set boundary. I was wondering which part of this code include the threshold?

  62. way win April 21, 2017 at 3:00 am #

    Can you provide an extension to the data. I can not downloand it for some reason. Thank you!

  63. Mian Saeed Akbar May 9, 2017 at 10:55 am #

    Hi Jason…!
Thank you so much for coding the problem in a very clear way. As I am new to machine learning and have not used Python before, I am having some difficulty modifying it. I want to include a serial number in the dataset and then show which testing example (e.g. example #110) got what probability for class 0 and for class 1.

  64. Blazej May 27, 2017 at 9:33 pm #

    Hi Jason,

    I encountered a problem at the beginning. After loading the file and running this test:

filename = ''
dataset = loadCsv(filename)
print('Loaded data file {0} with {1} rows').format(filename, len(dataset))

    I get the Error: “iterator should return strings, not bytes (did you open the file in text mode?)”

    btw i am using python 3.6

    thank you for the help

    • Jason Brownlee June 2, 2017 at 12:02 pm #

      Change the loading of the file from binary to text (e.g. ‘rt’)

  65. Marcus June 10, 2017 at 9:13 am #

This code is not working with Python 3.6 :S

  66. Marcus June 10, 2017 at 9:14 am #


    • Jason Brownlee June 11, 2017 at 8:19 am #

      Thanks Marcus.

      • Marcus June 13, 2017 at 10:28 am #

        Is there a way you can try to fix and make it work with 3.6 maybe Jason?

        • Marcus June 15, 2017 at 2:23 am #

          Jason can you explain what this code does?

        • Darmawan Utomo December 13, 2017 at 9:16 pm #

I ran the code in python 3.6.3 and here are the corrections:
1. change the "rb" to "rt"

2. print('Split {0} rows into train={1} and test={2} rows'.format(len(dataset), len(trainingSet), len(testSet)))

3. change each .iteritems() to .items()

4. print('Accuracy: {0}%'.format(accuracy))

          Here are some of the results:

          Split 768 rows into train=514 and test=254 rows
          Accuracy: 71.25984251968504%

          Split 768 rows into train=514 and test=254 rows
          Accuracy: 76.77165354330708%
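The changes listed above can be compacted into a small runnable sketch (the values below are placeholders, not the tutorial’s real data):

```python
# Sketch of the Python 3 fixes: .format belongs on the string inside print, and
# dict.iteritems() is replaced by dict.items(). (Opening the CSV also changes
# from 'rb' to 'rt' under Python 3.)
dataset = list(range(768))
trainingSet, testSet = dataset[:514], dataset[514:]

# fixes 2 and 4: call .format on the string, not on print's return value (None)
print('Split {0} rows into train={1} and test={2} rows'.format(
    len(dataset), len(trainingSet), len(testSet)))

# fix 3: iterate a dict with .items() in Python 3
probabilities = {0: 0.7, 1: 0.3}
best_label, best_prob = None, -1.0
for class_value, probability in probabilities.items():
    if probability > best_prob:
        best_label, best_prob = class_value, probability
```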

  67. Guy Person June 14, 2017 at 4:34 am #

    The code in this tutorial is riddled with error after error… The string output formatting isn’t even done right for gods sakes!

    • Person Guy June 15, 2017 at 7:56 am #

      This was written in 2014, the python documentation has changed drastically as the versions have been updated

  68. J Wang June 15, 2017 at 8:02 am #

Hey Jason, I really enjoyed the read, as it was very thorough and, even for a beginner programmer like myself, understandable overall. However, like Marcus asked, is it at all possible for you to rewrite, or point out how to edit, the parts that have new syntax in Python 3?

Also, this version utilized the Gaussian probability density function; to use other distributions, would the math be different, or the code?

  69. Giselle July 12, 2017 at 10:53 am #

    Hi Jason, thank you for this post it’s super informative, I just started college and this is really easy to follow! I was wondering how this could be done with a mixture of binary and categorical data. For example, if I wanted to create a model to determine whether or not a car would break down and one category had a list of names of 10 car parts while another category simply asked if the car overheated (yes or no). Thanks again!

  70. Thomas July 14, 2017 at 7:53 am #

    Hi! Thanks for this helpful article. I had a quick question: in your calculateProbability() function, should the denominator be multiplied by the variance instead of the standard deviation?

    i.e. should

    return (1 / (math.sqrt(2 * math.pi) * stdev)) * exponent

    instead be:

    return (1 / (math.sqrt(2 * math.pi) * math.pow(stdev, 2))) * exponent
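For anyone checking the math here: in the Gaussian density the standard deviation belongs in the denominator, and the variance appears only inside the exponent, so the tutorial’s form is the standard one. This can be verified against statistics.NormalDist from the Python standard library (3.8+); the function below mirrors the tutorial’s calculateProbability under a renamed, snake_case name:

```python
# Check the tutorial's Gaussian density against the standard library.
import math
from statistics import NormalDist

def calculate_probability(x, mean, stdev):
    # f(x) = 1 / (sqrt(2*pi) * sigma) * exp(-(x - mu)^2 / (2 * sigma^2))
    exponent = math.exp(-(math.pow(x - mean, 2) / (2 * math.pow(stdev, 2))))
    return (1 / (math.sqrt(2 * math.pi) * stdev)) * exponent

x, mean, stdev = 71.5, 73.0, 6.2
reference = NormalDist(mean, stdev).pdf(x)
assert abs(calculate_probability(x, mean, stdev) - reference) < 1e-12
```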

  71. Rezaul Karim August 6, 2017 at 5:20 pm #

Hi Jason. I have visited your site several times. This is really helpful. I am looking for an SVM implementation in Python from scratch, like the way you implemented Naive Bayes here. Can you provide me with SVM code?

  72. Charmaine Ponay August 17, 2017 at 10:06 pm #

    Thank you so much Mr. Brownlee. I would like to ask your permission if I can show my students your implementations? They are very easy to understand and follow. Thank you very much again 🙂

    • Jason Brownlee August 18, 2017 at 6:20 am #

      No problem as long as you credit the source and link back to my website.

      • Charmaine Ponay August 22, 2017 at 3:19 pm #

        thank you very much 🙂

  73. THAMILSELVAM B September 9, 2017 at 2:30 am #

    Very good basic tutorial. Thank you.

  74. Chandar September 25, 2017 at 10:48 pm #

    Hi Jason,
I exported my model using joblib. As I converted the categorical data to numeric in the training dataset to develop the model, I now have no clue how to convert new categorical data in order to predict with the trained model.

    • Jason Brownlee September 26, 2017 at 5:38 am #

      New data must be prepared using the same methods as data used to train the model.

      Sometimes this might mean keeping objects/coefficients used to scale or encode input data along with the model.

  75. Narendra December 4, 2017 at 3:46 am #

    where/what is learning in this code. I think it is just naive bayes classification. Please specify the learning.

    • Jason Brownlee December 4, 2017 at 7:59 am #

      Good question, the probabilities are “learned” from data.

      • ravi September 16, 2019 at 5:42 pm #

such a beautiful article

  76. Christian December 18, 2017 at 1:01 am #

    Great example. You are doing a great work thanks. Please am working on this example but i am confused on how to determine attribute relevance analysis. That is how do i determine which attribute is (will) be relevant for my model.

    • Jason Brownlee December 18, 2017 at 5:26 am #

      Perhaps you could look at the independent probabilities for each variable?

      • Christian December 18, 2017 at 3:00 pm #

        thanks very much. Grateful

  77. SUIMEE December 25, 2017 at 10:05 pm #

    Can you please send me the code for pedestrian detection using HOG and NAIVE BAYES?

  78. Jasper January 25, 2018 at 1:14 pm #

what does this block of code do

while len(trainSet) < trainSize:
    index = random.randrange(len(copy))
    trainSet.append(copy.pop(index))
return [trainSet, copy]

    • Jason Brownlee January 26, 2018 at 5:37 am #

      Selects random rows from the dataset copy and adds them to the training set.

  79. Scott January 27, 2018 at 9:54 am #


    I am very happy with this implementation! I used it as inspiration for an R counterpart. I am unclear about one thing. I understand the training set mean and sd are parameters used to evaluate the test set, but I don’t know why that works lol.

    How does evaluating the GPDF with the training set data and the test set instance attributes “train” the model? I may be confusing myself by interpreting “train” too literally.

    I think of train as repetitiously doing something multiple times with improvement from each iteration, and these iterations ultimately produce some catalyst for higher predictions. It seems that there is only one iteration of defining the training set’s mean and sd. Not sure if this question makes sense and I apologize if that is the case.

    Any help is truly, genuinely appreciated!

    Scott Bishop

    • Jason Brownlee January 28, 2018 at 8:20 am #

      Here, the training data provides the basis for estimating the probabilities used by the model.

  80. som February 5, 2018 at 4:14 am #

    Hi Jason,
    Sometimes I am getting the “ZeroDivisionError: float division by zero” when I am running the program

  81. shadhana February 23, 2018 at 4:12 am #

Is there a way to get the elements that fall under each class after the classification is done?

    • Jason Brownlee February 23, 2018 at 12:03 pm #

Do you mean a confusion matrix?

    • Christian Post March 3, 2018 at 12:42 am #

      I added this function:

      There is probably a more elegant way to write this code, but I’m new to Python 🙂

      The returning array lets you calculate all sorts of criteria, such as sensitivity, specifity, predictive value, likelihood ratio etc.

      • Christian Post March 9, 2018 at 2:01 am #

        Whoops I realized I mixed something up. FP and FN have to be the other way round since the outer if-clause checks the true condition. I hope no one has copied that and got in trouble…
        Anyways, looks like the confusion matrix lives up to its name.

  82. Christian Post March 3, 2018 at 12:48 am #

    Hello Jason,

    first of all, thanks for this blog. I learned a lot both on python (which I am pretty new to) and also this specific classifier.

    I tested this algorithm on a sample database with cholesterol, age and heart desease, and got better results than with a logistic regression.
    However, since age is clearly not normally distributed, I am not sure if this result is even legit.
    Could you explain how I can change the calculateProbability function to a different distribution?

    Oh, and also: How can I add code tags to my replies so that it becomes more readable?

  83. Nil March 7, 2018 at 8:10 pm #

    Hi DR. Jason,

    It is a very good post.
I did not see k-fold cross validation in this post like I saw in your post on a neural network from scratch. Does it mean that Naive Bayes does not need k-fold cross validation, or does not work with it?
I ask because I am trying to use k-fold CV with Naive Bayes from scratch, but I find it difficult since we need to split the data by class to make some calculations; with a two-class classification dataset we end up with two datasets (but we have one k-fold CV function).
I am facing serious difficulties understanding k-fold CV; it seems that the implementation from scratch depends on the classifier we are using.

    If you have some answer or tips to this approach (validation on Naive Bayes with K Fold CV – from scratch – ) please let me know.


    • Jason Brownlee March 8, 2018 at 6:22 am #

      It is a good idea to use CV to evaluate algorithms including naive bayes.

  84. y April 23, 2018 at 11:34 pm #

    If the variance is equal to 0, how to deal with?

    • Jason Brownlee April 24, 2018 at 6:34 am #

      If the variance is 0, then all data in your sample has the same value.

      • y April 25, 2018 at 5:43 pm #

if ‘stdev’ is 0, how do we deal with it?

        • Jason Brownlee April 26, 2018 at 6:23 am #

          If stdev is 0 it means all values are the same and that feature should be removed.
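An alternative to removing the feature is to floor the standard deviation with a small epsilon so calculateProbability never divides by zero. A sketch; the function name and epsilon value are a judgment call, not from the tutorial:

```python
# Sketch: sample standard deviation floored by a small epsilon to avoid a
# division by zero when every value in the sample is identical.
import math

def safe_stdev(numbers, epsilon=1e-9):
    avg = sum(numbers) / float(len(numbers))
    variance = sum((x - avg) ** 2 for x in numbers) / float(len(numbers) - 1)
    return max(math.sqrt(variance), epsilon)

assert safe_stdev([5.0, 5.0, 5.0]) == 1e-9  # all identical: floored, not zero
```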

  85. kk May 29, 2018 at 4:40 pm #

    How can I retrain my model, without training it from scratch again?
    Like if I got some new label for instances, how to approach it then?

    • Jason Brownlee May 30, 2018 at 6:33 am #

      Many models can be updated with new data.

      With naive bayes, you can keep track of the frequency and likelihood of obs for discrete values or the PDF/mass function for continuous values and update them with new data.
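The running update described here can be sketched with Welford's online algorithm, which maintains a mean and variance that absorb new values one at a time without retraining from scratch (the class name is illustrative):

```python
# Sketch: Welford's online algorithm for a per-class, per-attribute Gaussian
# summary that can be updated incrementally with new data.
class RunningGaussian:
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n          # running mean
        self.m2 += delta * (x - self.mean)   # running sum of squared deviations

    def variance(self):
        # sample variance, matching the tutorial's (n - 1) denominator
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

g = RunningGaussian()
for x in [2.0, 4.0, 6.0]:
    g.update(x)
```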

  86. Manoj June 7, 2018 at 7:08 pm #

Very good program, it’s working correctly. How would I construct a Bayesian network from this Pima diabetes CSV file?

  87. Yashvardhan June 28, 2018 at 5:19 pm #

    Hey can I ask why didn’t you use a MLE estimate for the prior?

  88. Vinayak Tyagi July 9, 2018 at 6:37 pm #

It’s a Naive Bayes classification, but instead of using Bayes’ theorem we are using the Gaussian probability density function. Why?

  89. Prajna p n July 11, 2018 at 10:04 pm #

Why is the input vector [1.1, ‘?’]? It works fine when I try [1.1]. Why did we choose the number 1.1 as the parameter for the input vector?

  90. Zhenduo Wang July 19, 2018 at 1:57 am #

    It seems that your code is using GaussianNB without prior. The prior is obtained with MLE estimator of the training set. It is simply a constant multiplied by the computed likelihood probability. I tried both (with/without prior) and found that predicting with prior would give better results most of the time.

  91. ken stonecipher July 31, 2018 at 2:24 am #

Jason, why do I get the error message
AttributeError: 'dict' object has no attribute 'iteritems' when I try to run the
# Split the dataset by class values, returns a dictionary example in the Naïve Bayes chapter (Chapter 12) of the Algorithms From Scratch in Python tutorial?



    • Jason Brownlee July 31, 2018 at 6:10 am #

      Sounds like you are using Python 3 for Python 2.7 code.

  92. ken stonecipher July 31, 2018 at 2:44 am #

Jason, I figured it out. Py 3.x does not understand the .iteritems function, so when I changed it to .items it worked fine. Version difference between 3.x and 2.7.


  93. tommy July 31, 2018 at 11:14 pm #


    Thank you for the helpful post.
    When i ran the code in python 3.6 i encountered the
    “ZeroDivisionError: float division by zero” ERROR
    any advise on this?

  94. jitesh pahwa August 2, 2018 at 4:34 pm #

    thank you Jason for your awesome post.

  95. Sheshank Kumar August 25, 2018 at 2:23 pm #

I want to calculate the F1 score. I can do this for a binary classification problem, but I am confused how to do it for multi-class classification. I have calculated the confusion matrix for my dataset; it contains three different class values.

Kindly suggest.
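One common approach for multi-class F1 is macro averaging: compute precision and recall per class, one-vs-rest, from the confusion matrix, then average the per-class F1 scores. A sketch; the function name and the example confusion matrix are made up:

```python
# Sketch: macro-averaged F1 from a square confusion matrix where
# confusion[i][j] = count of instances with true class i predicted as class j.
def macro_f1(confusion):
    n = len(confusion)
    f1_scores = []
    for c in range(n):
        tp = confusion[c][c]
        fp = sum(confusion[r][c] for r in range(n)) - tp  # column minus diagonal
        fn = sum(confusion[c]) - tp                       # row minus diagonal
        precision = tp / float(tp + fp) if tp + fp else 0.0
        recall = tp / float(tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        f1_scores.append(f1)
    return sum(f1_scores) / n

confusion = [[5, 0, 0], [0, 4, 1], [0, 1, 4]]
score = macro_f1(confusion)
```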

  96. Rajkumar August 31, 2018 at 7:22 pm #

Thanks for the awesome work. P.S. the external link to Weka for naive bayes returns a 404.

    Kind Regards

  97. Sajida September 29, 2018 at 10:27 pm #

    Hi Sir

Great example. You are doing great work, thanks. Kindly also upload a similar post on hidden Markov models.

  98. Javier Lazaro October 3, 2018 at 12:12 am #

    Thanks for posting this nice algorithm explained. Nevertheless I struggled a lot until I found out that it is a Gaussian Naive Bayes version. I expected calculations of probabilities counting the prior, posterior, etc. It took me a while to figure it out. I have learnt a lot through it, though 🙂

  99. roba October 18, 2018 at 10:11 pm #

How do we fix this error in Python 2.7?

return sum(numbers) / (len(numbers))
TypeError: unsupported operand type(s) for +: 'int' and 'str'

    thank you for your awesome post…

  100. Mustajab Hussain October 21, 2018 at 10:42 pm #

    Hi Jason. I am getting this error while running your code
File "C:/Users/Mustajab/Desktop/ML Assignment/Naive", line 191, in loadCsv
dataset = list(lines)

Error: iterator should return strings, not bytes (did you open the file in text mode?)

    How can I fix this error?

  101. dingusagar November 18, 2018 at 11:37 pm #

    P(A/B) = (P(B/A)P(A)) / P(B))

    This is the formula..

In the RHS of the formula we use only the numerator, because the denominator is the same for every class and doesn’t add any extra information for determining which probability is bigger.

But I don’t think the numerator is correctly implemented in the function.
    Specifically P(A) needs to be multiplied with the product of the conditional probabilities of the individual features belonging to a particular class.

    P(A) here is our class probability. That term is not multiplied to the final product inside the calculateClassProbabilities() function.
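A sketch of scoring with the class prior P(A) included, working in log space, which also avoids numerical underflow when many conditional probabilities are multiplied. The numbers below are illustrative, not from the tutorial:

```python
# Sketch: per-class score = log(prior) + sum of log conditional probabilities,
# which is equivalent to multiplying prior * product of conditionals.
import math

def log_score(prior, conditionals):
    return math.log(prior) + sum(math.log(p) for p in conditionals)

scores = {
    0: log_score(0.65, [0.2, 0.5, 0.1]),  # 0.65 * 0.2 * 0.5 * 0.1 = 0.0065
    1: log_score(0.35, [0.3, 0.4, 0.2]),  # 0.35 * 0.3 * 0.4 * 0.2 = 0.0084
}
best = max(scores, key=scores.get)
```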

  102. Kody November 27, 2018 at 9:37 am #

    Something seems off here. Your inputs to the calculateProbability() function are x=71.5, mean=73.0, stdev=6.2. Some simple math will tell you that x is 0.24 standard deviations away from the mean. In a Gaussian distribution, you can’t get much closer than that. Yet the probability of belonging to that distribution is only 6%? Shouldn’t this be something more like 94%?
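A clarifying note on this point: the 6% figure is a density value, not a probability, so it need not be near 94% even right at the mean. Probabilities come from integrating the density over an interval, which can be checked with statistics.NormalDist (Python 3.8+):

```python
# A density value is not a probability: with mean 73.0 and stdev 6.2, the
# density at x = 71.5 is only about 0.062 even though x is close to the mean.
# A probability requires an interval, obtained here from the cdf.
from statistics import NormalDist

dist = NormalDist(73.0, 6.2)
density = dist.pdf(71.5)                          # a density, not a probability
prob_near_mean = dist.cdf(74.5) - dist.cdf(71.5)  # P(71.5 < x < 74.5)
```

Because only the relative size of the per-class scores matters for the argmax, using densities directly still yields the correct prediction.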

  103. mathis December 2, 2018 at 3:49 pm #

hello Jason, I tried to run this code with my own file but I get "ValueError: could not convert string to float: wtdeer". Do you know how I can fix it?
thank you so much

  104. FAW December 6, 2018 at 9:55 am #

    Nice tutorials Jason, however needs your attention for the typo in print statements, hope it will be fixed soon.

    Thanks anyways once again for providing such a nice explanation!

  105. faiza December 27, 2018 at 8:20 pm #

    File “C:/Users/user/Desktop/”, line 9, in
    dataset = loadCsv(filename)
    File “C:/Users/user/Desktop/”, line 4, in loadCsv
    dataset = list(lines)
    _csv.Error: iterator should return strings, not bytes (did you open the file in text mode?)
    solve this error

  106. Ahsan January 4, 2019 at 12:45 am #

Jason, I have a question. I want to predict heart disease, where the result can be one of several conditions. For example, there are 9 heart diseases (heart failure, heart stroke, and more); the model would predict from the data and generate a result like "you have heart stroke disease". So my question is: which classifier is best?

  107. Nyi Dewi Harta Putih January 6, 2019 at 1:32 pm #

In the code, can you show me which parts correspond to P(H|E), P(E|H), P(H), and P(E), as in the naive bayes classifier formula? Would you tell me? I need this reference to show my lecturer. Thank you.

  108. Michael Fu February 11, 2019 at 3:20 pm #


    I love your ‘building from scratch way’ of approaching machine learning algorithm, this is equally important as ‘understanding the algorithm’. Your implementations fulfilled the ‘building part’ which is sometimes understated in college classes.

  109. Antony Evmorfopoulos March 4, 2019 at 1:40 am #

Very thorough step-by-step guide, and your blog is one of the best out there for coding in machine learning. Naive bayes is a very strong algorithm, but I find its best usage is usually alongside other classifiers, voting between them and taking the winner as the final classifier.

  110. priya March 23, 2019 at 1:47 am #

    im getting an error
    could not convert string to float: id
    in the load Csv function

    • Jason Brownlee March 23, 2019 at 9:30 am #

      I believe the example requires Python 2. Perhaps confirm your version of Python?

  111. Rabin Ghimire March 26, 2019 at 3:54 pm #

    Why does this model predict values all 1 or all 0?

    • Jason Brownlee March 27, 2019 at 8:55 am #

      The model predicts probabilities and class values.

  112. Mhzed April 10, 2019 at 7:35 pm #

    Thanks for the in-depth tutorial.

    I re-implemented the code, but was able to get mostly 60+% accuracy. The best so far was 70% and rather a fluke. Your 76.8% result seems a bit out of my reach. The train/test data sets are randomly selected so it’s hard to be exact. I am just wondering if 60+% accuracy is to be expected or I am doing something wrong.

    • Mhzed April 10, 2019 at 7:46 pm #

      My bad, ignore my post. Mistake in code. 76% is the average result.

    • Jason Brownlee April 11, 2019 at 6:34 am #

      Double check that you copied all of the code exactly?

      • yared April 17, 2019 at 6:15 pm #

        It is not working. Could you help me, please?

  113. Yared April 17, 2019 at 5:33 pm #

    When I execute the code it shows the following error. Would you help me, please?

    Split{0}rows into train={1} and test={2} rows
    AttributeError                  Traceback (most recent call last)
         98     accuracy = getAccuracy(testSet, predictions)
         99     print('Accuracy: {0}%').format(accuracy)
    --> 100 main()

    in main()
         91     dataset = loadCsv(data)
         92     trainingSet, testSet = splitDataset(dataset, splitRatio)
    ---> 93     print('Split{0}rows into train={1} and test={2} rows').format(len(dataset), len(trainingSet), len(testSet))
         94     # prepare model
         95     summaries = summarizeByClass(trainingSet)

    AttributeError: 'NoneType' object has no attribute 'format'
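
The traceback above is a Python 2 vs. Python 3 issue: in Python 3, print(...) is a function that returns None, so chaining .format() onto it raises exactly this AttributeError. A sketch of the fix (the counts below are illustrative stand-ins for len(dataset) and so on) is to format the string first and then print it:

```python
# Example counts standing in for len(dataset), len(trainingSet), len(testSet)
dataset_len, train_len, test_len = 768, 514, 254

# Python 2-era style that breaks on Python 3, because print() returns None:
#   print('Split {0} rows into train={1} and test={2} rows').format(...)

# Python 3 style: call .format() on the string, then print the result
message = 'Split {0} rows into train={1} and test={2} rows'.format(
    dataset_len, train_len, test_len)
print(message)
```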

  114. Júnior Pires May 12, 2019 at 9:37 am #

    One question:

    To compute the Bayes rule one uses (prior probability * density function) / total probability, but in your algorithm you only calculate the density function and use it to make the predictions. Why? I'm confused.
    Thanks for listening.

    • Júnior Pires May 13, 2019 at 12:48 am #

      I realized that the denominator can be omitted, but what about the prior probability? Shouldn't I compute that too?

    • Jason Brownlee May 13, 2019 at 6:42 am #

      I removed the prior because it was a constant in this case.

      • Júnior Pires May 13, 2019 at 6:48 am #

        It was a constant because of the dataset?

        • Jason Brownlee May 13, 2019 at 6:50 am #

          Correct, even number of observations for each class, e.g. fixed prior.

          • Júnior Pires May 13, 2019 at 6:52 am #

            Thank you for the explanation. 🙂

          • Jason Brownlee May 13, 2019 at 2:31 pm #

            You’re welcome.

  115. Maysoon alkhair June 2, 2019 at 6:55 pm #

    Hi, great example. You are doing great work, thanks.

    I have a question: can you help me modify your code to calculate precision and recall as well?
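
Precision and recall can be computed from predictions with a small helper alongside the tutorial's getAccuracy; a sketch for binary 0/1 labels (the function and variable names here are my own, not from the tutorial):

```python
def precision_recall(actual, predicted, positive=1):
    """Compute precision and recall for binary classification labels."""
    tp = sum(1 for a, p in zip(actual, predicted) if a == positive and p == positive)
    fp = sum(1 for a, p in zip(actual, predicted) if a != positive and p == positive)
    fn = sum(1 for a, p in zip(actual, predicted) if a == positive and p != positive)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall
```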

  116. Matthew June 19, 2019 at 5:22 pm #

    I'm afraid your responses to others show you've fundamentally misunderstood the prior. You do need to include the prior in your calculations, because the prior is different for each class and depends on your training data: it's the fraction of cases that fall into that class, i.e. 500/768 for an outcome of 0 and 268/768 for an outcome of 1, if we used the entire data set. Imagine a case where you had one feature variable and its normal distributions were identical for each class; you'd still need to account for the ratio between the different classes when making a prediction.

    The only way you could leave out the prior would be if each class had an equal number of data points in the training data, but the likelihood of getting 257 (= floor(768 * 0.67) / 2) of each class in this instance is essentially zero.

    It's easy to check this is true: just fit scikit-learn's GaussianNB on your training data and check its score on it and your test data. If you don't include the prior for each class, your results won't match.
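
Matthew's point can be illustrated with a small sketch: the class prior scales each class's likelihood, so with identical feature distributions but an imbalanced 500/268 split, the prior alone decides the prediction (the function names and numbers below are illustrative, not from the tutorial):

```python
import math

def gaussian_pdf(x, mean, stdev):
    """Probability density of x under a normal distribution."""
    exponent = math.exp(-((x - mean) ** 2) / (2 * stdev ** 2))
    return exponent / (math.sqrt(2 * math.pi) * stdev)

def class_score(row, summaries, prior):
    """Unnormalized posterior: prior * product of per-attribute likelihoods."""
    score = prior
    for x, (mean, stdev) in zip(row, summaries):
        score *= gaussian_pdf(x, mean, stdev)
    return score

# One feature with identical per-class distributions, but imbalanced
# priors (500/768 vs 268/768): only the prior separates the classes.
summaries = [(0.0, 1.0)]
s0 = class_score([0.5], summaries, prior=500 / 768)
s1 = class_score([0.5], summaries, prior=268 / 768)
```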

  117. Sanori Dayaratne July 23, 2019 at 1:51 pm #

    Hi Jason. I need to use Naive Bayes for sentiment analysis of Sinhala (a language used in Sri Lanka). I already have a preprocessed and tagged data set of sentences. Will I be able to get results using the above code?

  118. imanuel August 5, 2019 at 6:03 pm #

    Hello, how do I apply this using Debian on a Raspberry Pi? Especially the folder/directory paths.

  119. SuleymanSuleymanzade August 23, 2019 at 9:28 am #

    Hello Jason,

    In your separateByClass function, instead of separated[vector[-1]].append(vector) there should be separated[vector[-1]].append(vector[:-1]); otherwise you append the class name to the features.
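
For context, here is a sketch of a separateByClass-style function with the commenter's suggested change applied: rows are grouped by their last value (the class label), and only the feature values are stored (whether to strip the label here or later is a design choice; the tutorial strips it during summarization instead):

```python
def separate_by_class(dataset):
    """Group rows by class label (the last value in each row),
    storing only the feature values under each label."""
    separated = {}
    for vector in dataset:
        label = vector[-1]
        separated.setdefault(label, [])
        # append the features without the trailing class label
        separated[label].append(vector[:-1])
    return separated
```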

  120. Srijtih September 1, 2019 at 7:26 pm #

    I have implemented the classifier with the same idea but my own implementation and a different dataset. Comparing the class-conditional densities, I get an accuracy above 70%. But once I try comparing posterior probabilities, the accuracy is close to 50%. Am I doing something wrong, or is it supposed to be lower?

  121. Sam September 12, 2019 at 11:17 pm #

    Thanks for the wonderful article.
    I might have missed this,
    but how do you calculate the marginal probability?

    • Jason Brownlee September 13, 2019 at 5:43 am #

      We skipped the prior as the classes were even, e.g. it was a constant.
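
To see why the marginal probability (the denominator P(E)) can be skipped: it is the same for every class on a given instance, so dividing by it never changes which class scores highest. A tiny illustration with made-up scores:

```python
# Unnormalized scores for two classes on the same instance (made-up numbers)
scores = {0: 0.03, 1: 0.012}

# Normalizing by the marginal P(E) = sum of scores over classes...
marginal = sum(scores.values())
posteriors = {c: s / marginal for c, s in scores.items()}

# ...does not change which class wins the argmax
best_unnormalized = max(scores, key=scores.get)
best_normalized = max(posteriors, key=posteriors.get)
```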

  122. vira September 16, 2019 at 5:46 pm #

    Such a wonderful basic article.
    Thank you so much, Jason.

Leave a Reply