Tutorial To Implement k-Nearest Neighbors in Python From Scratch

The k-Nearest Neighbors algorithm (or kNN for short) is an easy algorithm to understand and to implement, and a powerful tool to have at your disposal.

In this tutorial you will implement the k-Nearest Neighbors algorithm from scratch in Python (2.7). The implementation will be specific to classification problems and will be demonstrated using the Iris flowers classification problem.

This tutorial is for you if you are a Python programmer, or a programmer who can pick up Python quickly, and you are interested in how to implement the k-Nearest Neighbors algorithm from scratch.

[Image: k-Nearest Neighbors algorithm. Image from Wikipedia, all rights reserved.]

What is k-Nearest Neighbors

The model for kNN is the entire training dataset. When a prediction is required for an unseen data instance, the kNN algorithm searches through the training dataset for the k most similar instances. The prediction attribute of the most similar instances is summarized and returned as the prediction for the unseen instance.

The similarity measure is dependent on the type of data. For real-valued data, the Euclidean distance can be used. For other types of data, such as categorical or binary data, the Hamming distance can be used.

In the case of regression problems, the average of the predicted attribute may be returned. In the case of classification, the most prevalent class may be returned.

How does k-Nearest Neighbors Work

The kNN algorithm belongs to the family of instance-based, competitive learning and lazy learning algorithms.

Instance-based algorithms are those algorithms that model the problem using data instances (or rows) in order to make predictive decisions. The kNN algorithm is an extreme form of instance-based methods because all training observations are retained as part of the model.

It is a competitive learning algorithm, because it internally uses competition between model elements (data instances) in order to make a predictive decision. The objective similarity measure between data instances causes each data instance to compete to “win” or be most similar to a given unseen data instance and contribute to a prediction.

Lazy learning refers to the fact that the algorithm does not build a model until the time that a prediction is required. It is lazy because it only does work at the last second. This has the benefit of only including data relevant to the unseen data, called a localized model. A disadvantage is that it can be computationally expensive to repeat the same or similar searches over larger training datasets.

Finally, kNN is powerful because it does not assume anything about the data, other than that a distance measure can be calculated consistently between any two instances. As such, it is called non-parametric or non-linear, as it does not assume a functional form.


Classify Flowers Using Measurements

The test problem we will be using in this tutorial is iris classification.

The problem consists of 150 observations of iris flowers from three different species. Each flower has 4 measurements: sepal length, sepal width, petal length and petal width, all in the same unit of centimeters. The predicted attribute is the species, which is one of setosa, versicolor or virginica.

It is a standard dataset where the species is known for all instances. As such we can split the data into training and test datasets and use the results to evaluate our algorithm implementation. Good classification accuracy on this problem is above 90% correct, typically 96% or better.

You can download the dataset for free as iris.data; see the resources section for further details.

How to implement k-Nearest Neighbors in Python

This tutorial is broken down into the following steps:

  1. Handle Data: Open the dataset from CSV and split into test/train datasets.
  2. Similarity: Calculate the distance between two data instances.
  3. Neighbors: Locate k most similar data instances.
  4. Response: Generate a response from a set of data instances.
  5. Accuracy: Summarize the accuracy of predictions.
  6. Main: Tie it all together.

1. Handle Data

The first thing we need to do is load our data file. The data is in CSV format without a header line or any quotes. We can open the file with the open function and read the data lines using the reader function in the csv module.

Next we need to split the data into a training dataset that kNN can use to make predictions and a test dataset that we can use to evaluate the accuracy of the model.

We first need to convert the flower measures that were loaded as strings into numbers that we can work with. Next we need to split the dataset randomly into train and test datasets. A 67/33 train/test split is a standard ratio to use.

Pulling it all together, we can define a function called loadDataset that loads a CSV with the provided filename and splits it randomly into train and test datasets using the provided split ratio.
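Below is a sketch of the loadDataset function, written to run under both Python 2.7 and 3. It assumes the class label is the fifth column of each row, and it skips blank lines because the raw iris.data file typically ends with one.

    import csv
    import random

    def loadDataset(filename, split, trainingSet, testSet):
        # Load the CSV, convert the four measurements from strings to
        # floats, and randomly assign each row to train or test.
        with open(filename) as csvfile:
            for row in csv.reader(csvfile):
                if not row:
                    continue  # skip blank lines at the end of the file
                instance = [float(value) for value in row[0:4]] + [row[4]]
                if random.random() < split:
                    trainingSet.append(instance)
                else:
                    testSet.append(instance)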

Download the iris flowers dataset CSV file to the local directory. We can test this function out with our iris dataset, as follows:
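Something like the following; the exact counts will vary from run to run because the split is random:

    trainingSet = []
    testSet = []
    loadDataset('iris.data', 0.66, trainingSet, testSet)
    print('Train: ' + repr(len(trainingSet)))
    print('Test: ' + repr(len(testSet)))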

2. Similarity

In order to make predictions we need to calculate the similarity between any two given data instances. This is needed so that we can locate the k most similar data instances in the training dataset for a given member of the test dataset and in turn make a prediction.

Given that all four flower measurements are numeric and have the same units, we can directly use the Euclidean distance measure. This is defined as the square root of the sum of the squared differences between the two arrays of numbers (read that again a few times and let it sink in).

Additionally, we want to control which fields to include in the distance calculation. Specifically, we only want to include the first 4 attributes. One approach is to limit the Euclidean distance calculation to a fixed number of leading attributes, ignoring the final dimension (the class value).

Putting all of this together we can define the euclideanDistance function as follows:
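This is a minimal version, computing the distance over the first length attributes only:

    import math

    def euclideanDistance(instance1, instance2, length):
        # Sum the squared differences over the first `length` attributes,
        # then take the square root.
        distance = 0
        for x in range(length):
            distance += pow(instance1[x] - instance2[x], 2)
        return math.sqrt(distance)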

We can test this function with some sample data, as follows:
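For example, using two small made-up instances whose last attribute is a class label (the distance should be sqrt(12), roughly 3.46):

    data1 = [2, 2, 2, 'a']
    data2 = [4, 4, 4, 'b']
    distance = euclideanDistance(data1, data2, 3)
    print('Distance: ' + repr(distance))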

3. Neighbors

Now that we have a similarity measure, we can use it to collect the k most similar instances for a given unseen instance.

This is a straightforward process of calculating the distance to all instances and selecting a subset with the smallest distance values.

Below is the getNeighbors function that returns the k most similar neighbors from the training set for a given test instance (using the already defined euclideanDistance function).
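A sketch of the function, assuming the class value is stored as the last attribute of each instance so the distance calculation can skip it:

    import operator

    def getNeighbors(trainingSet, testInstance, k):
        # Compute the distance to every training instance, sort
        # ascending by distance and keep the k closest.
        distances = []
        length = len(testInstance) - 1  # exclude the class attribute
        for x in range(len(trainingSet)):
            dist = euclideanDistance(testInstance, trainingSet[x], length)
            distances.append((trainingSet[x], dist))
        distances.sort(key=operator.itemgetter(1))
        neighbors = []
        for x in range(k):
            neighbors.append(distances[x][0])
        return neighbors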

We can test out this function as follows:
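For example, with a tiny two-instance training set, the single nearest neighbor of [5, 5, 5] is the 'b' instance:

    trainSet = [[2, 2, 2, 'a'], [4, 4, 4, 'b']]
    testInstance = [5, 5, 5]
    neighbors = getNeighbors(trainSet, testInstance, 1)
    print(neighbors)  # [[4, 4, 4, 'b']]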

4. Response

Once we have located the most similar neighbors for a test instance, the next task is to devise a predicted response based on those neighbors.

We can do this by allowing each neighbor to vote for its class attribute, and taking the majority vote as the prediction.

Below is a function for getting the majority-voted response from a number of neighbors. It assumes the class is the last attribute for each neighbor.
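A sketch of getResponse; in the event of a tie the winner is whichever tied class happens to sort first:

    import operator

    def getResponse(neighbors):
        # Tally a vote for the class (last attribute) of each neighbor.
        classVotes = {}
        for x in range(len(neighbors)):
            response = neighbors[x][-1]
            if response in classVotes:
                classVotes[response] += 1
            else:
                classVotes[response] = 1
        # Sort by vote count, descending, and return the winning class.
        sortedVotes = sorted(classVotes.items(), key=operator.itemgetter(1), reverse=True)
        return sortedVotes[0][0]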

We can test out this function with some test neighbors, as follows:
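For example, two votes for 'a' against one for 'b':

    neighbors = [[1, 1, 1, 'a'], [2, 2, 2, 'a'], [3, 3, 3, 'b']]
    response = getResponse(neighbors)
    print(response)  # 'a'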

In the case of a draw, this approach still returns a single (arbitrary) response, but you could handle such cases in a specific way, such as returning no response or selecting an unbiased random response.

5. Accuracy

We have all of the pieces of the kNN algorithm in place. An important remaining concern is how to evaluate the accuracy of predictions.

An easy way to evaluate the accuracy of the model is to calculate a ratio of the total correct predictions out of all predictions made, called the classification accuracy.

Below is the getAccuracy function that sums the total correct predictions and returns the accuracy as a percentage of correct classifications.
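A sketch of getAccuracy. Note the class values are compared with == (value equality); comparing with the identity operator is can silently report 0% accuracy, a pitfall discussed in the comments below.

    def getAccuracy(testSet, predictions):
        # Percentage of test instances whose predicted class matches
        # the actual class (the last attribute).
        correct = 0
        for x in range(len(testSet)):
            if testSet[x][-1] == predictions[x]:
                correct += 1
        return (correct / float(len(testSet))) * 100.0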

We can test this function with a test dataset and predictions, as follows:
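For example, two correct predictions out of three gives 66.67%:

    testSet = [[1, 1, 1, 'a'], [2, 2, 2, 'a'], [3, 3, 3, 'b']]
    predictions = ['a', 'a', 'a']
    accuracy = getAccuracy(testSet, predictions)
    print(accuracy)  # 66.66...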

6. Main

We now have all the elements of the algorithm and we can tie them together with a main function.

Below is the complete example of implementing the kNN algorithm from scratch in Python.
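A sketch of the main function is below; together with the functions and imports defined above, it forms the complete program:

    def main():
        # prepare data
        trainingSet = []
        testSet = []
        split = 0.67
        loadDataset('iris.data', split, trainingSet, testSet)
        print('Train set: ' + repr(len(trainingSet)))
        print('Test set: ' + repr(len(testSet)))
        # generate predictions
        predictions = []
        k = 3
        for x in range(len(testSet)):
            neighbors = getNeighbors(trainingSet, testSet[x], k)
            result = getResponse(neighbors)
            predictions.append(result)
            print('> predicted=' + repr(result) + ', actual=' + repr(testSet[x][-1]))
        accuracy = getAccuracy(testSet, predictions)
        print('Accuracy: ' + repr(accuracy) + '%')

    main()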

Running the example, you will see the results of each prediction compared to the actual class value in the test set. At the end of the run, you will see the accuracy of the model. In this case, a little over 98%.

Ideas For Extensions

This section provides you with ideas for extensions that you could apply and investigate with the Python code you have implemented as part of this tutorial.

  • Regression: You could adapt the implementation to work for regression problems (predicting a real-valued attribute). The summarization of the closest instances could involve taking the mean or the median of the predicted attribute.
  • Normalization: When the units of measure differ between attributes, it is possible for attributes to dominate in their contribution to the distance measure. For these types of problems, you will want to rescale all data attributes into the range 0-1 (called normalization) before calculating similarity. Update the model to support data normalization.
  • Alternative Distance Measure: There are many distance measures available, and you can even develop your own domain-specific distance measures if you like. Implement an alternative distance measure, such as Manhattan distance or the vector dot product (a minimal sketch of Manhattan distance follows this list).
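As a starting point for the alternative distance measure idea, here is a minimal sketch of Manhattan distance (the sum of absolute differences); it is a drop-in replacement for euclideanDistance in getNeighbors:

    def manhattanDistance(instance1, instance2, length):
        # Sum the absolute differences over the first `length` attributes.
        distance = 0
        for x in range(length):
            distance += abs(instance1[x] - instance2[x])
        return distance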

There are many more extensions to this algorithm you might like to explore. Two additional ideas include support for distance-weighted contribution for the k-most similar instances to the prediction and more advanced data tree-based structures for searching for similar instances.

Resources To Learn More

This section will provide some resources that you can use to learn more about the k-Nearest Neighbors algorithm in terms of both theory of how and why it works and practical concerns for implementing it in code.



This section links to open source implementations of kNN in popular machine learning libraries. Review these if you are considering implementing your own version of the method for operational use.


You may have one or more books on applied machine learning. This section highlights the sections or chapters in common applied books on machine learning that refer to k-Nearest Neighbors.

Tutorial Summary

In this tutorial you learned about the k-Nearest Neighbor algorithm, how it works and some metaphors that you can use to think about the algorithm and relate it to other algorithms. You implemented the kNN algorithm in Python from scratch in such a way that you understand every line of code and can adapt the implementation to explore extensions and to meet your own project needs.

Below are the 5 key learnings from this tutorial:

  • k-Nearest Neighbor: A simple algorithm to understand and implement, and a powerful non-parametric method.
  • Instance-based method: Model the problem using data instances (observations).
  • Competitive-learning: Learning and predictive decisions are made by internal competition between model elements.
  • Lazy-learning: A model is not constructed until it is needed in order to make a prediction.
  • Similarity Measure: Calculating objective distance measures between data instances is a key feature of the algorithm.

Did you implement kNN using this tutorial? How did you go? What did you learn?


69 Responses to Tutorial To Implement k-Nearest Neighbors in Python From Scratch

  1. Damian Mingle September 12, 2014 at 10:22 pm #

    Jason –

    I appreciate your step-by-step approach. Your explanation makes this material accessible for a wide audience.

    Keep up the great contributions.

    • jasonb September 13, 2014 at 7:48 am #

      Thanks Damian!

  2. Pete Fry September 13, 2014 at 6:56 am #

    A very interesting and clear article. I haven’t tried it out yet but will over the weekend.

    • jasonb September 13, 2014 at 7:48 am #

      Thanks Pete, let me know how you go.

  3. Alan September 13, 2014 at 3:40 pm #

    Hey Jason, I’ve ploughed through multiple books and tutorials but your explanation helped me to finally understand what I was doing.

    Looking forward to more of your tutorials.

  4. Vadim September 15, 2014 at 8:16 pm #

    Hey Jason!
    Thank you for awesome article!
    Clear and straight forward explanation. I finaly understood the background under kNN.

    There’s some code errors in the article.
    1) in getResponse it should be “return sortedVote[0]” instead sortedVotes[0][0]
    2) in getAccuracy it should be “testSet[x][-1] IN predictions[x]” instead of IS.

    • jasonb September 16, 2014 at 8:04 am #

      Thanks Vadim!

      I think the code is right, but perhaps I misunderstood your comments.

      If you change getResponse to return sortedVote[0] you will get the class and the count. We don’t want this, we just want the class.

      In getAccuracy, I am interested in an equality between the class strings (is), not a set operation (in).

      Does that make sense?

  5. Mario September 19, 2014 at 12:29 am #

    Thank you very much for this example!

    • jasonb September 19, 2014 at 5:33 am #

      You’re welcome Mario.

  6. PVA September 25, 2014 at 4:27 pm #

    Thank you for the post on kNN implementation..

    Any pointers on normalization will be greatly appreciated ?

    What if the set of features includes fields like name, age, DOB, ID ? What are good algorithms to normalize such features ?

  7. Landry September 26, 2014 at 4:46 am #

    A million thanks !

    I’ve had so many starting points for my ML journey, but few have been this clear.

    Merci !

    • jasonb September 26, 2014 at 5:44 am #

      Glad to here it Landry!

  8. kumaran November 7, 2014 at 7:37 pm #

    when i run the code it shows

    ValueError: could not convert string to float: ‘sepallength’

    what should i do to run the program.

    please help me out as soon as early….

    thanks in advance…

    • jasonb November 8, 2014 at 2:50 pm #

      Hi kumaran,

      I believe the example code still works just fine. If I copy-paste the code from the tutorial into a new file called knn.py and download iris.data into the same directory, the example runs fine for me using Python 2.7.

      Did you modify the example in some way perhaps?

  9. kumaran November 11, 2014 at 3:51 pm #

    Hi jabson ,
    Thanks for your reply..

    I am using Anaconda IDE 3.4 .
    yes it works well for the iris dataset If i try to put some other dataset it shows value error because those datasets contains strings along with the integers..
    example forestfire datasets.

    X Y month day FFMC DMC DC ISI temp RH wind rain area
    7 5 mar fri 86.2 26.2 94.3 5.1 8.2 51 6.7 0 0
    7 4 oct tue 90.6 35.4 669.1 6.7 18 33 0.9 0 0
    7 4 oct sat 90.6 43.7 686.9 6.7 14.6 33 1.3 0 0
    8 6 mar fri 91.7 33.3 77.5 9 8.3 97 4 0.2 0
    8 6 mar sun 89.3 51.3 102.2 9.6 11.4 99 1.8 0 0

    Is it possible to classify these datasets also with your code??
    please provide me if some other classifer code example in python…

  10. sanksh November 30, 2014 at 9:09 am #

    Excellent article on knn. It made the concepts so clear.

  11. rvaquerizo December 5, 2014 at 3:18 am #

    I like how it is explained, simply and clear. Great job.

  12. Lakshminarasu Chenduri December 31, 2014 at 7:00 pm #

    Great article Jason !! Crisp and Clear.

  13. Raju Neve January 16, 2015 at 4:31 am #

    Nice artical Jason. I am a software engineer new to ML. Your step by step approach made learning easy and fun. Though Python was new to me, it became very easy since I could run small small snippet instead of try to understand the entire program in once.
    Appreciate your hardwork. Keep it up.

  14. ZHANG CHI January 29, 2015 at 2:33 pm #

    It’s really fantastic for me. I can’t find a better one

  15. ZHANG CHI January 29, 2015 at 7:34 pm #

    I also face the same problem with Kumaran. After checking, I think the problem “can’t convert string into float” is that the first row is “sepal_length” and so on. Python can’t convert it since it’s totally string. So just delete it or change the code a little.

  16. RK March 1, 2015 at 2:28 pm #


    Many thanks for this details article. Any clue for the extension Ideas?


  17. Andy March 17, 2015 at 9:29 am #

    Hi – I was wondering how we can have the data fed into the system without randomly shuffling as I am trying to make a prediction on the final line of data?

    Do we remove:

    if random.random() < split

    and replace with something like:

    if len(trainingSet)/len(dataset) < split
    # if < 0.67 then append to the training set, otherwise append to test set

    The reason I ask is that I know what data I want to predict and with this it seems that it could use the data I want to predict within the training set due to the random selection process.

    • Gerry May 26, 2015 at 2:22 pm #

      I also have the same dilemma as you, I performed trial and error, right now I cant seem to make things right which code be omitted to create a prediction.

      I am not a software engineer nor I have a background in computer science. I am pretty new to data science and ML as well, I just started learning Python and R but the experience is GREAT!

      Thanks so much for this Jason!

  18. Brian April 9, 2015 at 11:00 am #

    This article was absolutely gorgeous. As a computational physicist grad student who has taken an interest in machine learning this was the perfect level to skim, get my hands dirty and have some fun.

    Thank you so much for the article on this. I’m excited to see the rest of your site.

  19. Clinton May 22, 2015 at 12:09 am #

    Thanks for the article!

  20. Vitali July 3, 2015 at 7:26 pm #

    I wished to write my own knn python program, and that was really helpful !

    Thanks a lot for sharing this.

    One thing you didn’t mention though is how you chose k=3.

    To get a feeling of how sensitive is the accuracy % to k, i wrote a “screening” function that iterates over k on the training set using leave-one-out cross validation accuracy % as a ranking.

    Would you have any other suggestions ?

  21. Pacu Ignis July 27, 2015 at 9:50 pm #

    This is really really helpful. Thanks man !!

  22. Mark September 4, 2015 at 9:17 pm #

    An incredibly useful tutorial, Jason. Thank you for this.

    Please could you show me how you would modify your code to work with a data set which comprises strings (i.e. text) and not numerical values?

    I’m really keen to try this algorithm on text data but can’t seem to find a decent article on line.

    Your help is much appreciated.


  23. Max Buck October 3, 2015 at 7:38 am #

    Nice tutorial! Very helpful in explaining KNN — python is so much easier to understand than the mathematical operations. One thing though — the way the range function works for Python is that the final element is not included.

    In loadDataset() you have

    for x in range(len(dataset)-1):

    This should simply be:

    for x in range(len(dataset)):

    otherwise the last row of data is omitted!

  24. Azi November 5, 2015 at 9:26 am #

    Thank you so much

  25. mulkan November 7, 2015 at 1:56 pm #

    thank very much

  26. Gleb November 17, 2015 at 1:11 am #

    That’s great! I’ve tried so many books and articles to start learning ML. Your article is the first clear one! Thank you a lot! Please, keep teaching us!)

  27. Jakob November 29, 2015 at 3:25 pm #

    Hi Jason,

    Thanks for this amazing introduction! I have two questions that relate to my study on this.

    First is, how is optimization implemented in this code?

    Second is, what is the strength of the induction this algorithm is making as explained above, will this is be a useful induction for a thinking machine?

    Thank you so much!

  28. erlik December 1, 2015 at 4:31 am #

    HI jason;

    it is great tutorial it help me alot thanks for great effort but i have queastion what if i want to split the data in to randomly 100 training set and 50 test set and i want to generate in separate file with there values instead of printing total numbers? becaouse i want to test them in hugin

    thank you so much!

  29. İdil December 3, 2015 at 8:36 am #

    Hi Jason,

    It is a really great tutorial. Your article is so clear, but I have a problem.
    When I run code, I see the right classification.
    > predicted=’Iris-virginica’, actual=’Iris-virginica’
    > predicted=’Iris-virginica’, actual=’Iris-virginica’
    > predicted=’Iris-virginica’, actual=’Iris-virginica’
    > predicted=’Iris-virginica’, actual=’Iris-virginica’

    However, accuracy is 0%. I run accuracy test but there is no problem with code.
    How can I fix the accuracy? Where do I make mistake?

    Thanks for reply and your helps.

    • jxprat January 14, 2016 at 12:11 am #

      Hi, I solved this doing this:

      Originaly, on the step 5, in the function getAccuracy you have:

      for x in range(len(testSet)):
      if testSet[x][-1] is predictions[x]:
      correct += 1

      The key here is in the IF statement:

      if testSet[x][-1] is predictions[x]:

      Change “IS” to “==” so the getAccuracy now is:

      for x in range(len(testSet)):
      if testSet[x][-1] == predictions[x]:
      correct += 1

      That solve the problem and works ok!!

  30. Renjith Madhavan December 9, 2015 at 7:26 am #

    I think setting the value of K plays an important role in the accuracy of the prediction. How to determine the best value of ‘K’ . Please suggest some best practices ?

  31. Sagar kumar February 9, 2016 at 5:33 am #

    Dear, How to do it for muticlass classifcation with data in excelsheet: images of digits(not handwritten) and label of that image in corresponding next column of excel ??

    Your this tutorial is totally on numeric data, just gave me the idea with images.

  32. Jack February 24, 2016 at 8:59 am #

    Very clear explanation and step by step working make this very understandable. I am not sure why the list sortedVotes within the function getResponse is reversed, I thought getResponse is meant to return the most common key in the dictionary classVotes. If you reverse the list, doesn’t this return the least common key in the dictionary?

  33. kamal March 9, 2016 at 3:07 pm #

    I do not know how to take the k nearest neighbour for 3 classes for ties vote for example [1,1,2,2,0]. Since for two classes, with k=odd values, we do find the maximum vote for the two classes but ties happens if we choose three classes.

    Thanks in advance

  34. I.T.Cheema March 11, 2016 at 11:31 pm #

    thanks for this great effort buddy
    i have some basic questions:
    1: i opened “iris.data’ file and it is simply in html window. how to download?
    2: if do a copy paste technique from html page. where to copy paste?

    • Jason Brownlee March 12, 2016 at 8:41 am #

      You can use File->Save as in your browser to save the file or copy the text and paste it int a new file and save it as the file “iris.data” expected by the tutorial.

      I hope that helps.


  35. Hrishikesh Kulkarni March 21, 2016 at 5:00 pm #

    This is a really simple but thorough explaination. Thanks for the efforts.
    Could you suggest me how to draw a scatter plot for the 3 classes. It will be really great if you could upload the code. Thanks in advance!

  36. Mohammed Farhan April 22, 2016 at 1:34 am #

    What if we want to classify text into categories using KNN,
    e.g a given paragraph of text defines {Politics,Sports,Technology}

    I’m Working on a project to Classify RSS Feeds

  37. Lyazzat May 19, 2016 at 1:41 pm #

    How to download the file without using library csv at the first stage?

  38. Avinash June 8, 2016 at 7:00 pm #

    Nice explanation Jason.. Really appreciate your work..

  39. Agnes July 10, 2016 at 1:08 am #

    Hi! Really comprehensive tutorial, i loved it!
    What will you do if some features are more important than others to determine the right class ?

  40. Dev July 10, 2016 at 10:48 am #

    I get this error message.
    Train set: 78
    Test set: 21
    TypeError Traceback (most recent call last)
    in ()
    72 print(‘Accuracy: ‘ + repr(accuracy) + ‘%’)
    —> 74 main()

    in main()
    65 k = 3
    66 for x in range(len(testSet)):
    —> 67 neighbors = getNeighbors(trainingSet, testSet[x], k)
    68 result = getResponse(neighbors)
    69 predictions.append(result)

    in getNeighbors(trainingSet, testInstance, k)
    27 length = len(testInstance)-1
    28 for x in range(len(trainingSet)):
    —> 29 dist = euclideanDistance(testInstance, trainingSet[x], length)
    30 distances.append((trainingSet[x], dist))
    31 distances.sort(key=operator.itemgetter(1))

    in euclideanDistance(instance1, instance2, length)
    20 distance = 0
    21 for x in range(length):
    —> 22 distance += pow(float(instance1[x] – instance2[x]), 2)
    23 return math.sqrt(distance)

    TypeError: unsupported operand type(s) for -: ‘str’ and ‘str’

    Can you please help.

    Thank you

    • Jason Brownlee July 10, 2016 at 2:21 pm #

      It is not clear, it might be a copy-paste error from the post?

      • Dev July 11, 2016 at 12:40 am #

        Thank you for your answer,

        as if i can’t do the subtraction here is the error message

        TypeError: unsupported operand type(s) for -: ‘str’ and ‘str’
        and i copy/past the code directly from the tutorial

  41. temi Noah July 14, 2016 at 12:10 am #

    am so happy to be able to extend my gratitude to you.Have searched for good books to explain machine learning(KNN) but those i came across was not as clear and simple as this brilliant and awesome step by step explanation.Indeed you are a distinguished teacher

  42. tejas zarekar July 24, 2016 at 8:12 pm #

    hi Jason, i really want to get into Machine learning. I want to make a big project for my final year of computer engg. which i am currently in. People are really enervating that way by saying that its too far fetched for a bachelor. I want to prove them wrong. I don’t have much time (6 months from today). I really want to make something useful. Can you send me some links that can help me settle on a project with machine learning? PLZ … TYSM

  43. naveen August 19, 2016 at 3:38 pm #

    import numpy as np
    from sklearn import preprocessing, cross_validation, neighbors
    import pandas as pd
    df= np.genfromtxt(‘/home/reverse/Desktop/acs.txt’, delimiter=’,’)
    X= np.array(df[:,1])
    y= np.array(df[:,0])
    X_train, X_test, y_train, y_test = cross_validation.train_test_split(X,y,test_size=0.2)
    clf = neighbors.KNeighborsClassifier()
    clf.fit(X_train, y_train)

    ValueError: Found arrays with inconsistent numbers of samples: [ 1 483]

    Then I tried to reshape using this code: df.reshape((483,1))

    Again i am getting this error “ValueError: total size of new array must be unchanged”

    Advance thanks ….

  44. Carolina October 16, 2016 at 5:48 am #

    Hi Jason,

    great tutorial, very easy to follow. Thanks!

    One question though. You wrote:

    “Additionally, we want to control which fields to include in the distance calculation. Specifically, we only want to include the first 4 attributes. One approach is to limit the euclidean distance to a fixed length, ignoring the final dimension.”

    Can you explain in more detail what you mean here? Why is the final dimension ignored when we want to include all 4 attributes?

    Thanks a lot,

    • Jason Brownlee October 17, 2016 at 10:25 am #

      The gist of the paragraph is that we only want to calculate distance on input variables and exclude the output variable.

      The reason is when we have new data, we will not have the output variable, only input variables. Our job will be to find the k most similar instances to the new data and discover the output variable to predict.

      In the specific case, the iris dataset has 4 input variables and the 5th is the class. We only want to calculate distance using the first 4 variables.

      I hope that makes things clearer.

  45. Pranav Gundewar October 17, 2016 at 7:09 pm #

    Hi Jason! The steps u showed are great. Do you any article regarding the same in matlab.
    Thank you.

    • Jason Brownlee October 18, 2016 at 5:53 am #

      Thanks Pranav,

      Sorry I don’t have Matlab examples at this stage.

  46. Sara October 18, 2016 at 7:16 pm #

    Best algorithm tutorial I have ever seen! Thanks a lot!
